https://docs.apigee.com/release/notes/45100-edge-private-cloud-release-notes#new-features
Release summary
Supported software
Bug fixes
Security issues fixed
Known issues
Architectural Overview
Before installing Apigee Edge for Private Cloud, you should be familiar with the
overall organization of Edge modules and software components.
The following image shows how the different modules interact within Apigee Edge:
Apigee Edge Gateway
Edge Gateway is the core module of Apigee Edge and is the main tool for managing
your APIs. The Gateway UI provides tools for adding and configuring your APIs,
setting up bundles of resources, and managing developers and apps. The Gateway
offloads many common management concerns from your backend API. When you
add an API, you can apply policies for security, rate-limiting, mediation, caching, and
other controls. You can also customize the behavior of your API by applying custom
scripts, making call outs to third-party APIs, and so on.
Software Components
Edge Gateway comprises several software components; these may all be installed on a single host or distributed among several hosts.
As data passes through Apigee Edge, several default types of information are
collected including URL, IP, user ID for API call information, latency, and error data.
You can use policies to add other information, such as headers, query parameters,
and portions of a request or response extracted from XML or JSON.
All data is pushed to Edge Analytics where it is maintained by the analytics server in
the background. Data aggregation tools can be used to compile various built-in or
custom reports.
Apigee Developer Services Portal
Apigee Developer Services portal (or simply, the portal) is a template portal for
content and community management. It is based on the open source Drupal project.
The default setup allows creating and managing API documentation, forums, and
blogs. A built-in test console allows testing of APIs in real time from within the
portal.
Apart from content management, the portal has various features for community
management such as manual/automatic user registration and moderating user
comments. A Role-Based Access Control (RBAC) model controls access to
features on the portal. For example, you can enable controls that allow registered
users to create forum posts, use test consoles, and so on.
The Apigee Edge for Private Cloud deployment script does not include portal
deployment. The portal deployment on-premises is supported by its own installation
script. For more information, see Install the portal.
Edge Monetization Services is a powerful extension to Apigee Edge for Private
Cloud. As an API provider, you need an easy-to-use and flexible way to monetize your
APIs so that you can generate revenue for the use of those APIs. Monetization
Services solves those requirements. Using Monetization Services, you can create a
variety of rate plans that charge developers for the use of your APIs bundled into
packages. The solution offers an extensive degree of flexibility: you can create pre-
paid plans, post-paid plans, fixed-fee plans, variable rate plans, freemium plans, plans
tailored to specific developers, plans covering groups of developers, and more.
You can also set limits to help control and monitor the performance of your API
packages and allow you to react accordingly, and you can set up automatic
notifications for when those limits are approached or reached.
Note: The core Apigee Edge (Gateway and Analytics) is a prerequisite for using Monetization
Services.
For more information on getting started with Monetization Services using Edge UI,
see Get started using monetization.
On-Premises deployment
An on-premises installation of core Apigee Edge for Private Cloud (Gateway and
Analytics) provides the infrastructure required to run API traffic on behalf of the on-
premises client's customers.
The following videos introduce you to the deployment models for Apigee Edge for
Private Cloud:
The following video gives you high-level sizing guidance for your installation:
For all installation scenarios described in Installation topologies, the following tables
list the minimum hardware requirements for the installation components.
In these tables the hard disk requirements are in addition to the hard disk space
required by the operating system. Depending on your applications and network
traffic, your installation might require more or fewer resources than listed below.
Installation Component | RAM | CPU | Minimum hard disk
Cassandra | 16GB | 8-core | 250GB local storage with SSD supporting 2000 IOPS
● Less than 250 TPS: 8GB, 4-core can be considered with managed network storage*** supporting 1000 IOPS or higher
● Greater than 250 TPS: 16GB, 8-core, managed network storage*** supporting 1000 IOPS or higher
● Greater than 1000 TPS: 16GB, 8-core, managed network storage*** supporting 2000 IOPS or higher
● Greater than 2000 TPS: 32GB, 16-core, managed network storage*** supporting 2000 IOPS or higher
● Greater than 4000 TPS: 64GB, 32-core, managed network storage*** supporting 4000 IOPS or higher
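The tiers above can be sketched as a small lookup function. This is an illustration only; the thresholds are read from the list above, and how exact boundary values (e.g. exactly 250 TPS) are binned is an assumption.

```shell
# Pick a Postgres sizing tier for a given transactions-per-second figure.
# Thresholds follow the list above; boundary handling is an assumption.
sizing_for_tps() {
  tps=$1
  if   [ "$tps" -lt 250 ];  then echo "8GB, 4-core, 1000 IOPS"
  elif [ "$tps" -le 1000 ]; then echo "16GB, 8-core, 1000 IOPS"
  elif [ "$tps" -le 2000 ]; then echo "16GB, 8-core, 2000 IOPS"
  elif [ "$tps" -le 4000 ]; then echo "32GB, 16-core, 2000 IOPS"
  else                           echo "64GB, 32-core, 4000 IOPS"
  fi
}

sizing_for_tps 1500   # prints: 16GB, 8-core, 2000 IOPS
```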
** The Postgres hard disk value is based on the out-of-the-box analytics captured by Edge. If you add custom values to the analytics data, then
these values should be increased accordingly. Use the following formula to estimate the required storage:

(bytes/request) * (requests/second) * (seconds/hour) * (peak hours/day) * (days/month) * (months retention)

For example:

(2K bytes) * (100 req/sec) * (3600 secs/hr) * (18 peak hours/day) * (30 days/month) * (3 months
retention) ≈ 1.2 TB
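The example figures above can be worked through as a quick shell sketch (the values are the illustrative ones from the example, not recommendations):

```shell
# Estimate Postgres analytics storage from the example figures above
BYTES_PER_RECORD=2000     # ~2K bytes per analytics record
REQ_PER_SEC=100
SECS_PER_HOUR=3600
PEAK_HOURS_PER_DAY=18
DAYS_PER_MONTH=30
MONTHS_RETENTION=3

TOTAL_BYTES=$((BYTES_PER_RECORD * REQ_PER_SEC * SECS_PER_HOUR * \
               PEAK_HOURS_PER_DAY * DAYS_PER_MONTH * MONTHS_RETENTION))
TOTAL_GB=$((TOTAL_BYTES / 1000000000))
echo "Estimated storage: ${TOTAL_GB} GB"   # roughly 1166 GB, i.e. ~1.2 TB
```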
*** Managed network storage offers the following advantages:
● It gives the ability to dynamically scale up the storage size if and when required.
● Network IOPS can be adjusted on the fly in most of today's environment/storage/network subsystems.
● Storage-level snapshots can be enabled as part of backup and recovery solutions.
In addition, the following lists the hardware requirements if you want to install the
Monetization Services (not supported on All-in-One installation):
Note:
● If the root file system is not large enough for the installation, Apigee recommends that
you place the data onto a larger disk.
● If an older version of Apigee Edge for Private Cloud was installed on the machine, ensure
that you delete the /tmp/java directory before a new installation.
● The system-wide temporary folder /tmp needs execute permissions in order to start
Cassandra.
● If user 'apigee' was created prior to the installation, ensure that /home/apigee exists as
home directory and is owned by apigee:apigee.
Java
You need a supported version of Java 1.8 installed on each machine prior to the
installation. Supported JDKs are listed in Supported software and supported
versions.
Ensure that the JAVA_HOME environment variable points to the root of the JDK for
the user performing the installation.
Note: The Edge installer can install Java 1.8 and set JAVA_HOME for you as part of the apigee-
setup installation process. See Install the Edge apigee-setup utility for more.
SELinux
Depending on your settings for SELinux, Edge can encounter issues with installing
and starting Edge components. If necessary, you can disable SELinux or set it to
permissive mode during installation, and then re-enable it after installation. See
Install the Edge apigee-setup utility for more.
Installation directory
By default, the installer writes all files to the /opt/apigee directory; you cannot
change this location. However, you can create a symlink to map /opt/apigee to
another location, as described in Creating a symlink from /opt/apigee.
Before you create the symlink, you must first create a user and group named
"apigee". This is the same group and user created by the Edge installer.
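The steps above can be sketched as follows. On a real host you would run these as root with real paths (for example, groupadd apigee && useradd -g apigee apigee, then ln -s /srv/apigee /opt/apigee); the sketch below uses temporary paths so it is safe to try anywhere.

```shell
# Sketch: map /opt/apigee to a larger volume via a symlink.
# Temporary paths are used as stand-ins for /srv/apigee and /opt/apigee.
TARGET=$(mktemp -d)        # stand-in for the directory on the larger disk
LINK="$TARGET.link"        # stand-in for /opt/apigee
ln -s "$TARGET" "$LINK"    # on a real install: ln -s /srv/apigee /opt/apigee
readlink "$LINK"           # prints the target directory
```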
Network setting
Apigee recommends that you check the network settings prior to the installation.
The installer expects that all machines have fixed IP addresses. Use the following
commands to validate the setting:
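The validation commands were not preserved in this copy; the checks typically used for host-name and address configuration are the following (exact output depends on your host):

```shell
# Print the configured host name
hostname

# Print the IP address(es) the host name resolves to;
# the installer expects a fixed address here
hostname -i
```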
Depending on your operating system type and version, you may need to edit
/etc/hosts and /etc/sysconfig/network if the hostname is not set correctly.
See the documentation for your specific operating system for more information.
If a server has multiple interface cards, the "hostname -i" command returns a space-
separated list of IP addresses. By default, the Edge installer uses the first IP address
returned, which might not be correct in all situations. As an alternative, you can set
the following property in the installation configuration file:
ENABLE_DYNAMIC_HOSTIP=y
With that property set to "y", the installer prompts you to select the IP address to use
as part of the install. The default value is "n". See Edge Configuration File Reference
for more.
Warning: If you set ENABLE_DYNAMIC_HOSTIP=y, ensure that your property file does not set
HOSTIP.
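For illustration, the installer's default behavior of taking the first address returned by "hostname -i" can be sketched as below; the addresses are made up for the example.

```shell
# hostname -i can return a space-separated list on multi-NIC hosts;
# by default the installer takes the first entry (illustrative addresses)
IP_LIST="192.0.2.10 10.0.0.5"
FIRST_IP=${IP_LIST%% *}    # keep everything before the first space
echo "$FIRST_IP"           # prints: 192.0.2.10
```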
TCP Wrappers
TCP Wrappers can block communication of some ports and can affect OpenLDAP,
Postgres, and Cassandra installation. On those nodes, check /etc/hosts.allow
and /etc/hosts.deny to ensure that there are no port restrictions on the required
OpenLDAP, Postgres, and Cassandra ports.
iptables
Validate that there are no iptables policies preventing connectivity between nodes on
the required Edge ports. If necessary, you can stop iptables during installation using
the command:
sudo /etc/init.d/iptables stop
On CentOS 7.x, the firewall is managed by firewalld; stop it with:
sudo systemctl stop firewalld
Directory access
The following table lists directories on Edge nodes that have special requirements
from Edge processes:
Cassandra
All Cassandra nodes must be connected to a ring. Cassandra stores data replicas on
multiple nodes to ensure reliability and fault tolerance. The replication strategy for
each Edge keyspace determines the Cassandra nodes where replicas are placed. For
more, see About Cassandra replication factor and consistency level.
Cassandra automatically adjusts its Java heap size based on the available memory.
For more, see Tuning Java resources in the event of a performance degradation or
high memory consumption.
After installing the Edge for Private Cloud, you can check that Cassandra is
configured correctly by examining the
/opt/apigee/apigee-cassandra/conf/cassandra.yaml file. For example,
ensure that the Edge for Private Cloud installation script set the following properties:
● cluster_name
● initial_token
● partitioner
● seeds
● listen_address
● rpc_address
● snitch
PostgreSQL database
After you install Edge, you can adjust the following PostgreSQL database settings
based on the amount of RAM available on your system:
Note: These settings assume that the PostgreSQL database is only used for Edge analytics, and
not for any other purpose.
System limits
Ensure that you have set the following system limits on Cassandra and Message
Processor nodes:
● On Cassandra nodes, set soft and hard memlock, nofile, and address space
(as) limits for the installation user (default is "apigee") in
/etc/security/limits.d/90-apigee-edge-limits.conf as shown
below:

apigee soft memlock unlimited
apigee hard memlock unlimited
apigee soft nofile 32768
apigee hard nofile 65536
apigee soft as unlimited
apigee hard as unlimited
apigee soft nproc 32768
apigee hard nproc 65536
● For more information, see Recommended production settings in the Apache
Cassandra documentation.
● On Message Processor nodes, set the maximum number of open file
descriptors to 64K in /etc/security/limits.d/90-apigee-edge-limits.conf
as shown below:

apigee soft nofile 32768
apigee hard nofile 65536

● If necessary, you can raise that limit; for example, if you have a large number
of temporary files open at any one time.
● If you ever see the following error in a Router or Message Processor
system.log, your file descriptor limits may be set too low:

"java.io.IOException: Too many open files"

● You can check your user limits by running:

# su - apigee
$ ulimit -n
100000

● If you are still reaching the open files limits after setting the file descriptor
limits to 100000, open a ticket with Apigee Edge Support for further
troubleshooting.
To update NSS, run:
sudo yum update nss
AWS AMI
If you are installing Edge on an AWS Amazon Machine Image (AMI) for Red Hat
Enterprise Linux 7.x, you must first run the following command:
yum-config-manager --enable rhui-REGION-rhel-server-extras rhui-REGION-rhel-server-optional
Tools
The installer uses the following UNIX tools in the standard version as provided by
EL5 or EL6.
Note:
● The executable for the useradd tool is located in /usr/sbin and for chkconfig in
/sbin.
● With sudo access, you can retain the environment of the calling user; for
example, you would usually run sudo command or sudo
PATH=$PATH:/usr/sbin:/sbin command.
● Ensure that you have patch installed prior to a service pack (patch) installation.
ntpdate
Apigee recommends that you keep your servers' clocks synchronized. If not already
configured, the ntpdate utility can serve this purpose; it verifies whether servers
are time-synchronized. You can use yum install ntp to install the utility.
Time synchronization is particularly important for replicated OpenLDAP setups.
Note that you should set the server time zone to UTC.
openldap 2.4
The on-premises installation requires OpenLDAP 2.4. If your server has an Internet
connection, then the Edge install script downloads and installs OpenLDAP. If your
server does not have an Internet connection, you must ensure that OpenLDAP is
already installed before running the Edge install script. On RHEL/CentOS, you can run
yum install openldap-clients openldap-servers to install the
OpenLDAP.
For 13-host installations, and 12-host installations with two data centers, OpenLDAP
replication is required because there are multiple nodes hosting OpenLDAP.
A router in a VM can expose multiple virtual hosts (as long as they differ from one
another in their host alias or in their interface port).
Just as a naming example, a single physical server A might be running two VMs,
named "VM1" and "VM2". Let's assume "VM1" exposes a virtual Ethernet interface,
which gets named "eth0" inside the VM, and which is assigned IP address
111.111.111.111 by the virtualization machinery or a network DHCP server; and
then assume VM2 exposes a virtual Ethernet interface also named "eth0" and it gets
assigned an IP address 111.111.111.222.
We might have an Apigee router running in each of the two VMs. The routers expose
virtual host endpoints as in this hypothetical example:
The Apigee router in VM1 exposes three virtual hosts on its eth0 interface (which
has some specific IP address), api.mycompany.com:80,
api.mycompany.com:443, and test.mycompany.com:80.
The router in VM2 exposes api.mycompany.com:80 (same name and port as
exposed by VM1).
The physical host's operating system might have a network firewall; if so, that
firewall must be configured to pass TCP traffic bound for the ports being exposed on
the virtualized interfaces (111.111.111.111:{80, 443} and
111.111.111.222:80). In addition, each VM's operating system may provide its
own firewall on its eth0 interface and these too must allow ports 80 and 443 traffic to
connect.
The basepath is the third component involved in routing API calls to different API
proxies that you may have deployed. API proxy bundles can share an endpoint if they
have different basepaths. For example, one basepath can be defined as
http://api.mycompany.com:80/ and another defined as
http://api.mycompany.com:80/salesdemo.
In this case, you need a load balancer or traffic director of some kind splitting the
http://api.mycompany.com:80/ traffic between the two IP addresses
(111.111.111.111 on VM1 and 111.111.111.222 on VM2). This function is
specific to your particular installation, and is configured by your local networking
group.
The basepath is set when you deploy an API. From the above example, you can
deploy two APIs, mycompany and testmycompany, for the organization
mycompany-org with the virtual host that has the host alias of
api.mycompany.com and the port set to 80. If you do not declare a basepath in the
deployment, then the router does not know which API to send incoming requests to.
However, if you deploy the API testmycompany with the base URL of /salesdemo,
then users access that API using http://api.mycompany.com:80/salesdemo.
If you deploy your API mycompany with the base URL of / then your users access
the API by the URL http://api.mycompany.com:80/.
Licensing
Each installation of Edge requires a unique license file that you obtain from Apigee.
You will need to provide the path to the license file when installing the management
server, for example /tmp/license.txt.
If the license file is valid, the Management Server validates its expiration date and
the allowed Message Processor (MP) count. If any of the license settings has
expired, you can find the logs in the following location:
/opt/apigee/var/log/edge-management-server/logs. In this case, contact
Apigee Edge Support for migration details.
If you do not yet have a license, contact Apigee Sales.
Port requirements
The need to manage the firewall goes beyond just the virtual hosts; both VM and
physical host firewalls must allow traffic for the ports required by the components to
communicate with each other.
Port diagrams
The following images show the port requirements for both a single data center and
multiple data center configuration:
The following image shows the port requirements for each Edge component in a
single data center configuration:
If you install the 12-node clustered configuration with two data centers, ensure that
the nodes in the two data centers can communicate over the ports shown below:
Single data center
Notes on this diagram:
● The ports prefixed by "M" are ports used to manage the component and must
be open on the component for access by the Management Server.
● The Edge UI requires access to the Router, on the ports exposed by API
proxies, to support the Send button in the trace tool.
● Access to JMX ports can be configured to require a username/password. See
How to Monitor for more information.
● You can optionally configure TLS/SSL access for certain connections, which
can use different ports. See TLS/SSL for more.
● You can configure the Management Server and Edge UI to send emails
through an external SMTP server. If you do, you must ensure that the
Management Server and UI can access the necessary port on the SMTP
server (not shown). For non-TLS SMTP, the port number is typically 25. For
TLS-enabled SMTP, it is often 465, but check with your SMTP provider.
Multiple data centers
Note that:
● All Management Servers must be able to access all Cassandra nodes in all
other data centers.
● All Message Processors in all data centers must all be able to access each
other over port 4528.
● The Management Server must be able to access all Message Processors over
port 8082.
● All Management Servers and all Qpid nodes must be able to access Postgres
in all other data centers.
● For security reasons, other than the ports shown above and any others
required by your own network requirements, no other ports should be open
between data centers.
By default, communication among components is not encrypted. You can add
encryption by installing Apigee mTLS. For more information, see Introduction to
Apigee mTLS.
Port details
The table below describes the ports that need to be opened in firewalls, by Edge
component:
Standard HTTP ports: 80 and 443, plus any other ports you use for virtual hosts.

To check whether the Router is reachable, you can run:

curl -v http://routerIP:15999/v1/servers/self/reachable

If the Router is reachable, the request returns HTTP 200.
The next table shows the same ports, listed numerically, with the source and
destination components:
virtual_host_port | HTTP, plus any other ports you use for virtual host API call
traffic. Ports 80 and 443 are most commonly used; the Router can terminate
TLS/SSL connections. | Source: external client (or load balancer) | Destination:
listener on the Router
To prevent this situation, Apigee recommends that the Cassandra server, Message
Processor, and Routers be in the same subnet so that a firewall is not involved in the
deployment of these components.
If a firewall is between the Router and Message Processors, and has an idle TCP
timeout set, our recommendation is to do the following:
Installation topologies
This section describes the Edge installation topologies (i.e., the node configurations
supported by Edge).
Introductory video
Topology | Nodes | Installation Instructions
Non-production Topologies
Production topologies
* These configurations do not include the Apigee Developer Services portal (or simply, the portal). For more information, see
Portal overview.
For Apigee Edge for Private Cloud, to plan out custom topology requirements, please
contact your Sales Rep or Account Executive to engage with Apigee Edge
Professional Services and Customer Success. Performance varies by use case, so
we recommend setting up a performance environment to analyze and tune your
performance before rolling changes out to your production environment.
Additional notes:
NOTE: This scenario combines cluster and Gateway components to reduce the number of
servers used.
To achieve optimal performance, the cluster can also be deployed on three different servers. This
scenario also introduces a master-standby replication between two Postgres nodes if analytics
statistics are mission critical.
Additional notes:
NOTE: This scenario introduces a master-standby replication between two Postgres nodes if
Analytics statistics are mission critical.
Additional notes:
● While this topology represents the minimum number of nodes for a production
installation, it is not optimal for performance.
● You can expand this topology to support high availability, as described in
Adding a data center.
● In this topology, Routers and Message Processors are hosted on the same
nodes and may result in “noisy neighbor” problems.
● This topology is limited to a 3-node Cassandra setup with a quorum of two.
As a result, your ability to take a node out for maintenance is limited.
NOTE: This scenario uses master-master OpenLDAP replication and master-standby Postgres
replication in one datacenter setup.
Additional notes:
NOTE: This scenario uses master-master OpenLDAP replication and master-standby Postgres
replication (across two data centers).
Additional notes:
The following videos introduce you to the requirements and process for installing a
single-node Edge installation:
A full production installation of Edge requires multiple machines. See Installation topologies for
examples of production installations.
After you install Edge for the Private Cloud on the host machine, you can optionally
install Apigee Developer Services portal (or simply, the portal) on its own host
machine.
Licensing
Each installation of Edge requires a unique license file that you obtain from Apigee. If
you do not yet have a license, contact Sales here.
Requirement | Description | Test

License file | If you are a prospect evaluating Apigee, please contact Apigee Sales
here. If you are an existing Apigee customer, please contact your Apigee
representative. | —

iptables disabled | Validate that no iptables policies block connectivity on the
required Edge ports. | iptables -nvL

SELinux disabled or permissive | Disable SELinux or set it to permissive mode
during installation; you can re-enable it afterwards. | setenforce 0 (re-enable
with setenforce 1)

Edge installed on the host | Ensure that you have already installed Edge on the
host machine. | See System requirements for Edge above.
By default, Cassandra installs with a single user account:
● username = 'cassandra'
● password = 'cassandra'
You can use this account, set a different password for this account, or create a new
Cassandra user. Add, remove, and modify users by using the Cassandra
CREATE/ALTER/DROP USER statements.
NOTE: Use this procedure when installing Cassandra by using the "-p c", "-p ds", "-p sa", "-p aio", "-
p asa", and "-p ebp" options.
The following Edge components connect to Cassandra:
● Management Server
● Message Processors
● Routers
● Qpid servers
● Postgres servers
Therefore, when you install these components, you must set the following properties
in the configuration file to specify the Cassandra credentials:
CASS_USERNAME=cassandra
CASS_PASSWORD=cassandra
You can change the Cassandra credentials after installing Cassandra. However, if
you have already installed the Management Server, Message Processors, Routers,
Qpid servers, or Postgres servers, you must also update those components to use
the new credentials.
1. Log in to any one Cassandra node using the cqlsh tool and the default
credentials. You only have to change the password on one node and it will be
broadcast to all Cassandra nodes in the ring:

/opt/apigee/apigee-cassandra/bin/cqlsh cassIP 9042 -u cassandra -p cassandra

Where:
a. cassIP is the IP address of the Cassandra node.
b. 9042 is the default Cassandra port.
c. The default user is cassandra.
d. The default password is cassandra. If you changed the password
previously, use the current password. If the password contains any
special characters, wrap it in single quotes.

2. Execute the following command at the cqlsh> prompt to update the
password:

ALTER USER cassandra WITH PASSWORD 'NEW_PASSWORD';

3. Exit the cqlsh tool:

exit

4. If you have not yet installed the Management Server, Message Processors,
Routers, Qpid servers, or Postgres servers, set the following properties in the
config file and then install those components:

CASS_USERNAME=cassandra
CASS_PASSWORD=NEW_PASSWORD

5. If you have already installed the Management Server, Message Processors,
Routers, Qpid servers, or Postgres servers, then see Resetting Edge
Passwords for the procedure to update those components to use the new
password.
● Update all Edge components that connect to Cassandra with the Cassandra
username and password.
● Enable authentication on all Cassandra nodes, and set the Cassandra
username and password on any one node. You only have to change the
credentials on one Cassandra node and they will be broadcast to all
Cassandra nodes in the ring.
Use the following procedure to update all Edge components that communicate with
Cassandra with the new credentials. Note that you do this step before you actually
update the Cassandra credentials:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server
/opt/apigee/apigee-service/bin/apigee-service edge-router
Enable authentication
Use the following procedure to enable Cassandra authentication and set the
username and password:
IP1=192.168.1.1
IP2=192.168.1.2
IP3=192.168.1.3
HOSTIP=$(hostname -i)
# Cassandra port
If the primary node ever fails, you can promote the standby server to the primary. See
Handling a PostgreSQL Database Failover for more information.
Note: Primary-standby replication is not supported for the Developer Services portal. The portal
only supports a single Postgres node.
Note: Apigee strongly recommends that you use IP addresses rather than host names for the
PG_MASTER and PG_STANDBY properties in your silent configuration file. In addition, you should
be consistent on both nodes.
If you use host names rather than IP addresses, you must be sure that the host names
properly resolve using DNS.
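One quick way to confirm that a host name resolves is getent; localhost is used below as a stand-in for your actual Postgres host name.

```shell
# Verify that a host name resolves before using it for PG_MASTER or PG_STANDBY.
# Replace localhost with your real Postgres host name.
PG_HOST=localhost
getent hosts "$PG_HOST"   # prints the resolved address(es); empty means no resolution
```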
The installer automatically configures the two Postgres nodes to function as
primary-standby with replication.
1. Identify which Postgres node will be the primary and which will be the standby
server.

2. On the primary node, edit the config file to set:

PG_MASTER=IP_OR_DNS_OF_NEW_PRIMARY
PG_STANDBY=IP_OR_DNS_OF_NEW_STANDBY

3. Enable replication on the new primary:

/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup-replication-on-master -f configFile

4. On the standby node, edit the config file to set:

PG_MASTER=IP_OR_DNS_OF_NEW_PRIMARY
PG_STANDBY=IP_OR_DNS_OF_NEW_STANDBY

5. Stop the standby node:

/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql stop

6. On the standby node, delete any existing Postgres data:

rm -rf /opt/apigee/data/apigee-postgresql/

Note: If necessary, you can back up this data before deleting it.

7. Configure the standby node:

/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup-replication-on-standby -f configFile
Note: This page describes how to configure a virtual host that supports the HTTP protocol so
that you can get an Edge installation up and running quickly. To define a virtual host that
supports the HTTPS protocol over TLS, see Configuring virtual hosts for the Private Cloud.
When you create the virtual host, you must specify the following information:
● The name of the virtual host that you use to reference it in your API proxies.
● The port on the Router for the virtual host. Typically these ports start at 9001
and increment by one for every new virtual host.
● The host alias of the virtual host. Typically the DNS name of the virtual host.
For example, in a config file passed to the setup-org command, you can specify
this information as:
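A minimal sketch of that config-file entry follows; the property names assume the standard Edge onboarding configuration reference, and the alias and port values are illustrative.

```shell
# Virtual host entry in the setup-org config file (illustrative values)
VHOST_PORT=9001                    # Router port for the virtual host
VHOST_NAME=default                 # name used to reference it in API proxies
VHOST_ALIAS="myapis.apigee.net"    # host alias, typically the DNS name
```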
The Edge Router compares the Host header of the incoming request to the list of
available host aliases as part of determining the API proxy that handles the request.
When making a request through a virtual host, either specify a domain name that
matches the host alias of a virtual host, or specify the IP address of the Router and
the Host header containing the host alias.
For example, if you created a virtual host with a host alias of myapis.apigee.net on
port 9001, then a cURL request to an API through that virtual host could use one of
the following forms:

curl http://myapis.apigee.net:9001/proxy-base-path/resource-path
curl -H 'Host: myapis.apigee.net' http://routerIP:9001/proxy-base-path/resource-path
Options when you do not have a DNS entry for the virtual host
One option when you do not have a DNS entry is to set the host alias to the IP
address of the Router and port of the virtual host, as routerIP:port. For example:
VHOST_ALIAS=192.168.1.31:9001
curl http://routerIP:9001/proxy-base-path/resource-path
This option is preferred because it works well with the Edge UI.
If you have multiple Routers, add a host alias for each Router, specifying the IP
address of each Router and port of the virtual host:
# Specify the IP and port of each router as a space-separated list enclosed in quotes:
# VHOST_ALIAS="192.168.1.31:9001 192.168.1.32:9001"
Alternatively, you can set the host alias to a value, such as temp.hostalias.com.
Then, you have to pass the Host header on every request:

curl -H 'Host: temp.hostalias.com' http://routerIP:9001/proxy-base-path/resource-path
Or, add the host alias to your /etc/hosts file. For example, add this line to
/etc/hosts:
192.168.1.31 temp.hostalias.com
NOTE: The instructions in this section are for new installations only. Migrating to a rack aware
Cassandra configuration from an existing installation is not currently documented or supported.
For more information on why making your Cassandra ring rack aware is important,
see the following resources:
What is a rack?
A Cassandra rack is a logical grouping of Cassandra nodes within the ring.
Cassandra uses racks so that it can ensure replicas are distributed among different
logical groupings. As a result, operations are sent to not just one node, but multiple
nodes, each on a separate rack, providing greater fault tolerance and availability.
The examples in this section use three Cassandra racks, which is the number of
racks that are supported by Apigee in production topologies.
The default installation of Cassandra in Apigee Edge for Private Cloud assumes a
single logical rack and places all the nodes in a data center within it. Although this
configuration is simple to install and manage, it is susceptible to failure if an
operation fails on one of those nodes.
The following image shows the default configuration of the Cassandra ring:
(Figure 1) Default configuration: All nodes on a single rack
In a more robust configuration, each node would be assigned to a separate rack and
operations would also execute on replicas on each of those racks.
The following image shows a 3-node ring. This image shows the order in which
operations are replicated across the ring (clockwise) and highlights the fact that no
two nodes are on the same rack:
In this configuration, operations are sent to a node but are also sent to replicas of
that node on other racks (in clockwise order).
1. Before you run the installer, log in to the Cassandra node and open the
following silent configuration file for editing:
/opt/silent.conf
Create the file if it does not exist, and be sure to make the "apigee" user an
owner.
2. Edit the CASS_HOSTS property, a space-separated list of IP addresses (not
DNS or hostname entries) that uses the following syntax:
CASS_HOSTS="IP_address:data_center_number,rack_number [...]"
The default value is a three-node Cassandra ring with each node assigned to
rack 1 and data center 1, as the following example shows:
CASS_HOSTS="IP1:1,1 IP2:1,1 IP3:1,1"
3. Change the rack assignments so that node 2 is assigned to rack 2 and node 3
is assigned to rack 3, as the following example shows:
CASS_HOSTS="IP1:1,1 IP2:1,2 IP3:1,3"
By changing the rack assignments, you instruct Cassandra to create two
additional logical groupings (racks), which then provide replicas that receive
all operations received by the first node.
For more information on using the CASS_HOSTS configuration property, see
Edge Configuration File Reference.
4. Save your changes to the configuration file and execute the following
command to install Cassandra with your updated configuration:
/opt/apigee/apigee-setup/bin/setup.sh -p c -f path/to/silent/config
For example:
/opt/apigee/apigee-setup/bin/setup.sh -p c -f /opt/silent.conf
5. Repeat this procedure for each Cassandra node in the ring, in the order in
which the nodes are assigned in the CASS_HOSTS property. In this case, you
must install Cassandra in the following order:
a. Node 1 (IP1)
b. Node 2 (IP2)
c. Node 3 (IP3)
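Before running the installer, you can sanity-check the rack-aware CASS_HOSTS value. The following is a minimal sketch (the IP addresses are placeholders, not values from this document); it verifies that every entry carries an explicit data_center,rack suffix and prints each node's rack:

```shell
# Sanity-check a rack-aware CASS_HOSTS value (placeholder IPs).
# Each entry must use the form IP:data_center,rack.
CASS_HOSTS="192.168.1.1:1,1 192.168.1.2:1,2 192.168.1.3:1,3"
for entry in $CASS_HOSTS; do
  case "$entry" in
    *:[0-9]*,[0-9]*)
      ip=${entry%%:*}       # strip the :dc,rack suffix
      rack=${entry##*,}     # keep only the rack number
      echo "$ip is in rack $rack"
      ;;
    *)
      echo "ERROR: $entry is missing the :data_center,rack suffix"
      ;;
  esac
done
```

A three-rack assignment should print a different rack for each node; any entry flagged with ERROR still uses the default single-rack form.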
/opt/apigee/apigee-cassandra/bin/nodetool status
Datacenter: dc-1
========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
If you enabled JMX authentication for Cassandra, you must also pass your username
and password to nodetool. For more information, see Use nodetool to manage
cluster nodes.
The following image shows the order in which operations are replicated across the
ring (clockwise) and highlights the fact that during replication, no two adjacent
nodes are on the same rack:
(Figure 3) 6-node Cassandra ring: Two nodes on each of three racks
In this configuration, each node has two more replicas: one in each of the other two
racks. For example, node 1 in rack 1 has a replica in Rack 2 and Rack 3. Operations
sent to node 1 are also sent to the replicas in the other racks, in clockwise order.
As with a three-node ring, you must install Cassandra in the same order in which the
nodes appear in the CASS_HOSTS property:
1. Node 1 (IP1)
2. Node 4 (IP4)*
3. Node 2 (IP2)
4. Node 5 (IP5)
5. Node 3 (IP3)
6. Node 6 (IP6)
* Make your changes in the silent configuration file before running the setup utility on
the fourth node (the second machine in the Cassandra installation order).
NOTE: The ordering and placement of nodes is very important. If done incorrectly, you might end
up with a ring that is imbalanced or has hotspots that do not provide the desired amount of fault
tolerance.
Expand to 12 nodes
To further increase fault tolerance and availability, you can increase the number of
Cassandra nodes in the ring from six to 12. This configuration requires an additional
six nodes (IP7 through IP12).
The following image shows the order in which operations are replicated across the
ring (clockwise) and highlights the fact that during replication, no two adjacent
nodes are on the same rack:
(Figure 4) 12-node Cassandra ring: Four nodes on each of three racks
The procedure for installing a 12-node ring is similar to installing a three- or six-node
ring: set CASS_HOSTS to the given values and run the installer in the specified order.
To expand to a 12-node Cassandra ring, configure the nodes in the following way in
your silent configuration file:
As with the three- and six-node rings, you must execute the installer on the nodes in
the order in which they appear in the configuration file:
1. Node 1 (IP1)
2. Node 7 (IP7)*
3. Node 4 (IP4)
4. Node 8 (IP8)
5. Node 2 (IP2)
6. Node 9 (IP9)
7. Node 5 (IP5)
8. Node 10 (IP10)
9. Node 3 (IP3)
10. Node 11 (IP11)
11. Node 6 (IP6)
12. Node 12 (IP12)
* You must make these changes before installing Apigee Edge for Private Cloud on
the 7th node (the second machine in the Cassandra installation order).
NOTE: As with the three-node and six-node rings, the ordering and placement of nodes is very
important. If done incorrectly, you might end up with a ring that is imbalanced or has hotspots
that do not provide the desired amount of fault tolerance.
Installation process
Installing Edge on a node is a multi-step process:
1. Disable SELinux on the node or set it to permissive mode. See Install the Edge
apigee-setup utility for more.
2. Decide if you want to enable Cassandra authentication.
3. Decide if you want to set up master-standby replication for Postgres.
4. Select your Edge configuration from the list of recommended topologies. For
example, you can install Edge on a single node for testing, or on 13 nodes for
production. See Installation Topologies for more.
5. On each node in your selected topology, install the Edge apigee-setup
utility:
● Download the Edge bootstrap_4.51.00.sh file to
/tmp/bootstrap_4.51.00.sh.
● Install the Edge apigee-service utility and dependencies.
● Install the Edge apigee-setup utility and dependencies.
See Install the Edge apigee-setup utility for more.
6. Use the apigee-setup utility to install one or more Edge components on
each node based on your selected topology.
See Install Edge components on a node.
7. On the Management Server node, use the apigee-setup utility to install
apigee-provision, the utilities that you use to create and manage Edge
organizations.
See Onboard an organization for more.
8. Restart the Classic UI component on each node after the installation is
complete, as the following example shows:
/opt/apigee/apigee-service/bin/apigee-service edge-ui restart
9. (Recommended) After you complete the initial installation, Apigee
recommends that you install the new Edge UI (whose component name is
edge-management-ui), which is an enhanced user interface for developers
and administrators of Apigee Edge for Private Cloud.
For more information, see Install the new Edge UI.
After the installation is complete, check out this list of common post-installation
actions.
Note: If you are installing Edge on Red Hat Enterprise Linux (RHEL) 8.x, you must first disable the
PostgreSQL and NGINX modules in order to use the Edge apigee-setup utility. See Prerequisite:
Disable the PostgreSQL and NGINX modules.
Any user who wants to run the following commands or scripts must either be root, or
be a user with full sudo access:
● apigee-service utility:
● apigee-service commands: install, uninstall, update.
● apigee-all commands: install, uninstall, update.
● setup.sh script to install Edge components (unless you have already used
"apigee-service install" to install the required RPMs, in which case root or
full sudo access is not required)
● update.sh script to update Edge components
Also, the Edge installer creates a new user on your system, named "apigee". Many
Edge commands invoke sudo to run as the "apigee" user.
Any user who wants to run commands other than the ones shown above must have
full sudo access to the "apigee" user. These commands include:
To configure a user to have full sudo access to the "apigee" user, use the "visudo"
command to edit the sudoers file to add:
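For illustration only, a typical sudoers entry of this kind might look like the following, where installuser is a placeholder for your administrative account (confirm the exact form against your organization's security policy):

```
installuser ALL=(apigee) NOPASSWD: ALL
```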
Any files or resources used by the Edge commands must be accessible to the
"apigee" user. This includes the Edge license file and any config files.
NOTE: You can set the RUN_USER property for an Edge component to specify a different user
than "apigee". If you do, then all of the Edge commands for that component invoke sudo to run as
that user. Files or resources must then be accessible to that user.
When creating a configuration file, you can change its owner to "apigee:apigee" to
ensure that it is accessible to Edge commands:
While it is simplest to perform the entire Edge install process as root or by a user
that has full sudo access, that is not always possible. Instead, you can separate the
process into tasks performed by root and tasks performed by a user with full sudo
access to the "apigee" user.
curl https://software.apigee.com/bootstrap_4.51.00.sh -o /tmp/bootstrap_4.51.00.sh
After installing or upgrading, be sure to restart the Edge UI component on each node
on which it is running.
Note: However, there is another advantage to using a local repo, even for nodes with an external
internet connection. When you install Edge from the Apigee public repository, you always install
the latest Edge RPMs. Therefore, if you want to download and store Edge RPMs for a specific
version of Edge, then you should create a local repo for that Edge version. You can then use that
local repo to perform installations for any version of Edge.
For example, you first use the local repo to install an Edge development environment. Then, when
you are ready to move to a production environment, you again install Edge from the local repo. By
installing from the local repo, you guarantee that your development and production environments
match.
A mirrored repo is very flexible. For example, you can create a mirrored repo from the latest Edge
RPMs or from a specific version of Edge. After you create the repo, you can also update it to add
RPMs from different Edge versions. See Install the Edge apigee-setup utility for more.
If you are performing an installation on a machine with internet access, the node can
download the necessary RPMs and dependencies. However, if you are installing
from a node without internet access, you typically set up an internal repo containing
all necessary dependencies. The only way to guarantee that all dependencies are
included in your local repo is to attempt an installation, identify any missing
dependencies, and copy them to the local repo until the installation succeeds.
Note: You cannot change this directory location. However, you can create a symlink to map it to
a different location. See Install the Edge apigee-setup utility for more.
In this guide and in the Edge Operations Guide, the root installation directory is noted
as:
/opt/apigee
The installation uses the following file system structure to deploy Apigee Edge for
Private Cloud.
Log Files
The log file for apigee-setup and the setup.sh script is written to /tmp/setup-root.log.
The log files for each component are contained in the /opt/apigee/var/log
directory. Each component has its own subdirectory. For example, the logs for the
Management Server are in the directory:
/opt/apigee/var/log/edge-management-server
Component      Location
Router         /opt/apigee/var/log/edge-router
               /opt/apigee/var/log/edge-router/nginx
               /opt/nginx/logs
ZooKeeper      /opt/apigee/var/log/apigee-zookeeper
OpenLDAP       /opt/apigee/var/log/apigee-openldap
Cassandra      /opt/apigee/var/log/apigee-cassandra/system.log
Qpidd          /opt/apigee/var/log/apigee-qpidd
apigee-monit   /opt/apigee/var/log/apigee-monit
Data
Component            Location
Router               /opt/apigee/data/edge-router
ZooKeeper            /opt/apigee/data/apigee-zookeeper
OpenLDAP             /opt/apigee/data/apigee-openldap
Cassandra            /opt/apigee/data/apigee-cassandra/data
Qpidd                /opt/apigee/data/apigee-qpid/data
PostgreSQL database  /opt/apigee/data/apigee-postgres/pgdata
apigee-monit         /opt/apigee/data/apigee-monit
ENABLE_SYSTEM_CHECK=y
If you set this property to "y", the installer checks that the system meets the CPU and
memory requirements for the component being installed. The default value is "n" to
disable the check.
● For Red Hat Enterprise Linux (RHEL) 8.0, see Prerequisites for RHEL 8.
● For Red Hat/CentOS/Oracle 7.x:
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
1. Obtain the username and password from Apigee that you use to access the
Apigee repository. If you have an existing username:password for the Apigee
ftp site, you can use those credentials.
2. Log in to your node as root to install the Edge RPMs.
NOTE: While RPM installation requires root access, you can perform Edge configuration
without root access.
3. Install yum-utils and yum-plugin-priorities:
Troubleshooting
When attempting to install on a node with an external internet connection, you might
encounter one or more of the following errors:
The following table lists some possible resolutions for these errors:
nc -v software.apigee.com 443
curl -i -u username:password
https://software.apigee.com/apigee-repo.rpm
Proxy issues: Your local configuration uses an egress HTTP proxy and you
have not extended the same configuration to the yum package
manager. Check your environment variables:
echo $http_proxy
echo $https_proxy
For an egress HTTP proxy, you should use one of the following
options:
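As an illustration of one such option, you can declare the proxy directly in yum's configuration so that it matches your shell environment. A sketch of the relevant /etc/yum.conf lines (the hostname, port, and credentials are placeholders):

```
# /etc/yum.conf -- example egress proxy settings (placeholder values)
proxy=http://proxy.example.com:3128
proxy_username=proxyUser
proxy_password=proxyPassword
```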
NOTE: Apigee does not host all third-party dependencies in our public repositories. You must
download and install these dependencies from publicly accessible repositories.
The Apigee Edge installation process for nodes with no internet connections requires
access to the following local repos:
To create the internal Apigee repository, you need a node with external internet
access to download the Edge RPMs and dependencies. Once you have created the
internal repo, you can then move it to another node or make that node accessible to
the Edge nodes for installation.
After you create a local Apigee repository, you might later have to update it with the
latest Edge release files. The following sections describe how to create a local
Apigee repository, and how to update it.
1. Obtain the username and password from Apigee that you use to access the
Apigee repository. If you have an existing username:password for the Apigee
ftp site, you can use those credentials.
2. Log in to your node as root to install the Edge RPMs.
NOTE: While RPM installation requires root access, you can perform Edge configuration
without root access.
3. Disable SELinux as described above.
4. Download the Edge bootstrap_4.51.00.sh file to /tmp/bootstrap_4.51.00.sh:
curl https://software.apigee.com/bootstrap_4.51.00.sh -o /tmp/bootstrap_4.51.00.sh
5. Install the Edge apigee-service utility and dependencies:
You have two options for installing Edge from the local repo. You can either:
● Create a .tar file of the repo, copy the .tar file to a node, and then install Edge
from the .tar file.
● Install a webserver on the node with the local repo so that other nodes can
access it. Apigee provides the NGINX webserver for you to use, or you can use
your own webserver.
Install from the .tar file
1. On the node with the local repo, use the following command to package the
local repo into a single .tar file named
/opt/apigee/data/apigee-mirror/apigee-4.51.00.tar.gz:
/opt/apigee/apigee-service/bin/apigee-service apigee-mirror package
2. Copy the .tar file to the node where you want to install Edge. For example,
copy it to the /tmp directory on the new node.
3. On the new node, disable SELinux as described above.
4. On the new node, be sure that you can access the local Yum utility repo and
the EPEL repo.
5. Double-check that all external internet repos are disabled (this should be the
case because you are installing on a machine without internet access):
sudo yum repolist
All external repos should be disabled, but the local Apigee repo and your
internal repos should be enabled.
6. On the new node, install yum-utils and yum-plugin-priorities from
your local repo:
conf_apigee_mirror_listen_port=3939
c. conf_apigee_mirror_server_name=localhost
d. Restart NGINX:
/opt/nginx/scripts/apigee-nginx restart
4. By default, the repo requires a username:password of admin:admin. To
change these credentials, set the following environment variables:
MIRROR_USERNAME=uName
MIRROR_PASSWORD=pWord
5. On the new node, install yum-utils and yum-plugin-priorities:
To update the repo, you must download the latest bootstrap_4.51.00.sh file and then
perform a new sync.
If you have to maintain installations for Edge 4.16.0x or 4.17.0x in a 4.51.00 repo,
you can maintain a repo that contains all versions. From that repo, you can then
install any version of Edge.
1. Ensure that you have installed the 4.51.00 version of the apigee-mirror
utility:
/opt/apigee/apigee-service/bin/apigee-service apigee-mirror version
You should see a result in the form below, where xyz is the build number:
apigee-mirror-4.51.00-0.0.xyz
2. Use the apigee-mirror utility to download Edge 4.16.0x/4.17.0x to your
repo. Notice how you prefix the command with the desired version:
apigeereleasever=4.17.01 /opt/apigee/apigee-service/bin/apigee-service
apigee-mirror sync --only-new-rpms
3. Use this same command to later update the 4.16.0x/4.17.0x repos by
specifying the required version numbers.
4. Examine the /opt/apigee/data/apigee-mirror/repos directory to see
the file structure:
ls /opt/apigee/data/apigee-mirror/repos
You should see the following files and directories:
apigee
apigee-repo-1.0-6.x86_64.rpm
bootstrap_4.16.01.sh
bootstrap_4.16.05.sh
bootstrap_4.17.01.sh
bootstrap_4.17.05.sh
bootstrap_4.17.09.sh
bootstrap_4.18.01.sh
bootstrap_4.18.05.sh
bootstrap_4.19.01.sh
thirdparty
5. Notice how you have a bootstrap file for all versions of Edge. The apigee
directory also contains separate directories for each version of Edge.
6. To package the repo into a .tar file, use the following command:
To install Edge from the local repo or .tar file, just make sure to run the correct
bootstrap file by using one of the following commands. This example installs Edge
4.17.01:
● If installing from a .tar file, run the correct bootstrap file from the repo:
/usr/bin/curl http://uName:pWord@remoteRepo:3939/bootstrap_4.17.01.sh -o
/tmp/bootstrap_4.17.01.sh
Where component is the Edge component to install, and configFile is the silent
configuration file containing the installation information. The configuration file must
be accessible or readable by the "apigee" user. For example, you can create a new
directory for the files, place them in the /usr/local or /usr/local/share directory, or
anywhere else on the node accessible by the "apigee" user.
/opt/apigee/apigee-setup/bin/setup.sh -p ms -f /usr/local/myConfig
For information on installing the Edge apigee-setup utility, see Install the Edge
apigee-setup utility.
Installation considerations
As you write your config file, take into consideration the following options.
By default, Edge installs all Postgres nodes in master mode. However, in production
systems with multiple Postgres nodes, you must configure them to use master-
standby replication so that if the master node fails, the standby node can continue to
serve traffic.
You can enable and configure master-standby replication at install time by using
properties in the silent config file. Or, you can enable master-standby replication after
installation. For more, see Set up master-standby replication for Postgres.
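In the silent config file, the replication roles are expressed with the PG_MASTER and PG_STANDBY properties, as the multi-node topology examples later in this document show. A minimal fragment (the values are placeholders):

```
# Node that runs the Postgres master, and the node that runs the standby
PG_MASTER=IP_of_master_node
PG_STANDBY=IP_of_standby_node
```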
Note: While you can enable authentication when you install Cassandra, you cannot change the
default username and password. You must perform that step manually after the installation of
Cassandra completes.
If you want to create a virtual host that binds the Router to a protected port, such as
port numbers less than 1000, then you have to configure the Router to run as a user
with access to those ports. By default, the Router runs as the user "apigee" which
does not have access to privileged ports.
For information about how to configure a virtual host and Router to access ports
below 1000, see Setting up a virtual host.
After you complete the initial installation, Apigee recommends that you install the
new Edge UI, which is an enhanced user interface for developers and administrators
of Apigee Edge for Private Cloud. (The Classic UI is installed by default.)
Note that the Edge UI requires that you disable Basic authentication and use an IDP
such as SAML or LDAP.
Component Description
Note: We recommend installing OpenLDAP on no more than three nodes. To install OpenLDAP
on more than one node, you need to use a 13-node clustered installation.
Note: In the configuration file, you must specify all Cassandra nodes by IP address. Other
components can be specified by IP address or DNS name.
However, you will have to use different configuration files, or modify your
configuration file, if:
Each installation topology described below shows an example config file for that
topology. For a complete reference on the config file, see Edge Configuration File
Reference.
Warning: Creating a config file on a Windows machine and then copying it to a Linux machine
can add additional end-of-line, carriage return, or newline characters to the file that are not
compatible with all Linux utilities. This situation can also occur if you copy text from a Windows
editor and paste into a Linux window. As an alternative, you can use the Linux dos2unix utility
to clean up a config file created on Windows. Or, make sure to do all editing of config files in a
Linux editor.
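If dos2unix is not available, the same cleanup can be sketched with standard tools. The example below writes a throwaway file with Windows line endings to /tmp (the path and contents are placeholders) and strips the carriage returns:

```shell
# Create a throwaway config file with Windows (CRLF) line endings.
printf 'MSIP=192.168.0.1\r\nREGION=dc-1\r\n' > /tmp/winConfig
# Remove every carriage return, as dos2unix would.
tr -d '\r' < /tmp/winConfig > /tmp/winConfig.clean
mv /tmp/winConfig.clean /tmp/winConfig
# Verify no carriage returns remain (prints "clean").
grep -q "$(printf '\r')" /tmp/winConfig || echo "clean"
```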
You can now use the "-t" flag to make that check without having to do an install. For
example, to check the system requirements for an "aio" install without actually doing
the install, use the following command:
This command displays any errors with the system requirements to the screen.
See Installation requirements for a list of system requirements for all Edge
components.
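Based on the description above, a dry-run check for an "aio" install would look like the following sketch (hypothetical invocation; confirm the exact flag placement for your Edge version):

```
/opt/apigee/apigee-setup/bin/setup.sh -p aio -t -f configFile
```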
/opt/apigee/var/log/apigee-setup/setup.log
If the user running the setup.sh utility does not have access to that directory, it
writes the log to the /tmp directory as a file named setup_username.log.
If the user does not have access to /tmp, the setup.sh utility fails.
All of the installation examples shown below assume that you are installing:
All-in-one installation
1. Install all components on a single node using the command:
/opt/apigee/apigee-setup/bin/setup.sh -p aio -f configFile
2. Restart the Classic UI component after the installation is complete:
/opt/apigee/apigee-service/bin/apigee-service edge-ui restart
This applies to the Classic UI, not the new Edge UI whose component name is
edge-management-ui.
3. Test the installation as described at Test the install.
4. Onboard your organization as described at Onboard an organization.
Shown below is a silent configuration file for this topology. For a complete reference
on the config file, see Edge Configuration File Reference.
# With SMTP
IP1=IP_or_DNS_name_of_Node_1
HOSTIP=$(hostname -i)
ENABLE_SYSTEM_CHECK=y
ADMIN_EMAIL=opdk@google.com
# Admin password must be at least 8 characters long and contain one uppercase
# letter, one lowercase letter, and one digit or special character
APIGEE_ADMINPW=ADMIN_PASSWORD
LICENSE_FILE=/tmp/license.txt
MSIP=$IP1
LDAP_TYPE=1
APIGEE_LDAPPW=LDAP_PASSWORD
MP_POD=gateway
REGION=dc-1
ZK_HOSTS="$IP1"
ZK_CLIENT_HOSTS="$IP1"
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP1"
# Default is postgres
PG_PWD=postgres
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
# omit for no username
SMTPPASSWORD=SMTP_PASSWORD
# omit for no password
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"
See Installation topologies for the list of Edge topologies and node numbers.
Shown below is a silent configuration file for this topology. For a complete reference
on the config file, see Edge Configuration File Reference.
# With SMTP
IP1=IP_of_Node_1
HOSTIP=$(hostname -i)
ENABLE_SYSTEM_CHECK=y
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=ADMIN_PASSWORD
LICENSE_FILE=/tmp/license.txt
MSIP=$IP1
LDAP_TYPE=1
APIGEE_LDAPPW=LDAP_PASSWORD
MP_POD=gateway
REGION=dc-1
ZK_HOSTS="$IP1"
ZK_CLIENT_HOSTS="$IP1"
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP1"
# Default is postgres
PG_PWD=postgres
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
# omit for no username
SMTPPASSWORD=SMTP_PASSWORD
# omit for no password
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"
5-node installation
See Installation topologies for the list of Edge topologies and node numbers.
Shown below is a silent configuration file for this topology. For a complete reference
on the config file, see Edge Configuration File Reference.
# With SMTP
IP1=IP_of_Node_1
IP2=IP_of_Node_2
IP3=IP_of_Node_3
IP4=IP_of_Node_4
IP5=IP_of_Node_5
HOSTIP=$(hostname -i)
ENABLE_SYSTEM_CHECK=y
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=ADMIN_PASSWORD
LICENSE_FILE=/tmp/license.txt
MSIP=$IP1
USE_LDAP_REMOTE_HOST=n
LDAP_TYPE=1
APIGEE_LDAPPW=LDAP_PASSWORD
MP_POD=gateway
REGION=dc-1
ZK_HOSTS="$IP1 $IP2 $IP3"
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP1 $IP2 $IP3"
# Default is postgres
PG_PWD=postgres
PG_MASTER=$IP4
PG_STANDBY=$IP5
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
# omit for no username
SMTPPASSWORD=SMTP_PASSWORD
# omit for no password
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"
See Installation topologies for the list of Edge topologies and node numbers.
Shown below is a silent configuration file for this topology. For a complete reference
on the config file, see Edge Configuration File Reference.
# With SMTP
IP1=IP_of_Node_1
IP2=IP_of_Node_2
IP3=IP_of_Node_3
IP8=IP_of_Node_8
IP9=IP_of_Node_9
HOSTIP=$(hostname -i)
ENABLE_SYSTEM_CHECK=y
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=ADMIN_PASSWORD
LICENSE_FILE=/tmp/license.txt
MSIP=$IP1
USE_LDAP_REMOTE_HOST=n
LDAP_TYPE=1
APIGEE_LDAPPW=LDAP_PASSWORD
MP_POD=gateway
REGION=dc-1
ZK_HOSTS="$IP1 $IP2 $IP3"
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"
# Must use IP addresses for CASS_HOSTS, not DNS names.
# Optionally use Cassandra racks
CASS_HOSTS="$IP1 $IP2 $IP3"
# Default is postgres
PG_PWD=postgres
SKIP_SMTP=n
PG_MASTER=$IP8
PG_STANDBY=$IP9
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
# omit for no username
SMTPPASSWORD=SMTP_PASSWORD
# omit for no password
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"
This section describes the order of installation for a 13-node cluster. For a list of
Edge topologies and node numbers, see Installation topologies.
Shown below is a sample silent configuration file for this topology. For a complete
reference on the config file, see Edge Configuration File Reference.
# For all nodes except IP4 and IP5 (which are the OpenLDAP nodes)
IP1=IP_of_Node_1
IP2=IP_of_Node_2
IP3=IP_of_Node_3
IP4=IP_of_Node_4
IP5=IP_of_Node_5
IP6=IP_of_Node_6
IP7=IP_of_Node_7
IP8=IP_of_Node_8
IP9=IP_of_Node_9
HOSTIP=$(hostname -i)
ENABLE_SYSTEM_CHECK=y
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=ADMIN_PASSWORD
LICENSE_FILE=/tmp/license.txt
# Management Server on IP6 only
MSIP=$IP6
USE_LDAP_REMOTE_HOST=y
LDAP_HOST=$IP4
LDAP_PORT=10389
# Management Server on IP7 only
# MSIP=$IP7
# USE_LDAP_REMOTE_HOST=y
# LDAP_HOST=$IP5
# LDAP_PORT=10389
# Use the same password for both OpenLDAP nodes
APIGEE_LDAPPW=LDAP_PASSWORD
MP_POD=gateway
REGION=dc-1
ZK_HOSTS="$IP1 $IP2 $IP3"
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"
# Must use IP addresses for CASS_HOSTS, not DNS names.
# Optionally use Cassandra racks
CASS_HOSTS="$IP1 $IP2 $IP3"
# Default is postgres
PG_PWD=postgres
PG_MASTER=$IP8
PG_STANDBY=$IP9
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
# omit for no username
SMTPPASSWORD=SMTP_PASSWORD
# omit for no password
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"

# For OpenLDAP nodes only (IP4 and IP5)
IP1=IP_of_Node_1
IP2=IP_of_Node_2
IP3=IP_of_Node_3
IP4=IP_of_Node_4
IP5=IP_of_Node_5
IP6=IP_of_Node_6
IP7=IP_of_Node_7
IP8=IP_of_Node_8
IP9=IP_of_Node_9
HOSTIP=$(hostname -i)
ENABLE_SYSTEM_CHECK=y
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=ADMIN_PASSWORD
# For the OpenLDAP Server on IP4 only
MSIP=$IP6
USE_LDAP_REMOTE_HOST=n
LDAP_TYPE=2
LDAP_SID=1
LDAP_PEER=$IP5
# For the OpenLDAP Server on IP5 only
# MSIP=$IP7
# USE_LDAP_REMOTE_HOST=n
# LDAP_TYPE=2
# LDAP_SID=2
# LDAP_PEER=$IP4
# Set same password for both OpenLDAPs.
APIGEE_LDAPPW=LDAP_PASSWORD
Before you install Edge on a 12-node clustered topology (two data centers), you must
understand how to set the ZooKeeper and Cassandra properties in the silent config
file.
● ZooKeeper
For the ZK_HOSTS property for both data centers, specify the IP addresses or
DNS names of all ZooKeeper nodes from both data centers, in the same order,
and mark any observer nodes with the :observer modifier. Nodes without the
:observer modifier are called "voters". You must have an odd number of
"voters" in your configuration.
In this topology, the ZooKeeper host on host 9 is the observer:
For the ZK_CLIENT_HOSTS property for each data center, specify, in the same
order, the IP addresses or DNS names of only the ZooKeeper nodes in that
data center. In the example configuration file shown below, node 9 is tagged
with the :observer modifier so that you have five voters: nodes 1, 2, 3, 7,
and 8.
● Cassandra
See Installation topologies for the list of Edge topologies and node numbers.
Shown below is a silent configuration file for this topology. For a complete reference
on the config file, see Edge Configuration File Reference.
● Configures OpenLDAP with replication across two OpenLDAP nodes.
● Specifies the :observer modifier on one ZooKeeper node. In a single data
center installation, omit that modifier.
# Datacenter 1
IP1=IP_of_Node_1
IP2=IP_of_Node_2
IP3=IP_of_Node_3
IP6=IP_of_Node_6
IP7=IP_of_Node_7
IP8=IP_of_Node_8
IP9=IP_of_Node_9
IP12=IP_of_Node_12
HOSTIP=$(hostname -i)
MSIP=$IP1
ENABLE_SYSTEM_CHECK=y
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=ADMIN_PASSWORD
LICENSE_FILE=/tmp/license.txt
USE_LDAP_REMOTE_HOST=n
LDAP_TYPE=2
LDAP_SID=1
LDAP_PEER=$IP7
APIGEE_LDAPPW=LDAP_PASSWORD
MP_POD=gateway-1
REGION=dc-1
ZK_HOSTS="$IP1 $IP2 $IP3 $IP7 $IP8 $IP9:observer"
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"
# Must use IP addresses for CASS_HOSTS, not DNS names.
# Optionally use Cassandra racks
CASS_HOSTS="$IP1:1,1 $IP2:1,1 $IP3:1,1 $IP7:2,1 $IP8:2,1 $IP9:2,1"
# Default is postgres
PG_PWD=postgres
PG_MASTER=$IP6
PG_STANDBY=$IP12
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
# omit for no username
SMTPPASSWORD=SMTP_PASSWORD
# omit for no password
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"
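The odd-voter rule for ZK_HOSTS can be checked mechanically before installation. A small sketch (the host names are placeholders matching the example above) that counts nodes lacking the :observer modifier:

```shell
# Count ZooKeeper voters: nodes in ZK_HOSTS without the :observer modifier.
ZK_HOSTS="IP1 IP2 IP3 IP7 IP8 IP9:observer"
voters=0
for host in $ZK_HOSTS; do
  case "$host" in
    *:observer) ;;                 # observers do not vote
    *) voters=$((voters + 1)) ;;
  esac
done
echo "voters: $voters"
if [ $((voters % 2)) -eq 0 ]; then
  echo "WARNING: even number of voters; adjust your ZK_HOSTS assignments"
fi
```

For the six-ZooKeeper topology above, the count is five voters, which satisfies the odd-number requirement.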
To remove the organization, environment and other artifacts created by the test
scripts:
Note: Do not remove the test artifacts if you intend to use SmartDocs. See Install SmartDocs for
more.
Onboard an organization
To onboard an organization, you must create an onboarding configuration file and
then pass it to the setup-org command. Each of these steps is described in the
sections that follow.
This section includes a sample configuration file for onboarding an organization with
setup-org.
Copy the following example and edit as necessary for your organization:
IP1=192.168.1.1
MSIP="$IP1"
ADMIN_EMAIL="admin@email.com"
APIGEE_ADMINPW=admin_password # If omitted, you are prompted for it.
NEW_USER="y"
USER_NAME=new@user.com
FIRST_NAME=new
LAST_NAME=user
# Org admin password must be at least 8 characters long and contain one
# uppercase letter, one lowercase letter, and one digit or special character
USER_PWD="newUserPword"
ORG_ADMIN=new@user.com
# NEW_USER="n"
# ORG_ADMIN=existing@user.com
# Specify environment name.
VHOST_PORT=9001
VHOST_NAME=default
VHOST_ALIAS=myorg-test.apigee.net
# VHOST_ALIAS="firstRouterIP:9001 secondRouterIP:9001"
Note that:
● For VHOST_ALIAS, if you already have a DNS record that you will use to
access the virtual host, specify the host alias and optionally the port, for
example, "myapi.example.com". If you do not yet have a DNS record, you can
use the IP address of the Router.
For more on how to configure the virtual host, see Setting up a virtual host.
● For TLS/SSL configuration, see Keystores and Truststores and Configuring
TLS access to an API for the Private Cloud for more information on creating
the JAR file, and other aspects of configuring TLS/SSL.
● For more information on configuring virtual hosts, see Configuring TLS access
to an API for the Private Cloud.
● You cannot create two organizations with the same name; the second create
operation will fail.
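The password rule noted in the sample file can be pre-checked before you run setup-org. This is a small sketch of the stated policy (at least 8 characters, one uppercase letter, one lowercase letter, and one digit or special character); it is not the exact validation Edge performs:

```shell
# Check a candidate org-admin password against the documented policy.
check_pwd() {
  p=$1
  [ "${#p}" -ge 8 ] || return 1                               # length >= 8
  case $p in *[A-Z]*) ;; *) return 1 ;; esac                  # one uppercase
  case $p in *[a-z]*) ;; *) return 1 ;; esac                  # one lowercase
  case $p in *[0-9]*|*[!A-Za-z0-9]*) ;; *) return 1 ;; esac   # digit/special
  return 0
}
check_pwd 'newUserPword1' && echo "OK" || echo "too weak"
check_pwd 'short' && echo "OK" || echo "too weak"
```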
Execute setup-org
After you create the onboarding configuration file, pass it to the setup-org
script to perform the onboarding process. You must run the script on the
Management Server node.
NOTE: The onboarding configuration file must be readable by the "apigee" user.
To execute setup-org:
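A typical invocation passes the configuration file with the -f option; the path shown is the standard install location:

```shell
# Run on the Management Server node; the config file must be readable by the "apigee" user
/opt/apigee/apigee-setup/bin/setup-org.sh -f configFile
```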
On completion of onboarding, verify the status of the system by issuing the following
curl commands on the Management Server node:
1. Check the deployment status:
curl -u adminEmail:admin_passwd http://localhost:8080/v1/organizations/org_name/deployments
2. Check analytics by executing the following command:
curl -u adminEmail:admin_password http://localhost:8080/v1/organizations/org_name/environments/env_name/provisioning/axstatus
3. Check the PostgreSQL database status by executing the following command
on Node 2 (as shown in the installation topologies):
psql -h /opt/apigee/var/run/apigee-postgresql -U apigee apigee
4. At the psql prompt, enter the following command to view the analytics
table for your organization:
\d analytics."org_name.env_name.fact"
5. Use the following command to exit psql:
\q
6. Access the Apigee Edge UI using a web browser. Remember that you already
noted the management console URL at the end of the installation.
a. Launch your preferred browser and enter the URL of the Edge UI. It
looks similar to the following, where the IP address is for Node 1 (as
shown in the installation topologies), or whichever node on which you
installed the UI for alternative configurations:
http://192.168.56.111:9000/login
9000 is the port number used by the UI.
If you are starting the browser directly on the server hosting the Edge
UI, then you can use a URL in the form:
http://localhost:9000/login
Note: Ensure that port 9000 is open. If you are using a service such as Google
Cloud Engine, you may need to modify the firewall rules so that you can access
the Apigee services via a browser.
b. On the console login page, specify the Apigee system administrator
username and password.
Note: This is the global system administrator password that you set during the
installation.
7. Sign up for a new Apigee user account and use the new user credentials to
log in. On the console sign-in page, click the Sign In button.
The browser redirects to
http://192.168.56.111:9000/platform/#/org_name/ and opens a
dashboard that lets you configure the organization that you created (if you
logged in using Apigee admin credentials).
After you have onboarded a new organization and verified that the onboarding
process was successful, you can now create your first proxy. For more information,
see Build your first API proxy.
● Samples list
● Mock target RESTful APIs that you can use in your own API-building
experiments
●
NOTE: The definitions of the IP# variables for the Router, Message Processor, Qpid, and Postgres
nodes are for illustrating the node configuration only; they are not actually used.
# Specify "y" to check that the system meets the CPU and memory requirements
# for the component being installed. See Installation Requirements for requirements
# for each component. The default value is "n" to disable check.
ENABLE_SYSTEM_CHECK=n
#
# OpenLDAP information.
#
# Set to y if you are connecting to a remote LDAP server.
# If n, Edge installs OpenLDAP when it installs the Management Server.
USE_LDAP_REMOTE_HOST=n
# If connecting to remote OpenLDAP server, specify the IP/DNS name and port.
# LDAP_HOST=$IP1 # IP or DNS name of OpenLDAP node.
# LDAP_PORT=10389 # Default is 10389.
APIGEE_LDAPPW=yourLdapPassword
# If you are using region names other than dc-1, dc-2, etc., set this property to map
# your region name to the appropriate dc-x format region name. This property is
# required by the Management Server to appropriately register Cassandra data stores
# based on Cassandra's data centers and regions.
REGION_MAPPING=":dc-1 :dc-2 ... :dc-x"
# ZooKeeper information.
# See table below if installing in a multi-data center environment.
ZK_HOSTS="$IP1 $IP2 $IP3" # IP/DNS names of all ZooKeeper nodes.
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3" # IP/DNS names of all ZooKeeper nodes.
# Cassandra information.
CASS_CLUSTERNAME=Apigee # Default name is Apigee.
# SMTP information.
SKIP_SMTP=n # Skip now and configure later by specifying "y".
SMTPHOST=smtp.gmail.com
SMTPUSER=your@email.com
SMTPPASSWORD=yourEmailPassword
SMTPSSL=y
SMTPPORT=465 # If no SSL, use a different port, such as 25.
SMTPMAILFROM="My Company <myco@company.com>"
# The following four properties are only effective for the Management Server:
# Cassandra JMX username/password, required if you enabled Cassandra JMX
# authentication.
# CASS_JMX_USERNAME=
# CASS_JMX_PASSWORD=
# Cassandra JMX SSL truststore details, if you have enabled SSL-based JMX in
# Cassandra. The JMX truststore file should be readable by the "apigee" user.
# CASS_JMX_TRUSTSTORE=
# CASS_JMX_TRUSTSTORE_PASS=
Property    Note
ENABLE_SYSTEM_CHECK    If "y", check that the system meets the CPU and memory requirements for the component being installed. See Installation Requirements for requirements for each component.
REGION_MAPPING    If you are using region names other than dc-1, dc-2, etc., set this property to map your region name to the appropriate dc-x format region name. This property is required by the Management Server to appropriately register Cassandra data stores based on Cassandra's data centers and regions.
IP_address[:data_center_number,rack_number]
PG_MASTER=IPofNewMaster
PG_STANDBY=IPofOldMaster
NOTE: Apigee strongly recommends that you use IP addresses
rather than host names for the PG_MASTER and PG_STANDBY
properties in your silent configuration file. In addition, you should
be consistent on both nodes.
In addition to the properties listed here, there are properties for configuring Apigee
mTLS. For more information, see Configure Apigee mTLS.
Install SmartDocs
SmartDocs is installed automatically when you install and run the installation test
scripts described in Test the install. As part of running the test scripts, you run the
following commands:
Where configFile is the same config file you used to install Edge. See Install Edge
components on a node for more.
Note: If you do not add and configure this KVM, the proxy will not enforce
whitelisting. This could result in unauthorized access to your hosts and IP
addresses. Only hostnames and IP addresses of API endpoints documented
with SmartDocs should be included in the "allowed_hosts" values.
Monetization requirements
● If you are installing Monetization on an Edge topology that uses multiple
Management Server nodes, such as a 13-node installation, then you must
install both Edge Management Server nodes before installing Monetization.
● To install Monetization on Edge where the Edge installation has multiple
Postgres nodes, the Postgres nodes must be configured in Master/Standby
mode. You cannot install Monetization on Edge if you have multiple Postgres
master nodes. For more, see Set up Master-Standby Replication for Postgres.
● Monetization is not available with the All-In-One (AIO) configuration.
Installation overview
The following steps illustrate how to add Monetization Services on an existing
Apigee Edge installation:
● Use the apigee-setup utility to update the Apigee Management Server node
to enable the Monetization Services, for example, catalog management, limits
and notifications configuration, billing and reporting.
If you have multiple Management Server nodes, such as a 13-node
installation, then you must install both Edge Management Server nodes before
installing Monetization.
● Use the apigee-setup utility to update the Apigee Message Processor to
enable the runtime components of the Monetization Services, for example,
transaction recording policy and limit enforcement. If you have multiple
Message Processors, install Monetization on all of them.
● Perform the Monetization onboarding process for your Edge organizations.
● Configure the Apigee Developer Services portal (or simply, the portal) to
support monetization. For more information, see Configure Monetization in
the Developer Portal.
Note: Typically, you add these properties to the same configuration file that you used to install
Edge, as shown in Install Edge components on a node.
#
# Monetization configuration properties.
#
# Postgres credentials from Edge installation.
PG_USER=apigee # Default from Edge installation
PG_PWD=postgres # Default from Edge installation
# If your Edge config file did not specify SMTP information, add it.
# Monetization requires an SMTP server.
SMTPHOST=smtp.gmail.com
SMTPPORT=465
SMTPUSER=your@email.com
SMTPPASSWORD=yourEmailPassword
SMTPSSL=y
SMTPMAILFROM="My Company <myco@company.com>"
Notes:
● If your Edge config file did not specify SMTP information, add it. Monetization
requires an SMTP server.
● In a single data center installation, an odd number of ZooKeeper nodes
should be configured as voters. If the number of ZooKeeper nodes is even,
some nodes will be configured as observers. When you install Edge
across an even number of data centers, some ZooKeeper nodes must be
configured as observers to make the number of voter nodes odd. During
ZooKeeper leader election, one voter node is elected leader. Ensure
that the ZK_HOSTS property above specifies a leader node in a multiple data
center installation.
● If you enable Cassandra authentication, you can pass the Cassandra
username and password by using the following properties:
CASS_USERNAME
CASS_PASSWORD
1. On the Management Server node, at the command prompt, run the setup
script:
/opt/apigee/apigee-setup/bin/setup.sh -p mo -f configFile
The -p mo option specifies to integrate Monetization.
The configuration file must be accessible or readable by the "apigee" user.
2. If you are installing Monetization on multiple Management Server nodes,
repeat the previous step on the second Management Server node.
1. On the first Message Processor node, at the command prompt, run the setup
script:
/opt/apigee/apigee-setup/bin/setup.sh -p mo -f configFile
The -p mo option specifies to integrate Monetization.
The configuration file must be accessible or readable by the "apigee" user.
2. Repeat this procedure on all Message Processor nodes.
Monetization onboarding
To create a new organization and enable monetization:
1. Create the organization as you would any new organization. For more
information, see Onboard an organization.
2. Use the monetization provisioning API as described in Enable monetization
for an organization. To do this, you must have system administrator
privileges.
When you next log in to the Edge UI, you see the Monetization entry in the top-level
menu for the organization:
Additional configuration
Provide billing documents as PDF files
To add/update organization attributes, you can use a PUT request, as the following
example shows:
curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD \
{
...
"displayName": "Organization name",
"name": "org4",
"properties": {
"property": [
...
{
"name": "MINT_CURRENCY",
"value": "USD"
},
{
"name": "MINT_COUNTRY",
"value": "US"
},
{
"name": "MINT_TIMEZONE",
"value": "GMT"
}
]
}
}
The following table lists the organization-level attributes that are available to
configure a mint organization.
Attributes Description
For configuring business organization settings using the management UI, see
Access monetization in Edge.
Monetization limits
To enforce monetization limits, attach the Monetization Limits Check policy to API
proxies. Specifically, the policy is triggered under the following conditions:
The Monetization Limits Check policy raises faults and blocks API calls in situations
like those listed above. The policy extends the Raise Fault policy, and you can
customize the message returned. The applicable conditions are derived from
business variables.
For Apigee Edge Cloud customers, Apigee will assist you in enabling monetization
for your organization. Contact Apigee Edge Support for assistance.
Note: Ensure that your Edge account has system administrator privileges before
proceeding.
Property Description
For example, the following request enables monetization for the myOrg organization,
where ms_IP is the IP address of the Management Server node and port is the
configured port (such as 8443):
'{
"orgName" : "myOrg",
"mxGroup" : "mxgroup001",
"pgHostName" : "pg_hostname",
"pgPort" : "5432",
"pgUserName" : "pg_username",
"pgPassword" : "pg_password",
"adminEmail" : "myemail@company.com",
"notifyTo" : "myemail@company.com"
}' \
"https://ms_IP:port/v1/mint/asyncjobs/enablemonetization" \
-u email:password
{
"id": "c6eaa22d-27bd-46cc-be6f-4f77270818cf",
"log": "",
"orgId": "myOrg",
"status": "RUNNING",
"type": "ENABLE_MINT"
}
After the request is complete, an email is sent to the email configured for the
notifyTo property in the request, and the status field will change to one of the
following values: COMPLETED, FAILED, or CANCELLED.
You can check the status of the request by issuing a GET to /asyncjobs/{id}.
For example:
curl "https://ms_IP:port/v1/mint/asyncjobs/c6eaa22d-27bd-46cc-be6f-4f77270818cf" \
-u email:password
{
"id": "c6eaa22d-27bd-46cc-be6f-4f77270818cf",
"log": "",
"orgId": "myOrg",
"status": "COMPLETED",
"type": "ENABLE_MINT"
}
Tune JVM heap settings: Optimize your Java memory settings for each node.
Install apigee-monit on the node: Install and use a tool that monitors the components on the node and attempts to restart them if they fail.
Install the new Edge UI: Apigee recommends that you install the new Edge UI, which is an enhanced user interface for developers and administrators of Apigee Edge for Private Cloud.
Note that these are just some of the more common tasks you typically perform after
installing Edge. For additional operations and administration tasks, see How to
configure Edge and Operations.
/opt/apigee/apigee-service/bin/apigee-all stop|start|restart|status|version
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
For example, to restart the Edge Router, execute the following command:
/opt/apigee/apigee-service/bin/apigee-service edge-router restart
The complete list of actions for a component depends on the component itself, but
all components support the following actions:
NOTE: If you have not set any properties for a component, the
/opt/apigee/customer/application directory might not contain a .properties file for
the component. In that case, create one.
To set a property for a component, edit the corresponding .properties file, and
then restart the component:
For example:
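A sketch of the sequence for the Router; the property name and value here are illustrative:

```shell
# 1. Add or update the property in the component's .properties file
echo "conf_http_HTTPServer.streaming.buffer.limit=10" >> \
  /opt/apigee/customer/application/router.properties

# 2. Ensure the file is owned by the "apigee" user
chown apigee:apigee /opt/apigee/customer/application/router.properties

# 3. Restart the component to pick up the change
/opt/apigee/apigee-service/bin/apigee-service edge-router restart
```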
1. Create a file named mark_readonly.ldif containing:
dn: olcDatabase={2}bdb,cn=config
changetype: modify
replace: olcReadOnly
olcReadOnly: TRUE
2. Execute the following command on the server to mark it read-only:
ldapmodify -a -x -w "$APIGEE_LDAPPW" -D "$CONFIG_BIND_DN" -H "ldap://:10389" -f mark_readonly.ldif
In case the primary server fails, you can switch over to use the standby server as
the primary, as follows:
1. Create a file named mark_writable.ldif containing:
dn: olcDatabase={2}bdb,cn=config
changetype: modify
replace: olcReadOnly
olcReadOnly: FALSE
2. Execute the following command on the standby server:
ldapmodify -a -x -w "$APIGEE_LDAPPW" -D "$CONFIG_BIND_DN" -H "ldap://:10389" -f mark_writable.ldif
This section provides the default and recommended Java heap memory sizes, as
well as the process for changing the defaults. Lastly, this section describes how to
change other JVM settings using properties files.
(The table of default and recommended heap sizes groups components under Runtime, Analytics, and Management; see the notes below.)
Notes:
1. Cassandra dynamically calculates the maximum heap size when it starts up. Currently, this is one half the total system memory, with a maximum of 8192MB. For information on setting the heap size, see Change heap memory size.
2. For Message Processors, Apigee recommends that you set the heap size to between 3GB and 6GB. Increase the heap size beyond 6GB only after conducting performance tests. If the heap usage nears the max limit during your performance testing, then increase the max limit. For information on setting the heap size, see Change heap memory size.
3. Not all Private Cloud components are implemented in Java. Because they are not Java-based, apps running natively on the host platform do not have configurable Java heap sizes; instead, they rely on the host system for memory management.
To determine how much total memory Apigee recommends that you allocate to your
Java-based components on a node, add the values listed above for each component
on that node. For example, if your node hosts both the Postgres and Qpid servers,
Apigee recommends that your combined memory allocation be between 2.5GB and
4.5GB.
TIP: In Java 1.8 and later, class metadata memory allocation replaced the permanent generation
("permgen"). Some configuration properties still refer to permgen (often reflecting "perm" in the
property name).
The following table lists the properties that you edit to change heap sizes:
Property    Description
bin_setenv_meta_space_size    The default class metadata size. The default value is set to the value of bin_setenv_max_permsize, which defaults to 128 MB. On the Message Processor, Apigee recommends that you set this value to 256 MB or 512 MB, depending on your traffic.
When you set heap size properties on a node, use the "m" suffix to indicate
megabytes, as the following example shows:
bin_setenv_min_mem=4500m
bin_setenv_max_mem=4500m
bin_setenv_meta_space_size=1024m
After setting the values in the properties file, restart the component. For example,
to restart the Message Processor:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart
For additional tips on configuring memory for Private Cloud components, see this
article on the Edge forums.
This section describes how to configure the delivered default LDAP password policy.
Use this password policy to configure various password authentication options, such
as the number of consecutive failed login attempts after which a password can no
longer be used to authenticate a user to the directory.
This section also describes how to use a couple of APIs to unlock user accounts that
have been locked according to attributes configured in the default password policy.
● LDAP policy
● "Important information about your password policy" in the Apigee Community
1. Connect to your LDAP server using an LDAP client, such as Apache Directory
Studio or ldapmodify. By default, the OpenLDAP server listens on port 10389 on the
OpenLDAP node.
To connect, specify the Bind DN or user of
cn=manager,dc=apigee,dc=com and the OpenLDAP password that you
set at the time of Edge installation.
2. Use the client to navigate to the password policy attributes for:
● Edge users: cn=default,ou=pwpolicies,dc=apigee,dc=com
● Edge sysadmin:
cn=sysadmin,ou=pwpolicies,dc=apigee,dc=com
3. Edit the password policy attribute values as desired.
4. Save the configuration.
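For example, with the standard OpenLDAP client tools you can inspect the current policy attributes before editing them (host and port per the defaults above):

```shell
# Read the default password policy entry; -W prompts for the OpenLDAP password
ldapsearch -x -W -D "cn=manager,dc=apigee,dc=com" \
  -H ldap://localhost:10389 \
  -b "cn=default,ou=pwpolicies,dc=apigee,dc=com"
```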
If pwdFailureCountInterval is set to 0, only a successful authentication can reset the counter.
If pwdFailureCountInterval is set to a value greater than 0, the attribute defines a duration after which the count of consecutive failed login attempts is automatically reset, even if no successful authentication has occurred.
If pwdLockoutDuration is set to 0, the user account will remain locked until a system administrator unlocks it. See Unlocking a user account.
If pwdLockoutDuration is set to a value greater than 0, the attribute defines a duration for which the user account will remain locked. When this time period has elapsed, the user account will be automatically unlocked.
To unlock a user:
cd /opt/apigee/edge-management-server
grep -ri "conf_security_rbac.restricted.resources" *
token/default.properties:conf_security_rbac.restricted.resources=/environments,/
environments/*,/environments/*/virtualhosts,/environments/*/virtualhosts/*,/pods,/
environments/*/servers,/rebuildindex,/users/*/status
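A minimal sketch of the unlock call, assuming the management API's /users/{user}/status endpoint shown in the grep output above (the email address is a placeholder; run as a system administrator):

```shell
# Unlock the account for user@example.com (sysadmin credentials required)
curl -X POST -u sysadminEmail:password \
  "http://localhost:8080/v1/users/user@example.com/status?action=unlock"
```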
NOTE: apigee-monit is not supported on all platforms. Before continuing, check the list of
compatible platforms.
To use apigee-monit, you must install it manually. It is not part of the standard
installation.
WARNING
apigee-monit attempts to restart a component every time the component fails a check. A
component fails a check when it is disabled, but also when you purposefully stop it, such as
during an upgrade, installation, or any task that requires you to stop a component.
As a result, you must stop monitoring the component before executing any procedure that
requires you to install, upgrade, stop, or restart a component.
Quick start
This section shows you how to quickly get up and running with apigee-monit.
If you are using Amazon Linux, first install monit from the Fedora package repository by running the following command. Otherwise, skip this step.
sudo yum install -y
https://kojipkgs.fedoraproject.org/packages/monit/5.25.1/1.el6/x86_64/monit-
5.25.1-1.el6.x86_64.rpm
Install apigee-monit
cat /opt/apigee/var/log/apigee-monit/apigee-monit.log
Each of these topics and others are described in detail in the sections that follow.
About apigee-monit
apigee-monit helps ensure that all components on a node stay up and running. It
does this by providing a variety of services, including:
apigee-monit architecture
During your Apigee Edge for Private Cloud installation and configuration, you
optionally install a separate instance of apigee-monit on each node in your
cluster. These separate apigee-monit instances operate independently of one
another: they do not communicate the status of their components to the other
nodes, nor do they communicate failures of the monitoring utility itself to any central
service.
Supported platforms
apigee-monit supports the following platforms for your Private Cloud cluster. (The
supported OS for apigee-monit depends on the version of Private Cloud.)
CentOS: 6.9*, 7.4, 7.5, 7.6, 7.7 | 7.5, 7.6, 7.7 | 7.5, 7.6, 7.7, 7.8, 7.9
RedHat Enterprise Linux (RHEL): 6.9*, 7.4, 7.5, 7.6, 7.7 | 7.5, 7.6, 7.7 | 7.5, 7.6, 7.7, 7.8, 7.9
Oracle Linux: 6.9*, 7.4, 7.5, 7.6, 7.7 | 7.5, 7.6, 7.7 | 7.5, 7.6, 7.7, 7.8
* While not technically supported, you can install and use apigee-monit on CentOS/RHEL/Oracle version 6.9 for Apigee Edge for
Component configurations
apigee-monit uses component configurations to determine which components to
monitor, which aspects of the component to check, and what action to take in the
event of a failure.
TERMINOLOGY
A component configuration specifies the service to check and the action to take in the event of a
failure. Component configurations can also define how often to check a service and the number
of times to retry a service before triggering a failure response.
By default, apigee-monit monitors all Edge components on a node using their pre-
defined component configurations. To view the default settings, you can look at the
apigee-monit component configuration files. You cannot change the default
component configurations.
Configuration
Component What is monitored
location
Postgres
Qpid
Zookeeper
The following example shows the default component configuration for the edge-
router component:
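A monit-style sketch of the general shape of such a check; the pidfile path, port, and health-check path below are illustrative, not the literal Apigee defaults:

```text
check process edge-router
    with pidfile /opt/apigee/var/run/edge-router/edge-router.pid
    start program = "/opt/apigee/apigee-service/bin/apigee-service edge-router start"
    stop program = "/opt/apigee/apigee-service/bin/apigee-service edge-router stop"
    if failed host localhost port 8081 protocol http
        request "/v1/servers/self/up"
    then restart
```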
The following example shows the default configuration for the Classic UI (edge-ui)
component:
This applies to the Classic UI, not the new Edge UI whose component name is edge-
management-ui.
You cannot change the default component configurations for any Apigee Edge for
Private Cloud component. You can, however, add your own component
configurations for external services, such as your target endpoint or the httpd
service. For more information, see Non-Apigee component configurations.
Install apigee-monit
apigee-monit is not installed by default; you can install it manually after upgrading
or installing version 4.19.01 or later of Apigee Edge for Private Cloud.
When a service stops for any reason, apigee-monit attempts to restart the
service.
This can cause a problem if you want to purposefully stop a component. For
example, you might want to stop a component when you need to back it up or
upgrade it. If apigee-monit restarts the service during the backup or upgrade, your
maintenance procedure can be disrupted, possibly causing it to fail.
TIP
To prevent apigee-monit from causing failures during a planned outage, stop monitoring the
component with the stop-component option when you perform the following operations:
● Stop a component
● Install or upgrade a component
● Restart a component
The following sections show the options for stopping the monitoring of components.
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
Note that "all" is not a valid option for stop-component. You can stop and
unmonitor only one component at a time with stop-component.
To re-start the component and resume monitoring, execute the following command:
For instructions on how to stop and unmonitor all components, see Stop all
components and unmonitor them.
To unmonitor a component (but don't stop it), execute the following command:
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
To unmonitor all components (but don't stop them), execute the following command:
To stop all components and unmonitor them, execute the following commands:
/opt/apigee/apigee-service/bin/apigee-all stop
/opt/apigee/apigee-service/bin/apigee-all start
Stop apigee-monit
NOTE: If you use cron to watch apigee-monit, you must disable it before stopping apigee-
monit, as described in Monitor apigee-monit.
Start apigee-monit
Disable apigee-monit
You can suspend monitoring all components on the node by using the following
command:
Uninstall apigee-monit
To uninstall apigee-monit:
1. If you set up a cron job to monitor apigee-monit, remove the cron job
before uninstalling apigee-monit:
2. sudo rm /etc/cron.d/apigee-monit.cron
3. Stop apigee-monit with the following command:
4. /opt/apigee/apigee-service/bin/apigee-service apigee-monit stop
5. Uninstall apigee-monit with the following command:
6. /opt/apigee/apigee-service/bin/apigee-service apigee-monit uninstall
7. Repeat this procedure on each node in your cluster.
Customize apigee-monit
You can customize the default apigee-monit control settings such as the
frequency of status checks and the locations of the apigee-monit files. You do
this by editing a properties file using the code with config technique. Properties files
will persist even after you upgrade Apigee Edge for Private Cloud.
The following table describes the default apigee-monit control settings that you
can customize:
Property    Description
conf_monit_httpd_port    The httpd daemon's port. apigee-monit uses httpd for its dashboard app and to enable reports/summaries. The default value is 2812.
conf_monit_monit_delay_time    The amount of time that apigee-monit waits after it is first loaded into memory before it runs. This affects only apigee-monit's first process check.
conf_monit_monit_rundir    The location of the PID and state files, which apigee-monit uses for checking processes.
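For example, to change the dashboard port, you might add an override to a properties file and restart apigee-monit; the filename monit.properties is an assumption here, so confirm the expected file for your release:

```shell
# Override the default httpd port used by the apigee-monit dashboard
echo "conf_monit_httpd_port=2813" >> /opt/apigee/customer/application/monit.properties
chown apigee:apigee /opt/apigee/customer/application/monit.properties
/opt/apigee/apigee-service/bin/apigee-service apigee-monit restart
```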
You can define global configuration settings for apigee-monit; for example, you
can add email notifications for alerts. You do this by creating a configuration file in
the /opt/apigee/data/apigee-monit directory and then restarting apigee-
monit.
SET MAIL-FORMAT {
from: edge-alerts@example.com
}
SET ALERT fred@example.com
You can add your own configurations to apigee-monit so that it will check
services that are not part of Apigee Edge for Private Cloud. For example, you can use
apigee-monit to check that your APIs are running by sending requests to your
target endpoint.
Note that this is for non-Edge components only. You cannot customize the
component configurations for Edge components.
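As an illustration, a monit-format check for an external API target might look like the following; the check name, hostname, and request path are hypothetical:

```text
# Check an external API endpoint over HTTPS and alert on failure
check host my-api-target with address api.example.com
    if failed
        port 443 protocol https
        request "/v1/status"
    then alert
```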
/opt/apigee/var/log/apigee-monit/apigee-monit.log
You can change the default location by customizing the apigee-monit control
settings.
You cannot customize the format of the apigee-monit log file entries.
apigee-monit includes the following commands that give you aggregated status
information about the components on a node:
Command Usage
Each of these commands is explained in more detail in the sections that follow.
report
The report command gives you a rolled-up summary of how many components are
up, down, currently being initialized, or are currently unmonitored on a node. The
following example invokes the report command:
up: 11 (100.0%)
down: 0 (0.0%)
initialising: 0 (0.0%)
unmonitored: 1 (8.3%)
total: 12 services
You may get a Connection refused error when you first execute the report
command. In this case, wait for the duration of the
conf_monit_monit_delay_time property, and then try again.
summary
The summary command lists each component and provides its status. The following
example invokes the summary command:
host_name OK System
apigee-zookeeper OK Process
apigee-cassandra OK Process
apigee-openldap OK Process
apigee-qpidd OK Process
apigee-postgresql OK Process
edge-ui OK Process
If you get a Connection refused error when you first execute the summary
command, try waiting the duration of the conf_monit_monit_delay_time
property, and then try again.
Monitor apigee-monit
Apigee recommends that you issue this command periodically on each node that is
running apigee-monit. One way to do this is with a utility such as cron that
executes scheduled tasks at pre-defined intervals.
To use cron to monitor apigee-monit:
If you want to stop or temporarily disable apigee-monit, you must disable this
cron job, too, otherwise cron will restart apigee-monit.
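A minimal sketch of such a cron entry; the exact check command is an assumption, so adapt it to your environment:

```text
# /etc/cron.d/apigee-monit.cron: every 5 minutes, restart apigee-monit if it is not running
*/5 * * * * root /opt/apigee/apigee-service/bin/apigee-service apigee-monit status >/dev/null 2>&1 || /opt/apigee/apigee-service/bin/apigee-service apigee-monit start
```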
NOTE: Postgres autovacuum is enabled by default, so you do not have to explicitly run the
VACUUM command. Autovacuum reclaims storage by removing obsolete data from the
PostgreSQL database.
NOTE: Apigee does not recommend moving the PostgreSQL database without contacting
Apigee Customer Success. The Apigee system uses the PostgreSQL database server's IP
address. As a result, moving the database or changing its IP address without performing
corresponding updates on the Apigee environment metadata will cause undesirable results.
NOTE: No pruning is required for Cassandra. Expired tokens are automatically purged after 180
days.
Childfactables are daily-partitioned fact data. Every day, new partitions are created
and data is ingested into the daily partitioned tables. Later, when the old fact data
is no longer required, you can purge the respective childfactables.
● delete-from-parent-fact. Default: No. When enabled, the script also deletes data
older than the retention period from the parent fact table.
● confirm-delete-from-parent-fact. Default: No. If No, the script prompts for
confirmation before deleting data from the parent fact table. Set to Yes if the
purge script is automated.
NOTE: If you have configured master-standby replication between Postgres servers, then you
only need to run this script on the Postgres master.
Anti-entropy maintenance
The Apache Cassandra ring nodes require periodic maintenance to ensure
consistency across all nodes. To perform this maintenance, use the following
command:
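A sketch of the repair invocation using nodetool with the -pr option discussed below; the path assumes the standard Apigee Cassandra install location, and recent releases also ship the apigee_repair wrapper mentioned later:

```shell
# Repair only this node's primary partitioner range
/opt/apigee/apigee-cassandra/bin/nodetool -h localhost repair -pr
```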
● Ref: https://support.datastax.com/hc/en-us/articles/204226329-How-to-
check-if-a-scheduled-nodetool-repair-ran-successfully
● Run during periods of relatively low workload (the tool imposes a significant
load on the system).
● Run at least every seven days in order to eliminate problems related to
Cassandra's "forgotten deletes".
● Run on different nodes on different days, or schedule it so that there are
several hours between running it on each node.
● Use the -pr option (partitioner range) to specify the primary partitioner range
of the node only.
If you enabled JMX authentication for Cassandra, you must include the username
and password when you invoke nodetool. For example:
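For example, with placeholder credentials:

```shell
# Pass the JMX username and password to nodetool when JMX authentication is enabled
/opt/apigee/apigee-cassandra/bin/nodetool -u JMX_username -pw JMX_password -h localhost repair -pr
```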
You can also run the following command to check the supported options of
apigee_repair:
NOTE: Apigee does not recommend adding, moving or removing Cassandra nodes without
contacting Apigee Edge Support. The Apigee system tracks Cassandra nodes using their IP
address, and performing ring maintenance without performing corresponding updates on the
Apigee environment metadata will cause undesirable results.
If you should find that Cassandra log files are taking up excessive space, you can
modify the amount of space allocated for log files by editing the log4j settings.
1. Edit /opt/apigee/customer/application/cassandra.properties
to set the following properties. If that file does not exist, create it:
conf_logback_maxfilesize=20MB
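The full sequence might look like the following sketch; the ownership and restart steps follow the same pattern used for other components in this guide:

```shell
# Cap each Cassandra log file at 20MB (the token is consumed on restart)
echo "conf_logback_maxfilesize=20MB" >> /opt/apigee/customer/application/cassandra.properties

# The properties file must be owned by the "apigee" user
chown apigee:apigee /opt/apigee/customer/application/cassandra.properties

# Restart Cassandra so the new logback setting takes effect
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restart
```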
Cassandra automatically performs the following operations to reduce its own disk
utilization:
/opt/apigee/apigee-service/bin/apigee-all enable_autostart
/opt/apigee/apigee-service/bin/apigee-all disable_autostart
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
The script only affects the node on which you run it. If you want to configure all
nodes for autostart, run the script on all nodes.
Note: If you run into issues where the OpenLDAP server does not start automatically on reboot,
try disabling SELinux or setting it to permissive mode.
Note that the order of starting the components is very important:
Implications:
Troubleshooting autostart
If you configure autostart, and Edge encounters issues with starting the OpenLDAP
server, you can try disabling SELinux or setting it to permissive mode on all nodes.
To configure SELinux:
● Install the Edge UI on its own node. You cannot install it on a node that
contains other Edge components, including the node on which the existing
Classic UI resides. Doing so will cause users to be unable to log in to the
Classic UI.
The Edge UI's node must meet the following requirements:
● Java 1.8
● 4 GB RAM
● 2-core CPU
● 60 GB disk space
● You must first install the 4.51.00 version of the apigee-setup utility
on the node as described at Install the Edge apigee-setup utility.
● Port 3001 must be open. This is the default port used for requests to
the Edge UI. If you change the port by using the properties described
in the new Edge UI configuration file, make sure that port is open.
● Enable an external IDP on Edge. The Edge UI supports SAML or LDAP as the
authentication mechanism.
● (SAML IDP only) The Edge UI only supports TLS v1.2. Because you connect
to the SAML IDP over TLS, your IDP must therefore support TLS v1.2.
For more on the Edge UI, see The new Edge UI for Private Cloud.
Installation overview
At a high level, the process for installing the Edge UI for Apigee Edge for Private
Cloud is as follows:
● Install the base Classic UI, called shoehorn, and configure it to use an
external IDP to authenticate with Edge. NOTE: You cannot use the installed
Classic UI to connect to Edge. It is simply a prerequisite for installing the new Edge UI.
● Install the new Edge UI, and configure it to use your external IDP
to authenticate with Edge.
The Classic UI, the default UI you installed with Apigee Edge for Private Cloud, does
not require an external IDP. It can use either an IDP or Basic authentication. That
means you can either:
● Enable external IDP support on Edge and on both the Classic UI and the Edge
UI.
In this scenario, all Classic UI and Edge UI users are registered in the IDP.
For information on adding new users to the IDP, see Register new Edge
users.
● Enable external IDP support on Edge, but leave Basic authentication enabled.
The Edge UI uses the IDP and the Classic UI still uses Basic authentication.
In this scenario, all Classic UI users log in with Basic authentication
credentials, where their credentials are stored in the Edge OpenLDAP
database. Edge UI users are registered in the IDP and log in by using either
SAML or LDAP.
However, a Classic UI user cannot log in to the Edge UI until you have added
that user to the IDP as described in Register new Edge users.
#
# Edge UI configuration.
#
MANAGEMENT_UI_SSO_CLIENT_ID=newueclient
MANAGEMENT_UI_SSO_CLIENT_SECRET=your_client_sso_secret
#
# Shoehorn UI configuration
#
# Set to http even if you enable TLS on the Edge UI.
SHOEHORN_SCHEME=http
SHOEHORN_IP=$MANAGEMENT_UI_IP
SHOEHORN_PORT=9000
#
# Edge Classic UI configuration.
# Some settings are for the Classic UI, but are still required to configure the
# Edge UI.
#
# Enable SSO
EDGEUI_SSO_ENABLED=y
#
## SSO Configuration (define external IDP)
#
# Use one of the following configuration blocks to
# define your IDP settings:
#   - SAML configuration properties
#   - LDAP Direct Binding configuration properties
#   - LDAP Indirect Binding configuration properties
Apigee refers to the technique of editing .properties files as code with config
(sometimes abbreviated as CwC). Essentially, code with config is a key/value
lookup tool based on settings in .properties files. In code with config, the keys
are referred to as tokens. Therefore, to configure Edge, you set tokens in
.properties files.
Warning: The Edge documentation contains information on how to set tokens for the different
Edge components. However, many tokens are used internally and are not meant to be updated.
Do not modify any undocumented token without first contacting Apigee Edge Support to ensure
that you do not cause any unforeseen side effects.
Code with config allows Edge components to set default values that are shipped
with the product, lets the installation team override those settings based on the
installation topology, and then lets customers override any properties they choose.
If you think of it as a hierarchy, then the settings are arranged as follows, with
customer settings having the highest priority to override any settings from the
installer team or Apigee:
1. Customer
2. Installer
3. Component
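To inspect a token's current value, a sketch of the lookup (the configure -search action is an assumption based on common apigee-service usage; verify against your installed version):

```shell
# Print the current value of a token for a given component
/opt/apigee/apigee-service/bin/apigee-service component_name configure -search token
```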
Where component_name is the name of the component, and token is the token to
inspect.
If the token's value starts with a #, it has been commented out and you must use
special syntax to change it. For more information, see Set a token that is currently
commented out.
If you do not know the entire name of the token, use a tool such as grep to search
by the property name or key word. For more information, see Locate a token.
Properties files
There are editable and non-editable component configuration files. This section
describes these files.
The following table lists the Apigee components and the properties files that you
can edit to configure those components:
Component | Component Name | Editable Configuration File
If you want to set a property in one of these component configuration files but it
does not exist, you can create it in the location listed above.
In addition, you must ensure that the properties file is owned by the "apigee" user:
chown apigee:apigee
/opt/apigee/customer/application/configuration_file.properties
Non-editable component configuration files are located under
/opt/apigee/component_name/conf, where component_name is one of the following:
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management
Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message
Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
Note: If you have not set any properties for a component, the
/opt/apigee/customer/application directory might not contain a .properties file for
the component. In that case, create one.
1. Create a new text file in an editor. The file name must match the list shown
in the table above for customer files.
2. Change the owner of the file to "apigee:apigee", as the following example
shows:
chown apigee:apigee /opt/apigee/customer/application/router.properties
3. If you changed the user that runs the Edge service from the "apigee" user,
use chown to change the ownership to the user that runs the Edge
service.
Locate a token
In most cases, the tokens you need to set are identified in this guide. However, if
you need to override the value of an existing token whose full name or location you
are unsure of, use grep to search the component's source directory.
For example, if you know that in a previous release of Edge you set the
session.maxAge property and want to know the token value used to set it, then
grep for the property in the /opt/apigee/edge-ui/source directory:
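For example, a sketch of the search (the -ri flags perform a case-insensitive, recursive search):

```shell
# Search the Edge UI source tree for the property name to find its token
grep -ri "session.maxAge" /opt/apigee/edge-ui/source
```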
/opt/apigee/component_name/source/conf/application.conf:property_name={T}token_name{/T}
The following example shows the value of the UI's session.maxAge token:
/opt/apigee/edge-ui/source/conf/application.conf:session.maxAge={T}conf_application_session.maxage{/T}
The string between the {T}{/T} tags is the name of the token that you can set in the
UI's .properties file.
To set the value of a token that is commented out in an Edge configuration file, use
special syntax in the following form:
conf/filename+propertyName=propertyValue
The grep command returns results that include the token name. Notice how the
property name is commented out, as indicated by the # prefix:
source/conf/http.properties:#HTTPClient.proxy.host={T}conf_http_HTTPClient.proxy.host{/T}
token/default.properties:conf_http_HTTPClient.proxy.host=
conf/http.properties:#HTTPClient.proxy.host=
conf/http.properties+HTTPClient.proxy.host=myhost.name.com
In this case, you must prefix the property name with conf/http.properties+.
This is the location and name of the configuration file containing the property
followed by "+".
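Putting this together, the override for a Message Processor might look like the following sketch (file name per the table of editable configuration files; ownership and restart steps as used elsewhere in this guide):

```shell
# Add the override for the commented-out property to the Message
# Processor's customer properties file:
echo "conf/http.properties+HTTPClient.proxy.host=myhost.name.com" \
  >> /opt/apigee/customer/application/message-processor.properties

# Ensure correct ownership, then restart the Message Processor:
chown apigee:apigee /opt/apigee/customer/application/message-processor.properties
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart
```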
After you restart the Message Processor, examine the file
/opt/apigee/edge-message-processor/conf/http.properties:
cat /opt/apigee/edge-message-processor/conf/http.properties
At the end of the file, you will see the property set, in the form:
conf/http.properties:HTTPClient.proxy.host=myhost.name.com
conf_application_http.proxyhost=proxy.example.com
conf_application_http.proxyport=8080
conf_application_http.proxyuser=apigee
conf_application_http.proxypassword=Apigee123!
Save the file and restart the Classic UI.
About planets, regions, pods,
organizations, environments and
virtual hosts
About Planets
A planet represents an entire Edge hardware and software environment and can
contain one or more regions. In Edge, a planet is a logical grouping of regions: you
do not explicitly create or configure a planet as part of installing Edge.
About Regions
A region is a grouping of one or more pods. By default, when you install Edge, the
installer creates a single region named "dc-1" containing three pods, as the
following table shows:
In a more complex installation, you can create two or more regions. One reason to
create multiple regions is to organize machines geographically, which minimizes
network transit time. In this scenario, you host API endpoints so that they are
geographically close to the consumers of those APIs.
In Edge, each region is referred to as a data center. A data center in the Eastern US
can then handle requests arriving from Boston, Massachusetts, while a data center
in Singapore can handle requests originating from devices or computers in Asia.
For example, the following image shows two regions, corresponding to two data
centers:
About Pods
A pod is a grouping of one or more Edge components and Cassandra datastores.
The Edge components can be installed on the same node, but are more commonly
installed on different nodes. A Cassandra datastore is a data repository used by the
Edge components in the pod.
By default, when you install Edge, the installer creates three pods and associates
the following Edge components and Cassandra datastores with each pod:
The Edge components and Cassandra datastores in the "gateway" pod are required
for API processing. These components and datastores must be up and running to
process API requests. The components and datastores in the "central" and
"analytics" pods are not required to process APIs, but add additional functionality to
Edge.
Notice that the "gateway" pod contains the Edge Router and Message Processor
components. Routers only send requests to Message Processors in the same pod
and not to Message Processors in other pods.
You can use the following API call to view server registration details at the end of
the installation for each pod. This is a useful monitoring tool.
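A sketch of the call (the Management Server API port 8080 and the ?pod query parameter are assumptions; adjust for your installation):

```shell
# List the servers registered in a pod, with their status and type
curl -u adminEmail:password "http://ms_IP:8080/v1/servers?pod=podName"
```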
Where ms_IP is the IP address or DNS name of the Management Server, and
podName is one of the following:
● gateway
● central
● analytics
[{
"externalHostName" : "localhost",
"externalIP" : "192.168.1.11",
"internalHostName" : "localhost",
"internalIP" : "192.168.1.11",
"isUp" : true,
"pod" : "gateway",
"reachable" : true,
"region" : "dc-1",
"tags" : {
"property" : [ {
"name" : "jmx.rmi.port",
"value" : "1101"
}, ... ]
},
"type" : [ "message-processor" ],
"uUID" : "276bc250-7dd0-46a5-a583-fd11eba786f8"
},
{
"internalIP" : "192.168.1.11",
"isUp" : true,
"pod" : "gateway",
"reachable" : true,
"region" : "dc-1",
"tags" : {
"property" : [ ]
},
"type" : [ "dc-datastore", "management-server", "cache-datastore", "keyvaluemap-datastore", "counter-datastore", "kms-datastore" ],
"uUID" : "13cee956-d3a7-4577-8f0f-1694564179e4"
},
{
"externalHostName" : "localhost",
"externalIP" : "192.168.1.11",
"internalHostName" : "localhost",
"internalIP" : "192.168.1.11",
"isUp" : true,
"pod" : "gateway",
"reachable" : true,
"region" : "dc-1",
"tags" : {
"property" : [ {
"name" : "jmx.rmi.port",
"value" : "1100"
}, ... ]
},
"type" : [ "router" ],
"uUID" : "de8a0200-e405-43a3-a5f9-eabafdd990e2"
}]
The type attribute lists the component type. Note that it lists the Cassandra
datastores registered in the pod. While Cassandra nodes are installed in the
"gateway" pod, you will see Cassandra datastores registered with all pods.
About Organizations
An organization is a container for all the objects in an Apigee account, including
APIs, API products, apps, and developers. An organization is associated with one or
more pods, where each pod must contain one or more Message Processors.
Note: In the default installation, where you only have a single "gateway" pod that contains all
the Message Processors, you only associate an org with the "gateway" pod. Unless you define
multiple data centers, and multiple Message Processor pods, you cannot associate an
organization with multiple pods.
1. A user who functions as the organization administrator. That user can then
add additional users to the organization, and set the role of each user.
2. The "gateway" pod, the pod containing the Message Processors.
An organization provides scope for some Apigee capabilities. For example,
key-value-map (KVM) data is available at the organization level, meaning from all
environments. Other capabilities, such as caching, are scoped to a specific
environment. Apigee analytics data is partitioned by a combination of organization
and environment.
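As an illustration of organization-level scope, a KVM created through the management API at the organization level is visible from every environment in that organization. A hedged sketch (endpoint path, port, and payload shape are assumptions about the Edge management API):

```shell
# Create an organization-scoped key-value map named "settings"
curl -u adminEmail:password -H "Content-Type: application/json" \
  -X POST "http://ms_IP:8080/v1/organizations/org_name/keyvaluemaps" \
  -d '{ "name": "settings", "entry": [ { "name": "region", "value": "dc-1" } ] }'
```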
Shown below are the major objects of an organization, including those defined
globally in the organization, and those defined specifically to an environment:
About Environments
An environment is a runtime execution context for the API proxies in an
organization. You must deploy an API proxy to an environment before it can be
accessed. You can deploy an API proxy to a single environment or to multiple
environments.
An organization can contain multiple environments. For example, you might define
a "dev", "test", and "prod" environment in an organization.
When you create an environment, you associate it with one or more Message
Processors. You can think of an environment as a named set of Message
Processors on which API proxies run. Every environment can be associated with
the same Message Processors, or with different ones.
The Message Processors assigned to an environment can all be from the same
pod, or can be from multiple pods, spanning multiple regions and data centers. For
example, you define the environment "global" in your organization that includes
Message Processors from three regions, meaning three different data centers: US,
Japan, and Germany.
Deploying an API proxy to the "global" environment causes the API proxy to run on
Message Processors in all three data centers. API traffic arriving at a Router in
any one of those data centers would be directed only to Message Processors in
that data center because Routers only direct traffic to Message Processors in the
same pod.
Ensure that the port number specified by the virtual host is open on the Router
node. You can then access an API proxy by making a request to the following URLs:
http://routerIP:port/proxy-base-path/resource-name
https://routerIP:port/proxy-base-path/resource-name
Where:
Typically, you do not publish your APIs to customers with an IP address and port
number. Instead, you define a DNS entry for the Router and port. For example:
http://myAPI.myCo.com/proxy-base-path/resource-name
https://myAPI.myCo.com/proxy-base-path/resource-name
You also must create a host alias for the virtual host that matches the domain name
of the DNS entry. From the example above, you would specify a host alias of
myAPI.myCo.com. If you do not have a DNS entry, set the host alias to the IP
address of the Router and port of the virtual host, as routerIP:port.
For more, see About virtual hosts.
This command takes as input a config file that defines a user, organization,
environment, and virtual host.
After running that script, you can access your APIs by using a URL in the form:
http://routerIP:9001/proxy-base-path/resource-name
You can later add any number of organizations, environments, and virtual hosts.
About apigee-adminapi.sh
Advantages
Invoke apigee-adminapi.sh
You invoke apigee-adminapi.sh from a Management Server node. When you
invoke the utility, you must define the following as either environment variables or
command line options:
export ADMIN_EMAIL=user@example.com
export ADMIN_PASSWORD=abcd1234
export EDGE_SERVER=192.168.56.101
/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh servers list
export ORG=testOrg
● /opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh orgs
NOTE: You typically use environment variables to set ADMIN_EMAIL and EDGE_SERVER, and
optionally ADMIN_PASSWORD. These parameters are used by most commands. The concern
with setting other params, such as ORG, by using environment variables is that the setting
might be correct for one command but could be incorrect for a subsequent command. For
example, if you forget to reset the environment variable, you might inadvertently pass the
wrong value to the next command.
If you omit any required parameters to the command, the utility displays an error
message describing the missing parameters. For example, if you omit the --host
option (which corresponds to the EDGE_SERVER environment variable),
apigee-adminapi.sh responds with the following error:
If you receive an HTTP STATUS CODE: 401 error, then you entered the wrong
password.
/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh
If you press the tab key after typing apigee-adminapi.sh, you will see the list of
possible options:
The tab key displays options based on the context of the command. If you press the
tab key after typing:
/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh orgs
You will see the possible options for completing the orgs command:
Use the -h option to display help for any command. For example, if you use the -h
option as shown below:
/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh orgs -h
The utility displays complete help information for all possible options to the orgs
command. The first item in the output shows the help for the orgs add command:
+++++++++++++++++++++++++++++++++++++++++++
orgs add
Required:
-o ORG Organization name
Optional:
-H HEADER add http header in request
--admin ADMIN_EMAIL admin email address
--pwd ADMIN_PASSWORD admin password
--host EDGE_SERVER edge server to make request to
--port EDGE_PORT port to use for the http request
--ssl set EDGE_PROTO to https, defaults to http
--debug ( set in debug mode, turns on verbose in curl )
-h Displays Help
Or, you can pass a file containing the same information as would be contained in
the request body of the POST. For example, the following command takes a file
defining the virtual host:
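A hedged sketch of such a call (the subcommand path and flag names are assumptions; confirm them with the utility's -h option):

```shell
# Create a virtual host from an XML request body stored in a file
/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh orgs envs virtual_hosts add \
  -o testOrg -e prod --host localhost --admin admin@example.com -f vhostcreate
```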
Where the file vhostcreate contains the POST body of the call. In this example, it
is an XML-formatted request body:
<VirtualHost name="myVHostUtil">
<HostAliases>
<HostAlias>192.168.56.101:9005</HostAlias>
</HostAliases>
<Interfaces/>
<Port>9005</Port>
</VirtualHost>
For example, the following command uses the --debug option. The results display
the underlying curl command's output in verbose mode:
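An illustrative invocation (any subcommand accepts --debug; the orgs list subcommand is assumed here):

```shell
# List organizations, showing the underlying curl invocation verbosely
/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh orgs list \
  --admin admin@example.com --host localhost --debug
```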
● Pods:
● Gateway pod name (default "gateway")
● Central pod name (default "central")
● Analytics pod name (default "analytics")
● Region name (default "dc-1")
● Administrative user ID and password
Other important information, such as the Apigee version number or the unique
identifiers (UUIDs) of an Apigee component, can be found as follows:
● Use the following command to display the status of all Edge components
installed on the node:
● /opt/apigee/apigee-service/bin/apigee-all status
● Use the following command to display the version numbers of all Edge
components installed on the node:
● /opt/apigee/apigee-service/bin/apigee-all version
● Use the following API call to display the UUID of installed Edge components:
● curl http://management_server_IP:port_num/v1/servers/self -u
adminEmail:password
● Where management_server_IP is the IP address or DNS name of the
Management server node, and port_num is one of the following:
● 8081 for the Router
● 8082 for the Message Processor
● 8083 for Qpid
● 8084 for Postgres
● OpenLDAP
● Apigee Edge system administrator
● Edge organization user
● Cassandra
Instructions on resetting each of these passwords are included in the sections that
follow.
1. On the Management Server node, run the following command to create the
new OpenLDAP password:
/opt/apigee/apigee-service/bin/apigee-service apigee-openldap \
4. On the Management Server node, run the following command to store the new
LDAP credentials:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server \
store_ldap_credentials -p NEW_PASSWORD
5. This command restarts the Management Server.
Note: You must set identical new passwords on both LDAP servers in order to achieve
successful OpenLDAP replication.
● Management Server
● UI
Note: You cannot reset the system admin password from the Edge UI. You must use the
procedure below to reset it.
Note: If you have enabled TLS on the Edge UI, you must also re-enable it as part of
changing the sys admin password.
Caution: You should stop the Edge UI before resetting the system admin password.
Because you reset the password first on the Management Server, there can be a short
period of time when the UI is still using the old password. If the UI makes more than
three calls using the old password, the OpenLDAP server locks out the system admin
account for three minutes.
1. Edit the silent config file that you used to install the Edge UI to set the
following properties:
APIGEE_ADMINPW=NEW_PASSWORD
SMTPHOST=smtp.gmail.com
SMTPPORT=465
SMTPUSER=foo@gmail.com
SMTPPASSWORD=bar
SMTPSSL=y
<User id="admin">
<Password><![CDATA[password]]></Password>
<FirstName>first_name</FirstName>
<LastName>last_name</LastName>
<EmailId>email_address</EmailId>
10. </User>
11. On the Management Server, execute the following command:
curl -u "admin_email_address:admin_password" -H \
12.
13. Where your_data_file is the file you created in the previous step.
Edge updates your admin password on the Management Server.
14. Delete the XML file you created. Passwords should never be permanently
stored in clear text.
Note: If you omit the admin email, then it uses the default admin email set at installation time.
If you omit the admin password, then the script prompts for it. All other parameters must be set
on the command line, or in a config file.
Shown below is an example config file that you can use with the "-f" option:
USER_NAME=user@myCo.com
USER_PWD="Foo12345"
APIGEE_ADMINPW=ADMIN_PASSWORD
You can also use the Update user API to change the user password.
You can then set password strength ratings by grouping different combinations of
regular expressions. For example, you can determine that a password with at least
one uppercase and one lowercase letter gets a strength rating of "3", but that a
password with at least one lowercase letter and one number gets a stronger rating
of "4".
Note: By default, the Edge UI auto-generates passwords that are compatible with the default
password rules defined on the Edge Management Server. If you change these rules, you should
also update the Edge UI so that it auto-generates user passwords appropriately. See
Auto-generate Edge UI passwords for more.
● Set the password on any one Cassandra node and it will be broadcast to all
Cassandra nodes in the ring
● Update the Management Server, Message Processors, Routers, Qpid servers,
and Postgres servers on each node with the new password
1. Log into any one Cassandra node using the cqlsh tool and the default
credentials. You only have to change the password on one Cassandra node
and it will be broadcast to all Cassandra nodes in the ring:
2. /opt/apigee/apigee-cassandra/bin/cqlsh cassIP 9042 -u cassandra -p
'cassandra'
3. Where:
● cassIP is the IP address of the Cassandra node.
● 9042 is the Cassandra port.
● The default user is cassandra.
● The default password is 'cassandra'. If you changed the password
previously, use the current password. If the password contains any
special characters, you must wrap it in single quotes.
4. Execute the following command at the cqlsh> prompt to update the
password:
5. ALTER USER cassandra WITH PASSWORD 'NEW_PASSWORD';
6. If the new password contains a single quote character, escape it by
preceding it with another single quote character.
7. Exit the cqlsh tool:
8. exit
9. On the Management Server node, run the following command:
10. /opt/apigee/apigee-service/bin/apigee-service edge-management-server
store_cassandra_credentials -u CASS_USERNAME -p 'CASS_PASSWORD'
11. Optionally, you can pass a file to the command containing the new username
and password:
12. apigee-service edge-management-server store_cassandra_credentials -f
configFile
13. Where the configFile contains the following:
CASS_USERNAME=CASS_USERNAME
14. CASS_PASSWORD='CASS_PASSWORD'
15. This command automatically restarts the Management Server.
16. Repeat step 9 on:
● All Message Processors
● All Routers
● All Qpid servers (edge-qpid-server)
● Postgres servers (edge-postgres-server)
Change the password on all Postgres master nodes. If you have two Postgres
servers configured in master/standby mode, then you only have to change the
password on the master node. See Set up master-standby replication for Postgres
for more.
Note: After you change the apigee and postgres user passwords in the procedure below, no
data is written to the PostgreSQL database until you complete all the other steps.
This section describes how to configure the delivered default LDAP password
policy. Use this password policy to configure various password authentication
options, such as the number of consecutive failed login attempts after which a
password can no longer be used to authenticate a user to the directory.
This section also describes how to use a couple of APIs to unlock user accounts
that have been locked according to attributes configured in the default password
policy.
For additional information, see:
● LDAP policy
● "Important information about your password policy" in the Apigee
Community
1. Connect to your LDAP server using an LDAP client, such as Apache Directory
Studio or ldapmodify. By default, the OpenLDAP server listens on port 10389 on
the OpenLDAP node.
To connect, specify the Bind DN or user of
cn=manager,dc=apigee,dc=com and the OpenLDAP password that you
set at the time of Edge installation.
2. Use the client to navigate to the password policy attributes for:
● Edge users: cn=default,ou=pwpolicies,dc=apigee,dc=com
● Edge sysadmin:
cn=sysadmin,ou=pwpolicies,dc=apigee,dc=com
3. Edit the password policy attribute values as desired.
4. Save the configuration.
If pwdFailureCountInterval is set to 0, only a successful authentication can
reset the counter.
If pwdFailureCountInterval is set to >0, the attribute defines a duration after
which the count of consecutive failed login attempts is automatically reset, even
if no successful authentication has occurred.
If pwdLockoutDuration is set to 0, the user account will remain locked until a
system administrator unlocks it.
If pwdLockoutDuration is set to >0, the attribute defines a duration for which
the user account will remain locked. When this time period has elapsed, the user
account will be automatically unlocked.
To unlock a user:
/v1/users/userEmail/status?action=unlock -X POST -u adminEmail:password
Note: To ensure that only system administrators can call this API, the /users/*/status path
is included in the conf_security_rbac.restricted.resources property for the
Management Server:
cd /opt/apigee/edge-management-server
grep -ri "conf_security_rbac.restricted.resources" *
token/default.properties:conf_security_rbac.restricted.resources=/environments,/environments/*,/environments/*/virtualhosts,/environments/*/virtualhosts/*,/pods,/environments/*/servers,/rebuildindex,/users/*/status
Note: These events are triggered when modifying users in the Edge management UI. If you use
a script to add or modify a user, no email is sent.
The events where an email is sent, and the default template file for the generated
email, are:
Note: The next installation of Edge overwrites these files, so make sure to backup any
customizations that you make.
Note: Use the SMTPMAILFROM property in the installation configuration file to set the "From"
field at install time. The SMTPMAILFROM property is required at install time. See Edge
Configuration File Reference for more.
The email subjects are customizable by editing the following properties in
/opt/apigee/customer/application/ui.properties (if that file does not
exist, create it):
● conf_apigee-base_apigee.emails.passwordreset.subject
● conf_apigee-base_apigee.emails.useraddedexisting.subject
● conf_apigee-base_apigee.emails.useraddednew.subject
conf_apigee_apigee.branding.companynameshort="My Company"
At install time, you can specify the SMTP information. To change the SMTP
information, or if you deferred SMTP configuration at the time of installation, you
can use the apigee-setup utility to modify the SMTP configuration.
1. Edit the configuration file that you used to install Edge, or create a new
configuration file with the following information:
2. # Must include the sys admin password
3. APIGEE_ADMINPW=sysAdminPW
4. # SMTP settings
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com # 0 for no username
SMTPPASSWORD=smtppwd # 0 for no password
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"
5. Run the apigee-setup utility on all the Edge UI nodes to update the SMTP
settings:
6. /opt/apigee/apigee-setup/bin/setup.sh -p ui -f configFile
7. The apigee-setup utility restarts the Edge UI.
https://domain/login
The domain is determined automatically by Edge and is typically correct for the
organization. However, when the Edge UI is behind a load balancer, there is a
chance that the automatically generated domain name is incorrect. If so, you can
use the conf_apigee_apigee.emails.hosturl property to explicitly set the
domain name portion of the generated URL.
1. Open the ui.properties file in an editor. If the file does not exist, create it:

vi /opt/apigee/customer/application/ui.properties

2. Set conf_apigee_apigee.emails.hosturl. For example:

conf_apigee_apigee.emails.hosturl="http://myCo.com"

3. Save your changes.
4. Make sure the properties file is owned by the 'apigee' user:

chown apigee:apigee /opt/apigee/customer/application/ui.properties

5. Restart the Edge UI:

/opt/apigee/apigee-service/bin/apigee-service edge-ui restart
Note: Each component writes log information to its own file. See Setting log file location for
information on log file locations.
● ALL
● DEBUG
● ERROR
● FATAL
● INFO
● OFF
● TRACE
● WARN
To set the log level for a component, edit the component's properties file to set a
token, then restart the component:
1. Open /opt/apigee/customer/application/ui.properties in an editor. If the file does not exist, create it.
2. Set the following properties in ui.properties to the desired log level. For example, to set it to DEBUG:

conf_logger_settings_application_log_level=DEBUG
conf_logger_settings_play_log_level=DEBUG

3. Save the file.
4. Ensure the properties file is owned by the "apigee" user:

chown apigee:apigee /opt/apigee/customer/application/ui.properties

5. Restart the Edge UI:

/opt/apigee/apigee-service/bin/apigee-service edge-ui restart
1. Open /opt/apigee/customer/application/component_name.properties in an editor, where component_name can be one of the following:
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
2. If the file does not exist, create it.
3. Set the following property in the properties file to the desired log level. For example, to set it to DEBUG:

conf_system_log.level=DEBUG

4. Save the file.
5. Ensure the properties file is owned by the "apigee" user:

chown apigee:apigee /opt/apigee/customer/application/component_properties_file_name.properties

6. Restart the component by using the following syntax:

/opt/apigee/apigee-service/bin/apigee-service component_name restart
For more information about the name and location of the component configuration
files, see Location of properties files.
You can enable debug logging for several Edge components. When you enable
debug logging, the component writes debug messages to its system.log file.
For example, if you enable debug logging for the Edge Router, the Router writes
debug messages to:
/opt/apigee/var/log/edge-router/logs/system.log
Note: Enabling debug logging can cause the system.log file for a component to grow very
large in size, and can affect the overall system performance. It is recommended that you enable
debug logging for only as long as necessary, and then disable it.
Use the following API call to enable debug logging for a component:
Warning: Notice that these calls do not take credentials and refer to localhost. These API
calls must be run on the server hosting the Edge components; they cannot be run remotely.
Each Edge component that supports debug logging uses a different port number in
the API call, as shown in the following table:
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
In addition to this list, Edge also logs the setup process in the
/opt/apigee/var/log/apigee-setup directory.
Use the following procedure to change the default log file location for all Edge
components except the Edge UI:
APIGEE_APP_LOGDIR=/directory/path
LOG_FILE=${APIGEE_APP_LOGDIR}/apigee-qpidd.log
Where /directory/path specifies the directory for the log files. Make sure the directory is accessible by the apigee user:

chown apigee:apigee /directory/path

Then restart the component:

/opt/apigee/apigee-service/bin/apigee-service component_name restart
For more information about the name and location of the component configuration
files, see Location of properties files.
Setting the session timeout in the Edge
UI
By default, a session in the Edge UI expires one day after login. That means Edge UI
users have to log in again after the session expires.
1. Open the ui.properties file in an editor. If the file does not exist, create it:

vi /opt/apigee/customer/application/ui.properties

2. Set conf_application_session.maxage as a time value and time unit. Time units can be:
● m, minute, minutes
● h, hour, hours
● d, day, days

For example, set conf_application_session.maxage as:

conf_application_session.maxage="2d"

3. Save your changes.
4. Restart the Edge UI:

/opt/apigee/apigee-service/bin/apigee-service edge-ui restart
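The time units above can be sketched as a small conversion helper (illustrative only; this is not part of Edge, it just shows what a value like "2d" amounts to in seconds):

```shell
# Sketch: convert a session.maxage value such as "2d" or "30m" to seconds
maxage_seconds() {
  local v="$1"
  local n="${v//[!0-9]/}"      # numeric part
  local unit="${v//[0-9]/}"    # unit part
  case "$unit" in
    m|minute|minutes) echo $((n * 60)) ;;
    h|hour|hours)     echo $((n * 3600)) ;;
    d|day|days)       echo $((n * 86400)) ;;
    *) echo "unknown unit" >&2; return 1 ;;
  esac
}
maxage_seconds "2d"   # → 172800
```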
1. Ensure that the desired port number is open on the Edge node.
2. Open /opt/apigee/customer/edge-ui.d/ui.sh in an editor. If the file and directory do not exist, create them.
3. Set the following property in ui.sh:

export JAVA_OPTS="-Dhttp.address=IP -Dhttp.port=PORT"

Where IP is the IP address of the Edge UI node, and PORT is the new port number.
4. Save the file.
5. Restart the Edge UI:

/opt/apigee/apigee-service/bin/apigee-service edge-ui restart

You can now access the Edge UI on the new port number.
UI modifications
You can make minor changes to the Edge UI to improve the experience of your
developer community. For example, you can change the following:
Changing these settings will give your developers a more cohesive experience and
help ensure that issues are routed to the proper place.
You make these changes by editing properties files using the Code with config
(CwC) technique.
You should change the default value of the "support team" contact to point to an
active resource within your organization that can provide your developers with
timely assistance.
The next time your developers experience an internal error, the "contact team" link
will send an email to your internal support organization.
conf_apigee-base_apigee.feature.enableconsentbanner="true"
conf_apigee-base_apigee.consentbanner.buttoncaption="I Agree"
The next time your developers open the Edge UI in a browser, they must accept the
consent agreement before they can log in.
The Edge UI displays the DevPortal button on the Publish > Developers page that,
when clicked, opens the portal associated with an organization. By default, that
button opens the following URL:
http://live-orgname.devportal.apigee.com
You can set this URL to a different URL, for example if your portal has a DNS record,
or disable the button completely. Use the following properties of the organization
to control the button:
You set these properties separately for each organization. To set these properties,
you first use the following API call to determine the current property settings on the
organization:
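The call itself can be sketched as a standard management API GET against the organization (ms_IP, the organization name, and the credentials are placeholders; the actual request is shown commented out):

```shell
# Sketch: fetch the organization definition from the Management Server
MS_IP="192.0.2.10"   # placeholder: your Management Server IP
ORG="orgname"        # placeholder: your organization name
URL="http://${MS_IP}:8080/v1/organizations/${ORG}"
echo "$URL"
# curl -u sysAdminEmail:sysAdminPW "$URL"
```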
{
  "createdAt" : 1428676711119,
  "createdBy" : "me@my.com",
  "displayName" : "orgname",
  "environments" : [ "prod" ],
  "lastModifiedAt" : 1428692717222,
  "lastModifiedBy" : "me@my.com",
  "name" : "orgname",
  "properties" : {
    "property" : [ {
      "name" : "foo",
      "value" : "bar"
    } ]
  },
  "type" : "paid"
}
Note any existing properties in the properties area of the object. When you set
properties on the organization, the value in properties overwrites any current
properties. Therefore, if you want to set features.devportalDisabled or
features.devportalUrl in the organization, make sure you copy any existing
properties when you set them.
curl -u sysAdminEmail:sysAdminPW -X PUT \
-H "Content-Type: application/json" \
http://ms_IP:8080/v1/organizations/orgname \
-d '{
  "displayName" : "orgname",
  "name" : "orgname",
  "properties" : {
    "property" : [
      {
        "name" : "foo",
        "value" : "bar"
      },
      {
        "name": "features.devportalUrl",
        "value": "http://dev-orgname.devportal.apigee.com/"
      },
      {
        "name": "features.devportalDisabled",
        "value": "false"
      }
    ]
  }
}'
In the PUT call, you only have to specify displayName, name, and properties.
Note that this call includes the "foo" property that was originally set on the
organization.
● The Trace tool in the Edge UI has the ability to send API requests to any
specified URL and receive the responses. In certain deployment scenarios where
Edge components are co-hosted with other internal services, a malicious user may
misuse the power of the Trace tool by making requests to private IP
addresses.
● When creating an API proxy from an OpenAPI specification, the specification
describes such elements of an API as its base path, paths and verbs,
headers, and more. As part of the spec, a malicious user can specify a base
path of the proxy that refers to a private IP address.
● When creating an API proxy from a WSDL file located on your local file
system.
For security reasons, by default, the Edge UI is prevented from referencing private
IP addresses. The list of private IP addresses includes:
If you want to enable the Edge UI to access private IP addresses, set the following
tokens:
Note: If the Apigee Routers are reachable only over the above private IP ranges, Apigee
recommends that you set the conf_apigee-
base_apigee.feature.enabletraceforinternaladdresses property to true.
1. Open the ui.properties file in an editor. If the file does not exist, create it:

vi /opt/apigee/customer/application/ui.properties

2. Set the following properties to true:

conf_apigee-base_apigee.feature.enabletraceforinternaladdresses="true"
conf_apigee-base_apigee.feature.enableopenapiforinternaladdresses="true"
conf_apigee-base_apigee.feature.enablewsdlforinternaladdresses="true"

3. Save your changes to ui.properties.
4. Make sure the properties file is owned by the 'apigee' user:

chown apigee:apigee /opt/apigee/customer/application/ui.properties

5. Restart the Edge UI:

/opt/apigee/apigee-service/bin/apigee-service edge-ui restart
1. Open the ui.properties file in an editor. If the file does not exist, create it:

vi /opt/apigee/customer/application/ui.properties

2. Set the following property to "false":

conf_apigee-base_apigee.feature.enableforcerangelimit="false"

3. Save your changes to ui.properties.
4. Restart the Edge UI:

/opt/apigee/apigee-service/bin/apigee-service edge-ui restart
To use geo aggregation, you must purchase the Maxmind GeoIp2 database that
contains this geographical information. See https://www.maxmind.com/en/geoip2-
databases for more information.
Note: Geo aggregations require that the IP address of a machine making the request to the API
proxy is in the Maxmind GeoIp2 database. Otherwise, Edge will not be able to determine the
location of the request.
● On all Qpid servers, install the MaxMind database and configure the Qpid
server to use it.
● Enable the Geo Map display in the Edge UI.
Use the following procedure to install the MaxMind database on all Edge Qpid
servers:
1. Obtain the Maxmind GeoIp2 database.
Note: Geo aggregations work by adding geographical data from a third-party database
to the analytics data collected by Edge. To use geo aggregation, you must purchase the
Maxmind GeoIp2 database that contains this information. See
https://www.maxmind.com/en/geoip2-databases for more information.
2. Create the following folder on the Qpid server node:

/opt/apigee/maxmind

3. Download the Maxmind GeoIp2 database to /opt/apigee/maxmind.
4. Change the ownership of the database file to the 'apigee' user:

chown apigee:apigee /opt/apigee/maxmind/GeoIP2-City_20160127.mmdb

5. Set the permissions on the database to 744:

chmod 744 /opt/apigee/maxmind/GeoIP2-City_20160127.mmdb

6. Set the following tokens in /opt/apigee/customer/application/qpid-server.properties. If that file does not exist, create it:

conf_ingestboot-service_vdim.geo.ingest.enabled=true
conf_ingestboot-service_vdim.geo.maxmind.db.path=/opt/apigee/maxmind/GeoIP2-City_20160127.mmdb

If you stored the Maxmind GeoIp2 database in a different location, edit the path property accordingly.
Note that this database file name contains a version number. If you later receive an updated database file, it might have a different version number. As an alternative, create a symlink to the database file and use the symlink in qpid-server.properties.
For example, create a symlink named "GeoIP2-City-current.mmdb" that points to "GeoIP2-City_20160127.mmdb". If you later receive a new database with a different file name, you only have to update the symlink rather than having to reconfigure and restart the Qpid server.
7. Change ownership of the qpid-server.properties file to the 'apigee' user:

chown apigee:apigee /opt/apigee/customer/application/qpid-server.properties

8. Restart the Qpid server:

/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart

9. Repeat this process on every Qpid node.
10. To validate that geo aggregation is working:
a. Trigger several API proxy calls on a sample API proxy.
b. Wait about 5 - 10 minutes for the aggregations to complete.
c. Open a console and connect to the Edge Postgres server:

psql -h /opt/apigee/var/run/apigee-postgresql/ -U apigee -d apigee

d. Perform a SELECT query on the table analytics.agg_geo to show the rows with geographical attributes:

select * from analytics.agg_geo;

You should see the following columns in the query results, which are extracted from the Maxmind GeoIp2 database: ax_geo_city, ax_geo_country, ax_geo_continent, ax_geo_timezone, ax_geo_region.
If the agg_geo table is not getting populated, check the Qpid server logs at /opt/apigee/var/log/edge-qpid-server/logs/ to detect any potential exceptions.
Use the following procedure to enable Geo Maps in the Edge UI:
1. Open the ui.properties file in an editor. If the file does not exist, create
it:
2. vi /opt/apigee/customer/application/ui.properties
3. Set conf_apigee-base_apigee.passwordpolicy.pwdhint. For example:
conf_apigee-base_apigee.passwordpolicy.pwdhint="Password must be 13
characters long and
The Edge UI auto generates user passwords for new users. Users are typically then
sent an email that allows them to change the auto generated password.
By default, the Edge UI auto generates passwords that are compatible with the
default password rules defined on the Edge Management Server. These rules specify
that passwords are at least eight characters long and include at least one special
character and one upper-case character. See Resetting Edge passwords for
information on how to configure Edge password rules.
You can change the password rules on the Management Server to make passwords
more strict. For example, you can increase the minimum length to 10 characters,
require multiple special characters, etc. If you change the rules on the Management
Server, you also have to change the Edge UI to auto generate passwords compatible
with the new rules.
The Edge UI defines four properties that you use to set the rules for auto-generating
passwords:
conf_apigee-base_apigee.forgotPassword.underscore.minimum="1"
conf_apigee-base_apigee.forgotPassword.specialchars.minimum="1"
conf_apigee-base_apigee.forgotPassword.lowercase.minimum="1"
conf_apigee-base_apigee.forgotPassword.uppercase.minimum="1"
NOTE: There is no property to set the minimum password length. The Edge UI auto generates
very long passwords that exceed any minimum password length that you set on the
Management Server.
1. Open the ui.properties file in an editor. If the file does not exist, create it:

vi /opt/apigee/customer/application/ui.properties

2. Set the properties:

conf_apigee-base_apigee.forgotPassword.underscore.minimum="2"
conf_apigee-base_apigee.forgotPassword.specialchars.minimum="3"
conf_apigee-base_apigee.forgotPassword.lowercase.minimum="2"
conf_apigee-base_apigee.forgotPassword.uppercase.minimum="2"

3. Save your changes.
4. Make sure the properties file is owned by the 'apigee' user:

chown apigee:apigee /opt/apigee/customer/application/ui.properties

5. Restart the Edge UI:

/opt/apigee/apigee-service/bin/apigee-service edge-ui restart
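The minimum-count rules can be illustrated with a small character-class counter (a sketch only; the Edge UI does its own counting internally, and the sample password is illustrative):

```shell
# Sketch: count occurrences of a character class in a candidate password
count_class() { printf '%s' "$1" | tr -dc "$2" | wc -c | tr -d ' '; }

pw='Ab_cd_Ef:gh'
underscores=$(count_class "$pw" '_')
lowercase=$(count_class "$pw" 'a-z')
uppercase=$(count_class "$pw" 'A-Z')
echo "$underscores $lowercase $uppercase"   # → 2 6 2
```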
By default, when a user logs out of the Edge UI, the session cookie for the user is
deleted. However, while the user is logged in, malware or other malicious software
running on the user's system could obtain the cookie and use it to access the Edge
UI. This situation is not specific to the Edge UI itself; it is a general concern with the
security of the user's system.
As an added level of security, you can configure the Edge UI to store information
about current sessions in server memory. When the user logs out, their session
information is deleted, preventing another user from using the cookie to access the
Edge UI.
Note: If the Edge UI server ever goes down, all session information stored in memory is lost and
all users must log in again after the server comes back up.
This feature is disabled by default. Before you enable this feature, your system
must meet one of the following requirements:
If your system meets these requirements, then use the following procedure to enable
the Edge UI to track user sessions in memory:
1. Open the ui.properties file in an editor. If the file does not exist, create it:

vi /opt/apigee/customer/application/ui.properties

2. Set the following properties in ui.properties:

conf_apigee_apigee.feature.expireSessionCookiesInternally="true"
conf_apigee_apigee.feature.trackSessionCookies="true"

3. Save your changes.
4. Ensure the properties file is owned by the "apigee" user:

chown apigee:apigee /opt/apigee/customer/application/ui.properties

5. Restart the Edge UI:

/opt/apigee/apigee-service/bin/apigee-service edge-ui restart
Specifically, you must configure the two UI instances to have the same value for the
following properties:
application.secret=value
mail.smtp.credential=value
apigee.mgmt.credential=value
Additionally, if you configure them to use TLS, then you must ensure that you use the
same cert and key on both instances.
3. apigee.mgmt.credential="value"
4. Open /opt/apigee/edge-ui/conf/application.conf in an editor and copy the value of the following property for later use:

application.secret="value"

5. Log in to the node hosting the second Edge UI instance.
6. Open /opt/apigee/customer/application/ui.properties on the second UI instance in an editor. If the file does not exist, create it.
7. Add the following properties to /opt/apigee/customer/application/ui.properties, including the values that you copied from the first UI instance:

conf_application_application.secret="value"
conf_apigee_mail.smtp.credential="value"
conf_apigee_apigee.mgmt.credential="value"

Notice how you prefix these values with either conf_application_ or conf_apigee_.
8. Save the file.
9. Make sure /opt/apigee/customer/application/ui.properties is owned by the "apigee" user:

chown apigee:apigee /opt/apigee/customer/application/ui.properties

10. Restart the second UI instance:

/opt/apigee/apigee-service/bin/apigee-service edge-ui restart
For example, to upgrade a Message Processor, you can use the following procedure:
NOTE: In some configurations there may be a number of Routers behind a load balancer (ELB).
The ELB might be configured to monitor port 15999 on the Routers. If the load balancer cannot
access port 15999 on a Router, it assumes the Router is down and stops forwarding requests to it.
To disable reachability on a Message Processor, you can just stop the Message
Processor:
The Message Processor first processes any pending messages before it shuts
down. Any new requests are routed to other available Message Processors.
The wait_for_ready command returns the following message when the Message
Processor is ready to process messages:
In a production environment, you typically have a load balancer in front of the Edge
Routers. Load balancers monitor port 15999 on the Routers to ensure that each
Router is available.
Configure the load balancer to perform an HTTP or TCP health check on the Router
using the following URL:
http://router_IP:15999/v1/servers/self/reachable
This URL returns an HTTP 200 response code if the Router is reachable.
To make a Router unreachable, you can block port 15999 on the Router. If the load
balancer is unable to access the Router on port 15999 it no longer forwards requests
to the Router. For example, you can block the port by using the following iptables
command on the Router node:

sudo iptables -A INPUT -p tcp --dport 15999 -j DROP

To unblock the port later, you can flush the iptables rules:

sudo iptables -F
You might be using iptables to manage other ports on the node so you have to take
that into consideration when you flush iptables or use iptables to block port 15999. If
you are using iptables for other rules, you can use the -D option to reverse the
specific change:
NOTE: As an alternative to checking reachability on the Router on port 15999, you can check the
port of an individual virtual host. For example, if you have a virtual host on port 9001, make a TCP
check on that port. To make that port unreachable, you can use an iptables command similar to the
one shown above for port 15999.
● Reachability: http://router_IP:15999/v1/servers/self/reachable
● Readiness: Determines if a Router can process customer requests for a
particular environment. For example, to check both a Router's and a Message
Processor pool's availability for an environment:

http://router_IP:15999/{org}__{env}

To get the status of a Router, make a request to port 8081 on the Router:

curl -v http://router_IP:8081/v1/servers/self/up

If the Router is up, the request returns "true" in the response and HTTP 200.
Note that this call only checks if the Router is up and running. Control of the
Router's reachability from a load balancer is determined by port 15999.
To get the status of a Message Processor:

curl http://Message_Processor_IP:8082/v1/servers/self/up
# Request buffers
# default:
# conf_load_balancing_load.balancing.driver.large.header.buffers=8 16k
# new value:
conf_load_balancing_load.balancing.driver.large.header.buffers=8 32k
# Response buffers
# default:
# conf_load_balancing_load.balancing.driver.proxy.buffer.size=64k
# new value:
conf_load_balancing_load.balancing.driver.proxy.buffer.size=128k
For conf_load_balancing_load.balancing.driver.large.header.buffers,
the first parameter specifies the number of buffers, and the second the size of each
buffer. The buffers are allocated dynamically and released after use. These settings
are used only if a request header exceeds 1 KB; for requests whose headers and
request URI are under 1 KB, the large buffers are not used.
For conf_load_balancing_load.balancing.driver.proxy.buffer.size,
specify the size of the response buffer.
The Edge Router is implemented using NGINX. For more on these properties, see:
● conf_load_balancing_load.balancing.driver.large.header.buffers
● conf_load_balancing_load.balancing.driver.proxy.buffer.size
conf/http.properties+HTTPRequest.line.limit=7k
conf/http.properties+HTTPRequest.headers.limit=25k
conf/http.properties+HTTPResponse.line.limit=2k
conf/http.properties+HTTPResponse.headers.limit=25k
Note: The property syntax is different for the Message Processor because the properties are
commented out by default.
You must restart the Message Processor after changing these properties:
TIP: You can check if IPv6 is already enabled by logging into the Router and searching for this
property, as the following example shows:

/opt/apigee/apigee-service/bin/apigee-service edge-router configure --search conf_load_balancing_load.balancing.driver.nginx.listen.endpoint.all.interfaces.enabled

[load.balancing.driver.nginx.listen.endpoint.all.interfaces.enabled=true]

Configure your Router's IP address and ports so that incoming API requests
resolve to the IPv6 address/ports. Note that incoming API requests must be
sent from IPv6-enabled clients.
Test your configuration by making an API request to your Router's IPv6
address and port.
You can configure the Router timeout when accessing Message Processors as part
of an API proxy request.
The Edge Router has a default timeout of 57 seconds when attempting to access a
Message Processor as part of handling a request through an API proxy. After that
timeout expires, the Router attempts to connect to another Message Processor, if
one is available. Otherwise, it returns an error.
Property: conf_load_balancing_load.balancing.driver.proxy.read.timeout
Description: Specifies the wait time for a single Router. The default value is 57 seconds.
You can set the time interval as something other than seconds by using the following notation:
● ms: milliseconds
● s: seconds (default)
● m: minutes
● h: hours
● d: days
● w: weeks
● M: months (length of 30 days)
For example, to set the wait time to 2 hours, you can use either of the following values:

conf_load_balancing_load.balancing.driver.proxy.read.timeout=2h   # 2 hours
conf_load_balancing_load.balancing.driver.proxy.read.timeout=120m # 120 minutes

Property: conf_load_balancing_load.balancing.driver.nginx.upstream_next_timeout
Description: Specifies the total wait time for all Message Processors when your Edge
installation has multiple Message Processors. It defaults to the current value of
conf_load_balancing_load.balancing.driver.proxy.read.timeout, or 57 seconds.
As with conf_load_balancing_load.balancing.driver.proxy.read.timeout,
you can specify time intervals other than the default (seconds).
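For example, a sketch of overriding both timeouts through the Router's token-override file (the values are illustrative; adjust them to your environment):

```properties
# In /opt/apigee/customer/application/router.properties (create it if absent)
conf_load_balancing_load.balancing.driver.proxy.read.timeout=120s
conf_load_balancing_load.balancing.driver.nginx.upstream_next_timeout=300s
```

Make sure the file is owned by the 'apigee' user and restart the Edge Router afterward.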
To set retry options, use the RetryOption property as described in virtual host
configuration properties.
To use an HTTP forward proxy between Edge and the TargetEndpoint, you must
configure the outbound proxy settings on the Message Processors (MPs). These
properties configure the MPs to route target requests from Edge to the HTTP
forward proxy.
<HTTPTargetConnection>
  <Properties>
    <Property name="use.proxy.host.header.with.target.uri">true</Property>
  </Properties>
  <URL>https://mocktarget.apigee.net/my-target</URL>
</HTTPTargetConnection>
For example:
conf_http_HTTPClient.use.proxy=true
conf_http_HTTPClient.use.tunneling=false
conf/http.properties+HTTPClient.proxy.type=HTTP
conf/http.properties+HTTPClient.proxy.host=my.host.com
conf/http.properties+HTTPClient.proxy.port=3128
conf/http.properties+HTTPClient.proxy.user=USERNAME
conf/http.properties+HTTPClient.proxy.password=PASSWORD
If forward proxying is configured for the MP, then all traffic going from API proxies to
backend targets goes through the specified HTTP forward proxy. If the traffic for a
specific target of an API proxy should go directly to the backend target, bypassing
the forward proxy, then set the following property in the TargetEndpoint to override
the HTTP forward proxy:
<Property name="use.proxy">false</Property>
To disable forward proxying for all targets by default, set the following property in
your message-processor.properties file:
conf_http_HTTPClient.use.proxy=false
Then set use.proxy to "true" for any TargetEndpoint that you want to go through an
HTTP forward proxy:
<Property name="use.proxy">true</Property>
By default Edge uses tunneling for the traffic to the proxy. To disable tunneling by
default, set the following property in the message-processor.properties file:
conf_http_HTTPClient.use.tunneling=false
If you want to disable tunneling for a specific target, then set the
use.proxy.tunneling property in the TargetEndpoint. If the target uses TLS/SSL,
then this property is ignored, and the message is always sent via a tunnel:
<Property name="use.proxy.tunneling">false</Property>
For Edge itself to act as the forward proxy - receiving requests from the backend
services and routing them to the internet outside of the enterprise - first set up an
API proxy on Edge. The backend service can then make a request to the API proxy,
which can then connect to external services.
Use the following properties to change the limits on the Router, Message Processor,
or both. Both properties have a default value of "10m" corresponding to 10MB:
● conf_http_HTTPRequest.body.buffer.limit
● conf_http_HTTPResponse.body.buffer.limit
conf_http_HTTPResponse.body.buffer.limit=15m

Save your changes, and make sure the properties file is owned by the "apigee" user.
Types of cache
● Level 1 (L1): In-memory cache, which has faster access but less available
storage capacity.
● Level 2 (L2): Persistent cache in a Cassandra data store, which has slower
access but more available storage capacity.
When a data entry in L1 cache reaches the L1 TTL, it is deleted. However, a copy of
the entry is kept in the L2 cache (which has a longer TTL than L1 cache), where it
remains accessible to other message processors. See In-memory and persistent
cache levels for more details about cache.
Max L1 TTL
In Edge for Private Cloud, you can set the maximum L1 cache TTL for each message
processor using the Max L1 TTL property
(conf_cache_max.l1.ttl.in.seconds). An entry in the L1 cache will expire
after reaching the Max L1 TTL value and be deleted.
Note: In Edge for Public Cloud, you can only set the TTL for L2 cache.
Notes:
● By default, Max L1 TTL is disabled (with the value -1), in which case the TTL
of an entry in L1 cache is determined by the PopulateCache policy's expiry
settings (for both L1 and L2 cache).
● Max L1 TTL only has an effect if its value is smaller than the overall cache
expiry.
● RPC misses: If you are noticing remote procedure call (RPC) misses between
message processors (MPs), especially across multiple data centers, it's
possible that the L1 cache has stale entries, which will remain stale until they
are deleted from L1 cache. Setting Max L1 TTL to a lower value forces any
stale entries to be removed, and replaced with fresh values from L2 cache,
sooner.
Solution: Decrease conf_cache_max.l1.ttl.in.seconds.
● Excessive load on Cassandra: When you set a value for Max L1 TTL, L1 cache
entries will expire more often, leading to more L1 cache misses and more L2
cache hits. Since L2 cache will be hit more often, Cassandra will incur an
increased load.
Solution: Increase conf_cache_max.l1.ttl.in.seconds.
As a general rule, tune Max L1 TTL to a value that balances the frequency of RPC
misses between MPs with the potential load on Cassandra.
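The interaction between the cap and the policy expiry can be sketched as follows (an illustration only, assuming the cap applies whenever it is enabled, that is, set to a value of 0 or greater):

```shell
# Sketch: the effective L1 lifetime of a cache entry
effective_l1_ttl() {
  local policy_ttl="$1"   # expiry from the PopulateCache policy, in seconds
  local max_l1="$2"       # conf_cache_max.l1.ttl.in.seconds (-1 = disabled)
  if [ "$max_l1" -lt 0 ] || [ "$policy_ttl" -le "$max_l1" ]; then
    echo "$policy_ttl"    # cap disabled, or policy expiry already smaller
  else
    echo "$max_l1"        # cap enabled and smaller than the policy expiry
  fi
}
effective_l1_ttl 600 300   # → 300 (cap wins)
effective_l1_ttl 600 -1    # → 600 (cap disabled)
```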
Alternatively, you can configure the Message Processor to write analytics data to
disk. Then, you can upload that data to your own analytics system for analysis. For
example, you could upload the data to Google Cloud BigQuery. You can then take
advantage of the powerful query and machine learning capabilities offered by
BigQuery and TensorFlow to perform your own data analysis.
You can also choose to use both options. That means you can upload the analytics
data to Qpid/Postgres and also save the data to disk.
/opt/apigee/var/log/edge-message-processor/ax/tmp
Edge creates a new directory under /tmp for the data files, at one minute intervals.
The format of the directory name is:
org~env~yyyyMMddhhmmss
For example:
myorg~prod~20190909163500
myorg~prod~20190909163600
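The directory name can be split on the tilde separator to recover the organization, environment, and timestamp; for example (bash):

```shell
# Sketch: parse an analytics directory name into its parts
dir="myorg~prod~20190909163500"
IFS='~' read -r org env ts <<<"$dir"
echo "org=$org env=$env ts=$ts"   # → org=myorg env=prod ts=20190909163500
```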
Each directory contains a .gz file with the individual data files for that interval. The
format of the .gz file name is:

4DigitRandomHex_StartTime.StartTimePlusInterval_internalHostIP_hostUUID_writer_index.txt.gz
At regular intervals, Edge moves the directory and the .gz file it contains from /tmp
to either of the following directories, based on the setting of the uploadToCloud
Message Processor configuration property:
Unzip the data from either the /staging or /failed directory to obtain the
analytics data files.
Configuration properties
Use the following properties to configure the Message Processor to write analytics
data to disk. All of these properties are optional:
Property Description
11. -u sysAdminEmail:sysAdminPWord
12. By default, the name of the analytics group is axgroup-001. In the config file
for an Edge installation, you can set the name of the analytics group by using
the AXGROUP property. If you are unsure of the name of the analytics group,
run the following command on the Management Server node to display it:
apigee-adminapi.sh analytics groups list \
This section provides the default and recommended Java heap memory sizes, as
well as the process for changing the defaults. Lastly, this section describes how to
change other JVM settings using properties files.
The following table lists the default and recommended Java heap memory sizes for
Java-based Private Cloud components:
[Table: Component Name, Properties File, Default Heap Size, and Recommended
Heap Size for the Runtime, Analytics, and Management components; see the
numbered notes below for Cassandra, Message Processor, and non-Java components.]
(1) Cassandra dynamically calculates the maximum heap size when it starts up.
Currently, this is one half the total system memory, with a maximum of 8192MB.
For information on setting the heap size, see Change heap memory size.
(2) For Message Processors, Apigee recommends that you set the heap size to
between 3GB and 6GB. Increase the heap size beyond 6GB only after conducting
performance tests. If heap usage nears the maximum limit during your
performance testing, increase the maximum limit. For information on setting the
heap size, see Change heap memory size.
(3) Not all Private Cloud components are implemented in Java. Non-Java
components run natively on the host platform, have no configurable Java heap
size, and instead rely on the host system for memory management.
To determine how much total memory Apigee recommends that you allocate to your
Java-based components on a node, add the values listed above for each component
on that node. For example, if your node hosts both the Postgres and Qpid servers,
Apigee recommends that your combined memory allocation be between 2.5GB and
4.5GB.
To change heap memory settings, edit the properties file for the component. For
example, for the Message Processor, edit the
/opt/apigee/customer/application/message-processor.properties
file.
The following table lists the properties that you edit to change heap sizes:

Property: bin_setenv_meta_space_size
Description: The default class metadata size. The default value is set to the
value of bin_setenv_max_permsize, which defaults to 128 MB. On the Message
Processor, Apigee recommends that you set this value to 256 MB or 512 MB,
depending on your traffic.
When you set heap size properties on a node, use the "m" suffix to indicate
megabytes, as the following example shows:
bin_setenv_min_mem=4500m
bin_setenv_max_mem=4500m
bin_setenv_meta_space_size=1024m
After setting the values in the properties file, restart the component:
For example:
For Java settings not controlled by the properties listed above, you can also pass
additional JVM flags or values to any Edge component. The *.properties files
are read by Bash, so values should be enclosed in single quotes (') to preserve
literal characters, or in double quotes (") if you require shell expansion.
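For example, a literal JVM flag would be single-quoted, while a value that must expand a shell variable would be double-quoted. The property names below are hypothetical, shown only to illustrate the quoting rule:

```properties
# Hypothetical property: single quotes keep the flag literal for Bash
my_component_jvm_flag='-XX:+PrintGCDetails'
# Hypothetical property: double quotes allow shell expansion of $HOME
my_component_data_dir="$HOME/apigee-data"
```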
For additional tips on configuring memory for Private Cloud components, see this
article on the Edge forums.
Endpoint:
https://monetization_server_IP/mint/config/properties/property_name?value=property_value

Request: Notes about the request that you send:
● Body: None. The property name and value are query string
parameters. Do not set a value for the body of the request.
● HTTP method: POST
● Authentication: You can use OAuth2 or pass the username and
password in the request.
● You must URL encode your property value before sending the
request.
When you call the property change API, the change applies to the server you called
only. It does not apply to other servers in the pod. You must call this API on all
servers on which you want to change the property.
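Putting the pieces together, a call might be assembled as follows. The host, credentials, and property value are placeholders taken from the endpoint form above; the value shown needs no URL encoding, but values containing special characters would:

```shell
# Assemble the property-change request (placeholder host and property value).
HOST="monetization_server_IP"
PROP="mint.invalidTscStorage.setting"
VALUE="discard"   # URL-encode values containing special characters
URL="https://$HOST/mint/config/properties/$PROP?value=$VALUE"
echo "POST $URL"
# The actual call would then be something like:
# curl -u sysAdminEmail:sysAdminPWord -X POST "$URL"
```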
Property: mint.invalidTscStorage.setting
Description: Determines whether Apigee Edge for Private Cloud stores invalid
TSC transactions. To change this property for all your rating servers, you must
send a similar API request to each server:
"https://monetization_server_IP:8080/v1/mint/properties/mint.invalidTscStorage.setting?value=discard"
Note: We recommend that you implement the steps described in this document in a test
environment and do suitable testing before deploying the changes to a production environment.
Overview
Traditionally, Apigee Edge for Private Cloud has provided optional encryption for key
value map (KVM) data and OAuth access tokens.
The following table describes the encryption options for data at rest in Apigee for
Private Cloud:
To enable encryption of client credentials, you need to perform the following tasks
on all message processor and management server nodes:
● Create a keystore to store a key encryption key (KEK). Apigee uses this
key to encrypt the secret keys needed to encrypt your data.
● Edit configuration properties on all management server and message
processor nodes.
● Create a developer app to trigger key creation.
● Restart the nodes.
The steps in this document explain how to enable the KEK feature, which allows
Apigee to encrypt the secret keys used to encrypt developer app consumer secrets
when they are stored at rest in the Cassandra database.
By default, any existing values in the database will remain unchanged (in plain text)
and will continue to work as before.
Note: Apigee provides a "fallback" option that is disabled by default. With fallback disabled
(allowFallback set to false), unencrypted entities fail; for example, unencrypted developer
app consumer secrets no longer work, causing API key verification to fail. If the
allowFallback flag is set to true, as explained in the steps that follow, unencrypted
entities continue to succeed.
If you perform any write operation on an unencrypted entity, it will be encrypted when
the operation is saved. For example, if you revoke an unencrypted token and then
later approve it, the newly approved token will be encrypted.
Be sure to store a copy of the keystore in which the KEK is stored in a safe location.
We recommend using your own secure mechanism to save a copy of the keystore.
As the instructions in this document explain, a keystore must be placed on each
message processor and management server node where the local configuration file
can reference it. But it is also important to store a copy of the keystore somewhere
else for safekeeping and as a backup.
Prerequisites
You must meet these requirements before performing the steps in this document:
● You must install or upgrade to Apigee Edge for Private Cloud 4.50.00.10 or
later.
● You must be an Apigee Edge for Private Cloud administrator.
Follow these steps to create a keystore to hold the key encryption key (KEK):
Important: It is your responsibility to keep the keystore safe. See Keeping keys safe.
1. Execute the following command to generate a keystore to store a key that will
be used to encrypt the KEK. Enter the command exactly as shown. (You can
provide any keystore name you wish):
Next, configure the management server. If you have management servers installed
on multiple nodes, you must repeat these steps on each node.
conf_keymanagement_kmscred.encryption.enabled=true
conf_keymanagement_kmscred.encryption.keystore.path=PATH_TO_KEYSTORE_FILE
conf_keymanagement_kmscred.encryption.kek.alias=KEYSTORE_NAME
conf_keymanagement_kmscred.encryption.keystore.pass=KEYSTORE_PASSWORD
conf_keymanagement_kmscred.encryption.kek.pass=KEK_PASSWORD
Note that the KEK_PASSWORD may be the same as the KEYSTORE_PASSWORD,
depending on the tool used to generate the keystore.
Restart the management server by using the following commands:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
/opt/apigee/apigee-service/bin/apigee-service edge-management-server wait_for_ready
The wait_for_ready command returns the following message when the
management server is ready:
Checking if management-server is up: management-server is up.
If you have management servers installed on multiple nodes, repeat these steps
on each management server node.
Now that the management servers are updated, you must create a developer app to
trigger generation of the key used to encrypt the client credentials data:
1. Create a developer app to trigger the creation of a data encryption key.
For steps, see Registering an app.
2. Delete the developer app if you wish. You do not need to keep it around once
the encryption key is generated.
Until encryption is enabled in the message processors, runtime requests will not be
able to process any encrypted credentials.
1. Copy the keystore file you generated in Step 1 to a directory on the message
processor node, such as /opt/apigee/customer/application. For example:
cp certs/kekstore.p12 /opt/apigee/customer/application
2. Ensure the file is readable by the apigee user:
chown apigee:apigee /opt/apigee/customer/application/kekstore.p12
3. Add the following properties to
/opt/apigee/customer/application/message-processor.properties.
If the file does not exist, create it. See also Property file reference.
conf_keymanagement_kmscred.encryption.enabled=true
conf_keymanagement_kmscred.encryption.keystore.path=PATH_TO_KEYSTORE_FILE
conf_keymanagement_kmscred.encryption.kek.alias=KEYSTORE_NAME
# These could alternately be set as environment variables. These variables should be
# accessible to Apigee user during bootup of the Java process. If environment
# variables are specified, you can skip the password configs below.
# KMSCRED_ENCRYPTION_KEYSTORE_PASS=
# KMSCRED_ENCRYPTION_KEK_PASS=
See also Using environment variables for configuration properties.
conf_keymanagement_kmscred.encryption.keystore.pass=KEYSTORE_PASSWORD
conf_keymanagement_kmscred.encryption.kek.pass=KEK_PASSWORD
Note that the KEK_PASSWORD may be the same as the KEYSTORE_PASSWORD,
depending on the tool used to generate the keystore.
Restart the message processor by using the following commands:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor wait_for_ready
The wait_for_ready command returns the following message when the
message processor is ready to process messages:
Checking if message-processor is up: message-processor is up.
If you have message processors installed on multiple nodes, repeat these steps
on each message processor node.
Summary
Any developer apps that you create from now on will have their credential secret
encrypted at rest in the Cassandra database.
You can alternatively set the following message processor and management server
configuration properties using environment variables. If set, the environment
variables override the properties set in the message processor or management
server configuration file.
conf_keymanagement_kmscred.encryption.keystore.pass=
conf_keymanagement_kmscred.encryption.kek.pass=
For example:
export KMSCRED_ENCRYPTION_KEYSTORE_PASS=KEYSTORE_PASSWORD
export KMSCRED_ENCRYPTION_KEK_PASS=KEK_PASSWORD
If you set these environment variables, you can omit these configuration properties
from the configuration files on the message processor and management server
nodes, as they will be ignored:
conf_keymanagement_kmscred.encryption.keystore.pass
conf_keymanagement_kmscred.encryption.kek.pass
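For example, the two environment variables could be exported in the apigee user's environment before the Java process starts; the values below are placeholders:

```shell
# Placeholders only; set real values in the apigee user's startup environment
# so they are visible when the Java process boots.
export KMSCRED_ENCRYPTION_KEYSTORE_PASS=KEYSTORE_PASSWORD
export KMSCRED_ENCRYPTION_KEK_PASS=KEK_PASSWORD
echo "KEK pass set: ${KMSCRED_ENCRYPTION_KEK_PASS:+yes}"
```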
This section describes the configuration properties that you must set on all message
processor and management server nodes, as explained previously in this document.
IDP authentication
UI Option: Edge UI
Description: Configure Apigee Edge for Private Cloud to use an external LDAP
or SAML IDP for accessing the management API and the Edge UI.

UI Option: Classic UI
Description: Configure an internal or external IDP to access the management
API and the Classic UI.
NOTE: SAML and LDAP are supported as authentication mechanisms only. They are not
supported for authorization. Therefore, you still use the Edge OpenLDAP database to maintain
authorization information.
Both SAML and LDAP IDPs support a single sign-on (SSO) environment. By using an
external IDP with Edge, you can support SSO for the Edge UI and API in addition to
any other services that you provide and that also support your external IDP.
The instructions in this section for enabling external IDP support differ from
those in External authentication in the following ways:
To support SAML or LDAP on Edge, you install apigee-sso, the Apigee SSO
module. The following image shows Apigee SSO in an Edge for Private Cloud
installation:
You can install the Apigee SSO module on the same node as the Edge UI and
Management Server, or on its own node. Ensure that Apigee SSO has access to the
Management Server over port 8080.
Port 9099 has to be open on the Apigee SSO node to support access to Apigee SSO
from a browser, from the external SAML or LDAP IDP, and from the Management
Server and Edge UI. As part of configuring Apigee SSO, you can specify that the
external connection uses HTTP or the encrypted HTTPS protocol.
Apigee SSO uses a Postgres database accessible on port 5432 on the Postgres
node. Typically you can use the same Postgres server that you installed with Edge,
either a standalone Postgres server or two Postgres servers configured in
master/standby mode. If the load on your Postgres server is high, you can also
choose to create a separate Postgres node just for Apigee SSO.
About SAML
With SAML enabled, access to the Edge UI and Edge management API uses OAuth2
access tokens. These tokens are generated by the Apigee SSO module, which
accepts SAML assertions returned by your IDP.
Once generated from a SAML assertion, the OAuth token is valid for 30 minutes and
the refresh token is valid for 24 hours. Your development environment might support
automation for common development tasks, such as test automation or Continuous
Integration/Continuous Deployment (CI/CD), that require tokens with a longer
duration. See Using SAML with automated tasks for information on creating special
tokens for automated tasks.
About LDAP
LDAP authentication within Apigee SSO uses the Spring Security LDAP module. As a
result, the authentication methods and configuration options for Apigee SSO's LDAP
support correspond directly to those found in Spring Security LDAP.
LDAP with Edge for the Private Cloud supports the following authentication methods
against an LDAP-compatible server:
NOTE: Edge for the Private Cloud’s LDAP authentication does not support the Search and
Compare authentication mechanism.
Apigee SSO attempts to retrieve the user's email address and update its internal
user record with it, so that a current email address is on file; Edge uses this
email address for authorization purposes.
Edge UI and API URLs
The URL that you use to access the Edge UI and Edge management API is the same
as used before you enabled SAML or LDAP. For the Edge UI:
http://edge_UI_IP_DNS:9000
https://edge_UI_IP_DNS:9000
Where edge_UI_IP_DNS is the IP address or DNS name of the machine hosting the
Edge UI. As part of configuring the Edge UI, you can specify that the connection use
HTTP or the encrypted HTTPS protocol.
http://ms_IP_DNS:8080/v1
https://ms_IP_DNS:8080/v1
Where ms_IP_DNS is the IP address or DNS name of the Management Server. As part
of configuring the API, you can specify that the connection use HTTP or the
encrypted HTTPS protocol.
By default, the connection to Apigee SSO uses HTTP over port 9099 on the node
hosting apigee-sso, the Apigee SSO module. Built into apigee-sso is a Tomcat
instance that handles the HTTP and HTTPS requests.
As part of configuring the portal, you must specify the URL of the Apigee SSO
module that you installed with Edge:
The process for installing and configuring external IDP support on Apigee Edge for
Private Cloud requires that you perform some tasks on your IDP and some on Edge.
The general process is:
In addition, the following other tasks are also optional, depending on your
environment:
When SAML is enabled, the principal (an Edge UI user) requests access to the
service provider (Apigee SSO). Apigee SSO, in its role as a SAML service provider,
then requests and obtains an identity assertion from the SAML IDP and uses that
assertion to create the OAuth2 token required to access the Edge UI. The user is then
redirected to the Edge UI.
1. The user attempts to access the Edge UI by making a request to the login URL
for the Edge UI. For example: https://edge_ui_IP_DNS:9000
2. Unauthenticated requests are redirected to the SAML IDP. For example,
"https://idp.customer.com".
3. If the user is not logged in to the identity provider, then they are prompted to
log in.
4. The user logs in.
5. The user is authenticated by the SAML IDP, which generates a SAML 2.0
assertion and returns it to the Apigee SSO.
6. Apigee SSO validates the assertion, extracts the user identity from the
assertion, generates the OAuth 2 authentication token for the Edge UI, and
redirects the user to the main Edge UI page at:
https://edge_ui_IP_DNS:9000/platform/orgName
Where orgName is the name of an Edge organization.
Edge supports many IDPs, including Okta and the Microsoft Active Directory
Federation Services (ADFS). For information on configuring ADFS for use with Edge,
see Configuring Edge as a Relying Party in ADFS IDP. For Okta, see the following
section.
To configure your SAML IDP, Edge requires an email address to identify the user.
Therefore, the identity provider must return an email address as part of the identity
assertion.
Setting: Metadata URL
Description: The SAML IDP might require the metadata URL of Apigee SSO. The
metadata URL is in the form:
protocol://apigee_sso_IP_DNS:port/saml/metadata
For example:
http://apigee_sso_IP_or_DNS:9099/saml/metadata

Setting: Assertion Consumer Service URL
Description: Can be used as the redirect URL back to Edge after the user enters
their IDP credentials, in the form:
protocol://apigee_sso_IP_DNS:port/saml/SSO/alias/apigee-saml-login-opdk
For example:
http://apigee_sso_IP_or_DNS:9099/saml/SSO/alias/apigee-saml-login-opdk

Setting: Single logout URL
Description: You can configure Apigee SSO to support single logout. See Configure
single sign-out from the Edge UI for more. The Apigee SSO single logout URL has
the form:
protocol://apigee_SSO_IP_DNS:port/saml/SingleLogout/alias/apigee-saml-login-opdk
For example:
http://1.2.3.4:9099/saml/SingleLogout/alias/apigee-saml-login-opdk

Setting: SP entity ID (or Audience URI)
Description: For Apigee SSO:
apigee-saml-login-opdk
Note: See also the following Community article: Can I use an arbitrary string as
SP entity ID while configuring Apigee Edge SSO?.
Configuring Okta
To configure Okta:
1. Log in to Okta.
2. Select Applications and then select your SAML application.
3. Select the Assignments tab to add any users to the application. These users
will be able to log in to the Edge UI and make Edge API calls. However, you
must first add each user to an Edge organization and specify the user's role.
See Register new Edge users for more.
4. Select the Sign on tab to obtain the Identity Provider metadata URL. Store that
URL because you need it to configure Edge.
5. Select the General tab to configure the Okta application, as shown in the table
below:
Setting: Single sign on URL
Description: Specifies the redirect URL back to Edge for use after the user
enters their Okta credentials. This URL is in the form:
http://apigee_sso_IP_DNS:9099/saml/SSO/alias/apigee-saml-login-opdk
https://apigee_sso_IP_DNS:9099/saml/SSO/alias/apigee-saml-login-opdk
Note that this URL is case sensitive and SSO must appear in caps.
If you have a load balancer in front of apigee-sso, then specify the IP address
or DNS name of apigee-sso as referenced through the load balancer.
The SAML settings dialog box should appear as below once you are done:
1. The user inputs an RDN attribute and password. For example, they might input
the username "alice".
2. Apigee SSO constructs the DN; for example:
dn=uid=alice,ou=users,dc=test,dc=com
3. Apigee SSO uses the statically constructed DN and supplied password to
attempt a bind to the LDAP server.
4. If successful, Apigee SSO returns an OAuth token that the client can attach to
their requests to the Edge services.
Simple binding provides the most secure installation because no LDAP credentials or
other data are exposed through configuration to Apigee SSO. The administrator can
configure one or more DN patterns in Apigee SSO to be tried for a single username
input.
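Because Apigee SSO's LDAP support follows Spring Security LDAP, the DN construction above can be pictured as simple pattern substitution. The {0} placeholder convention comes from Spring Security LDAP user-DN patterns; the pattern string and username here are illustrative, not an actual Apigee configuration value:

```shell
# Sketch of how a configured DN pattern yields a bind DN from a username.
PATTERN='uid={0},ou=users,dc=test,dc=com'   # illustrative pattern
USERNAME='alice'
DN=${PATTERN//\{0\}/$USERNAME}
echo "$DN"
# uid=alice,ou=users,dc=test,dc=com
```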
1. The user inputs an RDN, such as a username or email address, plus their
password.
2. Apigee SSO performs a search using an LDAP filter and a set of known search
credentials.
3. If there is exactly one match, Apigee SSO retrieves the user's DN. If there are
zero matches or more than one match, Apigee SSO rejects the user.
4. Apigee SSO then attempts to bind the user's DN and supplied password
against the LDAP server.
5. The LDAP server performs the authentication.
6. If successful, Apigee SSO returns an OAuth token that the client can attach to
their requests to the Edge services.
Apigee recommends that you use a set of read-only admin credentials that you make
available to Apigee SSO to perform a search on the LDAP tree where the user
resides.
NOTE: By default, the Apigee SSO module is accessible over HTTP on port 9099 of the node on
which it is installed. You can enable TLS on the Apigee SSO module. To do so, you have to create
a third set of TLS keys and certificates used by Tomcat to support TLS. See Configure Apigee
SSO for HTTPS access for more.
To create the key and self-signed cert, with no passphrase, for communicating with
the IDP:
1. As a sudo user, make a new directory:
sudo mkdir -p /opt/apigee/customer/application/apigee-sso/idp/
2. Change to the new directory:
cd /opt/apigee/customer/application/apigee-sso/idp/
3. Generate your private key with a passphrase:
sudo openssl genrsa -aes256 -out server.key 1024
4. Remove the passphrase from the key:
sudo openssl rsa -in server.key -out server.key
5. Generate a certificate signing request for the CA:
sudo openssl req -x509 -sha256 -new -key server.key -out server.csr
6. Generate a self-signed certificate with a 365-day expiry:
sudo openssl x509 -sha256 -days 365 -in server.csr -signkey server.key -out selfsigned.crt
7. Change the owner of the key and certificate files to the apigee user:
sudo chown apigee:apigee server.key
sudo chown apigee:apigee selfsigned.crt
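The resulting pair can be sanity-checked with openssl. The following throwaway example condenses the steps above in a temp directory; the 2048-bit key size and test subject are illustrative, not part of the documented procedure:

```shell
# Generate a throwaway key and self-signed cert, then inspect the cert subject.
cd "$(mktemp -d)"
openssl genrsa -out server.key 2048
openssl req -x509 -sha256 -new -key server.key -subj "/CN=sso-test" -days 365 -out selfsigned.crt
openssl x509 -noout -subject -in selfsigned.crt
```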
IP1=hostname_or_IP_of_apigee_SSO
IP2=hostname_or_IP_of_apigee_SSO
## Postgres configuration.
# Postgres IP address and port
PG_HOST=$IP1
PG_PORT=5432
# Postgres username and password as set when you installed Edge.
# If these credentials change, they must be updated and setup rerun.
PG_USER=apigee
PG_PWD=postgres
# Default port is 9099. If changing, set both properties to the same value.
SSO_PUBLIC_URL_PORT=9099
SSO_TOMCAT_PORT=9099
# Uncomment the following line to set specific SSL cipher using OpenSSL syntax
# SSL_CIPHERS=HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!kRSA
# Set Tomcat TLS mode to DEFAULT to use HTTP access to apigee-sso.
SSO_TOMCAT_PROFILE=DEFAULT
SSO_PUBLIC_URL_SCHEME=http
# Path to signing key and secret from Create the TLS keys and certificates above.
SSO_JWT_SIGNING_KEY_FILEPATH=/opt/apigee/customer/application/apigee-
sso/jwt-keys/privkey.pem
SSO_JWT_VERIFICATION_KEY_FILEPATH=/opt/apigee/customer/application/
apigee-sso/jwt-keys/pubkey.pem
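One way to produce the RSA pair referenced by SSO_JWT_SIGNING_KEY_FILEPATH and SSO_JWT_VERIFICATION_KEY_FILEPATH is with openssl; the key size and working directory below are assumptions, and the resulting files would then be copied to /opt/apigee/customer/application/apigee-sso/jwt-keys/:

```shell
# Generate a signing (private) and verification (public) key pair
# for JWT signing, in a temp directory for illustration.
cd "$(mktemp -d)"
openssl genrsa -out privkey.pem 2048
openssl rsa -in privkey.pem -pubout -out pubkey.pem
```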
###########################################################
# Define External IDP #
# Use one of the following configuration blocks to #
# define your IDP settings: #
# - SAML configuration properties #
# - LDAP Direct Binding configuration properties #
# - LDAP Indirect Binding configuration properties #
###########################################################
# Configure an SMTP server so that the Apigee SSO module can send emails to users
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
# omit for no username
SMTPPASSWORD=smtppwd
# omit for no password
SMTPSSL=n
SMTPPORT=25
# The address from which emails are sent
SMTPMAILFROM="My Company <myco@company.com>"
If you are using SAML for your IDP, use the following block of configuration
properties in your configuration file (defined above):
# SAML service provider key and cert from Create the TLS keys and certificates
above.
SSO_SAML_SERVICE_PROVIDER_KEY=/opt/apigee/customer/application/apigee-
sso/saml/server.key
SSO_SAML_SERVICE_PROVIDER_CERTIFICATE=/opt/apigee/customer/
application/apigee-sso/saml/selfsigned.crt
# The passphrase used when you created the SAML cert and key.
# The section "Create the TLS keys and certificates" above removes the passphrase,
# but this property is available if you require a passphrase.
# SSO_SAML_SERVICE_PROVIDER_PASSWORD=samlSP123
If you are using LDAP direct binding for your IDP, use the following block of
configuration properties in your configuration file, as shown in the example above:
# Attribute name used by the LDAP server to refer to the user's email address; for
example, "mail"
SSO_LDAP_MAIL_ATTRIBUTE=LDAP_email_attribute
If you are using LDAP indirect binding for your IDP, use the following block of
configuration properties in your configuration file, as shown in the example above:
# Attribute name used by the LDAP server to refer to the user's email address; for
example, "mail"
SSO_LDAP_MAIL_ATTRIBUTE=LDAP_email_attribute
1. Log in to the Management Server node. That node should already have
apigee-service installed, as described in Install the Edge apigee-setup
utility.
Alternatively, you can install the Apigee SSO module on a different node.
However, that node must be able to access the Management Server over port
8080.
2. Install and configure apigee-sso by executing the following command:
/opt/apigee/apigee-setup/bin/setup.sh -p sso -f configFile
Where configFile is the configuration file that you defined above.
3. Install the apigee-ssoadminapi.sh utility used to manage admin and
machine users for the apigee-sso module:
/opt/apigee/apigee-service/bin/apigee-service apigee-ssoadminapi install
4. Log out of the shell, and then log back in again to add the
apigee-ssoadminapi.sh utility to your path.
1. Copy the contents of the metadata XML from your IDP to a file on the Apigee
SSO node. For example, copy the XML to:
/opt/apigee/customer/application/apigee-sso/saml/metadata.xml
NOTE: The file must be named metadata.xml.
2. Change ownership of the XML file to the "apigee" user:
chown apigee:apigee /opt/apigee/customer/application/apigee-sso/saml/metadata.xml
3. Set the value of SSO_SAML_IDP_METADATA_URL to the absolute file path:
SSO_SAML_IDP_METADATA_URL=file:///opt/apigee/customer/application/apigee-sso/saml/metadata.xml
You must prefix the file path with "file://", followed by the absolute path
from root (/).
NOTE: This section applies to using an external IDP with the new Edge UI. It is not intended for
users of the Classic UI.
After installing the Apigee SSO module, you must configure the New Edge UI to
support your external IDP.
Note: Installing SSO does not depend on whether you are using the Classic UI or the New UI.
However, the New Edge UI requires that SSO be installed.
After you enable an external IDP on the Edge UI, the Edge UI uses OAuth to connect
to the Management Server. You do not have to worry about expiring OAuth tokens
between the Edge UI and the Edge Management Server. Expired tokens are
automatically refreshed.
NOTE: Automatic refresh of tokens only occurs for the token used by the Edge UI to make calls to
the Management Server. Tokens generated for users when they log in to the Edge UI expire in
one hour, and the refresh token expires in 12 hours.
IP1=hostname_or_IP_of_apigee_SSO
# Comma separated list of URLs for the Edge UI,
# in the format: http_or_https://IP_or_hostname_of_UI:3001.
# You can have multiple URLs when you have multiple installations
# of the Edge UI or you have multiple data centers.
EDGEUI_PUBLIC_URIS=http_or_https://IP_or_hostname_of_UI:3001
# Required variables
# Default is "n" to disable external IDP support.
EDGEUI_SSO_ENABLED=y
# If set, the existing EDGEUI client is deleted and new one is created.
# The default value is "n".
# Set to "y" when you configure your external IDP and change the value of
# any of the EDGEUI_* properties.
EDGEUI_SSO_CLIENT_OVERWRITE=y
To configure the Edge UI to enable support for your external IDP:
To later change these values, update the configuration file and execute the
command again.
SSO_TOMCAT_PROFILE=DEFAULT
Enabling HTTPS access to apigee-sso using either mode disables HTTP access.
That is, you cannot access apigee-sso using both HTTP and HTTPS concurrently.
NOTE: If you configure HTTPS access to apigee-sso, ensure that you update any URLs used by
the Edge UI and by your IDP to use HTTPS. Also, if you choose to enable HTTPS on a port other
than 9099, make sure you update the UI and IDP to use the correct port number.
In this configuration, you trust the connection from the load balancer to the Apigee
SSO module, so there is no need to use TLS for that connection. However, external
entities, such as the IDP, must now access the Apigee SSO module on port 443, not
on the unprotected port 9099.
The reason to configure the Apigee SSO module in SSL_PROXY mode is that the
Apigee SSO module auto-generates redirect URLs used externally by the IDP as part
of the authentication process. Therefore, these redirect URLs must contain the
external port number on the load balancer, 443 in this example, and not the internal
port on the Apigee SSO module, 9099.
NOTE: You do not have to create a TLS cert and key for SSL_PROXY mode because the
connection from the load balancer to the Apigee SSO module uses HTTP.
# Specify the port number on the load balancer for terminating TLS.
# This port number is necessary for apigee-sso to auto-generate redirect URLs.
SSO_TOMCAT_PROXY_PORT=443
SSO_PUBLIC_URL_PORT=443
SSO_PUBLIC_URL_SCHEME=https
Then:
1. Configure the Apigee SSO module:
/opt/apigee/apigee-service/bin/apigee-service apigee-sso setup -f configFile
2. Update your IDP configuration to make an HTTPS request on port 443 of
the load balancer to access Apigee SSO. For more information, see one of the
following:
● Configure your SAML IDP
● Configure your LDAP IDP
3. Update your Edge UI configuration for HTTPS by setting the following
properties in the config file:
SSO_PUBLIC_URL_PORT=443
SSO_PUBLIC_URL_SCHEME=https
4. Then update the Edge UI:
/opt/apigee/apigee-service/bin/apigee-service edge-management-ui configure-sso -f configFile
Use the edge-ui component for the Classic UI.
5. If you installed the Apigee Developer Services portal (or simply, the portal),
update it to use HTTPS to access Apigee SSO. For more information, see
Configure the portal to use external IDPs.
● Generate a TLS cert and key and store them in a keystore file. You cannot use
a self-signed certificate. You must generate a cert from a CA.
● Update the configuration settings for apigee-sso.
SSO_TOMCAT_KEYSTORE_ALIAS=sso
SSO_PUBLIC_URL_SCHEME=https
If you installed the Developer Services portal, update it to use HTTPS to
access Apigee SSO. For more, see Configure the portal to use external IDPs.
You might have a load balancer in front of the Apigee SSO module that terminates
TLS on the load balancer but also enables TLS between the load balancer and
Apigee SSO. In the figure above for SSL_PROXY mode, this means the connection
from the load balancer to Apigee SSO uses TLS.
In this scenario, you configure TLS on Apigee SSO just as you did above for
SSL_TERMINATION mode. However, if the load balancer uses a different TLS port
number than Apigee SSO uses for TLS, then you must also specify the
SSO_TOMCAT_PROXY_PORT property in the config file. For example:
# Specify the port number on the load balancer for terminating TLS.
# This port number is necessary for apigee-sso to generate redirect URLs.
SSO_TOMCAT_PROXY_PORT=443
SSO_PUBLIC_URL_PORT=443
Then configure the IDP and Edge UI to make HTTPS requests on port 443.
● Make sure you have added all Edge users, including system administrators, to
your IDP.
● Make sure you have thoroughly tested your IDP authentication on the Edge UI
and Edge management API.
● If you are using the Apigee Developer Services portal (or simply, the portal),
configure and test your external IDP on the portal to ensure that the portal can
connect to Edge. See Configuring the portal for external IDPs.
If you have not yet configured an external IDP, the response is as shown below,
meaning Basic authentication is enabled:
If you have already enabled an external IDP, the <SSOServer> element should
appear in the response:
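The response body itself is missing from this extract. Based on the payload shown in the re-enable call later in this section, a response with an external IDP enabled resembles the following sketch (the ServerUrl value is illustrative):

```xml
<SecurityProfile enabled="true" name="securityprofile">
  <UserAccessControl enabled="true">
    <SSOServer>
      <BasicAuthEnabled>true</BasicAuthEnabled>
      <PublicKeyEndPoint>/token_key</PublicKeyEndPoint>
      <ServerUrl>http://edge_sso_IP_DNS:9099</ServerUrl>
    </SSOServer>
  </UserAccessControl>
</SecurityProfile>
```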
Notice that the version with an external IDP enabled also shows
<BasicAuthEnabled>true</BasicAuthEnabled>, which means that Basic
authentication is still enabled.
CAUTION: You can't revert or undo this action. After you disable Basic authentication, to enable
authentication again, see Re-enable Basic authentication.
To disable basic authentication, you pass the XML object returned in the previous
section as the payload. The only difference is that you set <BasicAuthEnabled>
to false, as the following example shows:
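A sketch of this call, modeled on the re-enable example later in this section (the hostname, credentials, and ServerUrl are placeholders):

```shell
# Disable Basic authentication by posting the security profile
# with <BasicAuthEnabled> set to false.
curl -H "Content-Type: application/xml" http://localhost:8080/v1/securityprofile \
  -u sysAdminEmail:pWord \
  -d '<SecurityProfile enabled="true" name="securityprofile">
  <UserAccessControl enabled="true">
    <SSOServer>
      <BasicAuthEnabled>false</BasicAuthEnabled>
      <PublicKeyEndPoint>/token_key</PublicKeyEndPoint>
      <ServerUrl>http://edge_sso_IP_DNS:9099</ServerUrl>
    </SSOServer>
  </UserAccessControl>
</SecurityProfile>'
```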
After you disable Basic authentication, any Edge management API call that passes
Basic authentication credentials returns the following error:
<Error>
<Code>security.SecurityProfileBasicAuthDisabled</Code>
<Message>Basic Authentication scheme not allowed</Message>
<Contexts/>
</Error>
CAUTION: As part of re-enabling Basic authentication, you must temporarily disable all
authentication on Edge, including external IDP authentication.
#!/bin/bash
/opt/apigee/apigee-zookeeper/bin/zkCli.sh -server localhost:2181 <<EOF
set /system/securityprofile <SecurityProfile></SecurityProfile>
quit
EOF
4. You will see output in the form:
Connecting to localhost:2181
Welcome to ZooKeeper!
JLine support is enabled
WATCHER::
WatchedEvent state:SyncConnected
type:None path:null
[zk: localhost:2181(CONNECTED) 0]
set /system/securityprofile <SecurityProfile></SecurityProfile>
cZxid = 0x89
...
[zk: localhost:2181(CONNECTED) 1] quit
Quitting...
6. Re-enable Basic authentication and the external IDP authentication:
curl -H "Content-Type: application/xml" http://localhost:8080/v1/securityprofile \
  -u sysAdminEmail:pWord \
  -d '<SecurityProfile enabled="true" name="securityprofile">
  <UserAccessControl enabled="true">
    <SSOServer>
      <BasicAuthEnabled>true</BasicAuthEnabled>
      <PublicKeyEndPoint>/token_key</PublicKeyEndPoint>
      <ServerUrl>http://35.197.37.220:9099</ServerUrl>
    </SSOServer>
  </UserAccessControl>
</SecurityProfile>'
You can now use Basic authentication again. Note that Basic authentication does
not work when the New Edge experience is enabled.
The portal is always associated with a single Edge organization. When you configure
the portal, you can specify the basic authentication credentials (username and
password) for an account in the organization that the portal uses to communicate
with Edge.
If you choose to enable an external IDP such as SAML or LDAP for Edge
authentication, you can then configure the portal to use that authentication when
making requests to Edge. Configuring the portal to use an external IDP automatically
creates a new machine user account in the Edge organization that the portal then
uses to make requests to Edge. For more on machine users, see Automate tasks for
external IDPs.
External IDP support for the portal requires that you have already installed and
configured the Apigee SSO module on the Edge Management Server node. The
general process for enabling an external IDP for the portal is as follows:
1. Install the Apigee SSO module, as described in Install Apigee SSO for high
availability.
NOTE: Basic authentication must still be enabled on Edge to install the portal. Do not disable Basic authentication on Edge until after you have configured and tested the portal with an external IDP.
2. Install the portal and ensure that your installation is working properly. See
Install the portal.
3. Configure SAML or LDAP on the portal, as described in this section.
4. (Optional) Disable Basic authentication on Edge, as described in Disable Basic
authentication on Edge.
When an external IDP is enabled, Edge supports automated OAuth2 token generation
through the use of machine users. A machine user can obtain OAuth2 tokens without
having to specify a passcode. That means you can completely automate the process
of obtaining and refreshing OAuth2 tokens.
The IDP configuration process for the portal automatically creates a machine user in
the organization associated with the portal. The portal then uses this machine user
account to connect to Edge. For more on machine users, see Automate tasks for
external IDPs.
When you configure the portal to use an external IDP, you enable the portal to use
either SAML or LDAP to authenticate with Edge so that the portal can make requests
to Edge. However, the portal also supports a type of user called developers.
Developers make up the community of users that build apps by using your APIs. App
developers use the portal to learn about your APIs, to register apps that use your
APIs, to interact with the developer community, and to view statistical information
about their app usage on a dashboard.
When a developer logs in to the portal, it is the portal that is responsible for
authenticating the developer and for enforcing role-based permissions. The portal
continues to use Basic authentication with developers even after you enable IDP
authentication between the portal and Edge. For more information, see
Communicating between the portal and Edge.
To configure an external IDP for the portal, you must create a configuration file that
defines the portal's settings.
The following example shows a portal configuration file with IDP support:
# IP address of the Edge Management Server and the node on which the Apigee SSO module is installed.
IP1=22.222.22.222
MGMT_URL=http://$IP1:8080/v1
EDGE_ORG=myorg
SSO_PUBLIC_URL_HOSTNAME=$IP1
SSO_PUBLIC_URL_PORT=9099
SSO_PUBLIC_URL_SCHEME=http
SSO_ADMIN_NAME=ssoadmin
SSO_ADMIN_SECRET=Secret123
DEVPORTAL_SSO_ENABLED=y
PORTALCLI_SSO_CLIENT_NAME=portalcli
PORTALCLI_SSO_CLIENT_SECRET=Abcdefg@1
# Email address and user info for the machine user created in the Edge org specified
# above by EDGE_ORG.
# Add this email as an org admin before configuring the portal to use an external IDP.
DEVPORTAL_ADMIN_EMAIL=DevPortal_SAML@google.com
DEVPORTAL_ADMIN_FIRSTNAME=DevPortal
DEVPORTAL_ADMIN_LASTNAME=SAMLAdmin
DEVPORTAL_ADMIN_PWD=Abcdefg@1
# If set to "y", the existing portal OAuth client is deleted and a new one is created.
# Set to "y" when you first configure the external IDP or when you change the value of
# PORTALCLI_SSO_CLIENT_SECRET.
PORTALCLI_SSO_CLIENT_OVERWRITE=y
To later change these values, update the configuration file and execute this
procedure again.
CAUTION: If you disable your external IDP, you should then reconfigure the portal to either use
SAML or LDAP or, if Edge is still configured to support Basic authentication, configure the portal
to communicate with Edge using Basic authentication.
For more information on using Basic authentication, see Communicating between the portal and
Edge.
1. Open the configuration file that you used previously to enable the external IDP.
2. Set the value of the DEVPORTAL_SSO_ENABLED property to n, as the
following example shows:
DEVPORTAL_SSO_ENABLED=n
4. Configure the portal by executing the following command:
/opt/apigee/apigee-service/bin/apigee-service apigee-drupal-devportal configure-sso -f configFile
If you are using HTTP access to Apigee SSO, then configure the load balancer to:
For example, on each Apigee SSO instance you set the port to 9033 by adding the
following to the config file:
SSO_TOMCAT_PORT=9033
You then configure the load balancer to listen on port 9033 and forward requests
to an Edge SSO instance on port 9033. The public URL of Apigee SSO in this scenario
is:
http://LB_DNS_NAME:9033
You can configure the Apigee SSO instances to use HTTPS. In this scenario, follow
the steps in Configure Apigee SSO for HTTPS access. As part of the process of
enabling HTTPS, you set SSO_TOMCAT_PROFILE in the Apigee SSO config file as
shown below:
SSO_TOMCAT_PROFILE=SSL_TERMINATION
You can also optionally set the port used by Apigee SSO for HTTPS access:
SSO_TOMCAT_PORT=9443
Then configure the load balancer to:
You then configure the load balancer to forward requests to an Apigee SSO instance
on port 9443. The public URL of Apigee SSO in this scenario is:
https://LB_DNS_NAME:9443
Before you install Apigee SSO in two data centers, you need the following:
When you install Apigee SSO in each data center, you configure both to use the
Postgres Master in data center 1:
## Postgres configuration
PG_HOST=IP_or_DNS_of_PG_Master_in_DC1
PG_PORT=5432
You also configure both data centers to use the DNS entry as the publicly accessible
URL:
PG_HOST=IP_or_DNS_of_PG_Master_in_DC2
6. Restart Apigee SSO in data center 2 to update its configuration:
/opt/apigee/apigee-service/bin/apigee-service apigee-sso restart
Edge supports automated token generation through the use of machine users within
organizations that have an IDP enabled. A machine user can obtain OAuth2 tokens
without having to specify a passcode. That means you can completely automate the
process of obtaining and refreshing OAuth2 tokens by using the Edge management
API.
There are two ways to create a machine user for an IDP-enabled organization:
● Using apigee-ssoadminapi.sh
● Using the management API
NOTE: You can only create machine users for organizations that use an external IDP, such as
LDAP or SAML. You cannot create machine users for organizations that are not IDP-enabled.
The machine user is created and stored in the Edge datastore, not in your IDP.
Therefore, you are responsible for maintaining the machine user by using the
Edge UI and Edge management API.
When you create the machine user, you must specify an email address and
password. After creating the machine user, you assign it to one or more
organizations.
-u machine_user_email -p machine_user_password
Where:
● SSO_ADMIN_NAME is the admin username defined by the
SSO_ADMIN_NAME property in the configuration file used to configure
the Apigee SSO module. The default is ssoadmin.
● SSO_ADMIN_SECRET is the admin password as specified by the
SSO_ADMIN_SECRET property in the config file.
In this example, you can omit the values for --port and --ssl
because the apigee-sso module uses the default values of 9099 for
--port and http for --ssl. If your installation does not use these
defaults, specify them as appropriate.
4. Log in to the Edge UI and add the machine user's email to your organizations,
and assign the machine user to the necessary role. See Adding global users
for more.
NOTE You can only create machine users for organizations that use an external IDP. You cannot
create machine users for organizations that do not use an external IDP.
--data-urlencode "client_id=ssoadmin"
3. Where SSO_ADMIN_SECRET is the admin password you set when you
installed apigee-sso as specified by the SSO_ADMIN_SECRET property in
the config file.
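The full token request is not shown in this extract. A plausible form, assuming a standard OAuth2 client_credentials grant for the ssoadmin client (the hostname is a placeholder):

```shell
# Request an admin token from the Apigee SSO OAuth endpoint.
# client_secret is the SSO_ADMIN_SECRET value from the config file.
curl "http://edge_sso_IP_DNS:9099/oauth/token" -i -X POST \
  -H "Accept: application/json" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "client_id=ssoadmin" \
  --data-urlencode "client_secret=SSO_ADMIN_SECRET" \
  --data-urlencode "grant_type=client_credentials"
```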
This command displays a token that you need to make the next call.
4. Use the following curl command to create the machine user, passing the
token that you received in the previous step:
curl "http://edge_sso_IP_DNS:9099/Users" -i -X POST \
  -H "Accept: application/json" -H "Content-Type: application/json" \
  -H "Authorization: Bearer token" \
  -d '{"userName": "machine_user_email", "name":
  {"formatted": "DevOps", "familyName": "last_name", "givenName": "first_name"},
  "emails": [{"value": "machine_user_email", "primary": true}],
  "active": true, "verified": true, "password": "machine_user_password"}'
1. Use the following API call to generate the initial access and refresh tokens:
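The call itself is missing from this extract. A sketch modeled on the refresh call below, assuming a password grant for the machine user (hostname and credentials are placeholders):

```shell
# Generate initial access and refresh tokens for the machine user.
curl -H "Content-Type: application/x-www-form-urlencoded;charset=utf-8" \
  -H "Accept: application/json;charset=utf-8" \
  -H "Authorization: Basic ZWRnZWNsaTplZGdlY2xpc2VjcmV0" -X POST \
  http://edge_sso_IP_DNS:9099/oauth/token \
  -d 'grant_type=password&username=machine_user_email&password=machine_user_password'
```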
2. Use the tokens to call the Edge management API; for example:
http://MS_IP_DNS:8080/v1/organizations/org_name
Where org_name is the name of the organization containing the machine user.
3. To later refresh the access token, use the following call that includes the refresh token:
curl -H "Content-Type: application/x-www-form-urlencoded;charset=utf-8" \
  -H "Accept: application/json;charset=utf-8" \
  -H "Authorization: Basic ZWRnZWNsaTplZGdlY2xpc2VjcmV0" -X POST \
  http://edge_sso_IP_DNS:9099/oauth/token \
  -d 'grant_type=refresh_token&refresh_token=refreshToken'
NOTE: For authorization, pass the reserved OAuth2 client credential, Basic ZWRnZWNsaTplZGdlY2xpc2VjcmV0, in the Authorization header.
When SAML is enabled, the principal (an Edge UI user) requests access to the
service provider (Apigee SSO). Apigee SSO (in its role as a SAML service provider)
then requests and obtains an identity assertion from the SAML IDP and uses that
assertion to create the OAuth2 token required to access the Edge UI. The user is then
redirected to the Edge UI.
1. The user attempts to access the Edge UI by making a request to the login URL
for the Edge UI. For example: https://edge_UI_IP_DNS:9000
2. Unauthenticated requests are redirected to the SAML IDP. For example,
"https://idp.customer.com".
3. If you are not logged in to the IDP, you are prompted to log in.
4. You are authenticated by the SAML IDP.
The SAML IDP generates and returns a SAML 2.0 assertion to the Apigee SSO
module.
5. Apigee SSO validates the assertion, extracts the user identity from the
assertion, generates the OAuth 2 authentication token for the Edge UI, and
redirects the user to the main Edge UI page at the following URL:
https://edge_ui_IP_DNS:9000/platform/orgName
Where orgName is the name of an Edge organization.
However, you might want a logout from any one service to sign the user out of all
services. In this case, you can configure your IDP to support single sign-out.
NOTE: You do not have to make configuration changes to the Edge UI to enable single sign-out.
You configure the IDP to sign out the user when they log out of any service. Therefore, the steps
to enable single sign-out are specific to your IDP.
To configure the IDP, you need the following information about the Edge UI:
● The single logout URL for the Edge UI: This URL is in the following form:
http://apigee_sso_IP_DNS:9099/saml/SingleLogout/alias/apigee-saml-login-opdk
Or, if you enabled TLS on apigee-sso:
https://apigee_sso_IP_DNS:9099/saml/SingleLogout/alias/apigee-saml-login-opdk
● The service provider issuer: The value for the Edge UI is apigee-saml-login-opdk.
● The IDP certificate: In Install and configure Apigee SSO, you created an IDP
certificate named selfsigned.crt and saved it in
/opt/apigee/customer/application/apigee-sso/idp/. Depending
on your IDP, you must use that same certificate to configure single sign-out.
For example, if you are using Okta as your identity provider, in the SAML settings for
your application:
When a user next logs out of the Edge UI, the user is logged out of all services.
Using Edge admin utilities and APIs
after enabling SAML
This section describes how to run Edge system admin tools and commands after
enabling SAML. Many tasks on Edge require system administration credentials, such
as:
However, after you enable SAML on Edge you typically disable Basic Auth so that the
only way to authenticate is through the SAML IDP. Therefore, you must make sure
that you have added the system admin account to your SAML IDP.
Warning: Ensure that you have added the Edge system admin account to the SAML IDP before
you disable Basic Auth. Otherwise, you will not be able to authenticate as the system admin.
After you enable SAML authentication, you have several ways to pass the system
admin credentials to the apigee-adminapi.sh utility.
Note: The credentials must be for a sys admin user, not an organization administrator or other
type of user.
You can see all of the options for any apigee-adminapi.sh command, including
the options for specifying SAML credentials, by using the "-h" option to the
command. For example:
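The example commands are missing from this extract. A sketch of both forms, using the orgs list subcommand purely for illustration and spelling the flags as described below (treat the exact syntax as an assumption):

```shell
# Display all options, including the options for specifying SAML credentials:
apigee-adminapi.sh orgs list -h

# Pass sys admin credentials using the password_grant flow:
apigee-adminapi.sh orgs list --admin adminEmail --oauth-password adminPWord \
  --sso-url http://edge_sso_IP_DNS:9099 --oauth-flow password_grant
```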
Where:
● The sso-url option specifies the URL of the Apigee SSO module. Modify the
port or protocol if you have changed them from 9099 and HTTP.
● oauth-flow specifies either passcode or password_grant. In this
example, you specify password_grant.
● adminEmail is the email address of the sys admin.
● oauth-password specifies the sys admin's password.
These utilities take as input a configuration file that specifies the system admin's
credentials by using the properties:
ADMIN_EMAIL="adminEmail"
APIGEE_ADMINPW=adminPWord
If you omit the password, then you are prompted for it.
After you enable SAML you use different properties to specify the sys admin's
credentials. For example, you can pass the system admin credentials:
ADMIN_EMAIL="adminEmail"
SSO_LOGIN_URL=http://edge_sso_IP_DNS:9099
OAUTH_FLOW=password_grant
OAUTH_ADMIN_PASSWORD=adminPWord
Where:
● SSO_LOGIN_URL specifies the URL of the Apigee SSO module. Modify the
port or protocol if you have changed them from 9099 and HTTP.
● OAUTH_FLOW specifies either passcode or password_grant. In this
example, you specify password_grant because you are passing the sys
admin's password.
● OAUTH_ADMIN_PASSWORD specifies the sys admin's password.
Alternatively, you can use the following properties to specify the credentials as part
of a passcode flow:
ADMIN_EMAIL="adminEmail"
SSO_LOGIN_URL=http://edge_sso_IP_DNS:9099
OAUTH_FLOW=passcode
OAUTH_ADMIN_PASSCODE=passcode
Where:
ADMIN_EMAIL="adminEmail"
SSO_LOGIN_URL=http://edge_sso_IP_DNS:9099
OAUTH_FLOW=passcode
OAUTH_BEARER_TOKEN=token
Where:
● The apigee-sso node must be able to access the Postgres node on port
5432.
● Port 9099 on apigee-sso node must be open for external HTTP access by
the Edge UI and the SAML IDP. If you configure TLS on apigee-sso, the port
number might be different.
● The apigee-sso node must be able to access the SAML IDP at the URL
specified by the SSO_SAML_IDP_METADATA_URL property.
● The apigee-sso node must be able to access port 8080 on the Management
Server node.
If all necessary ports are open and accessible, you can rerun the configuration steps:
● For apigee-sso:
● /opt/apigee/apigee-service/bin/apigee-service apigee-sso setup -f configFile
● For the Edge UI:
● /opt/apigee/apigee-service/bin/apigee-service edge-ui configure-sso -f configFile
If reconfiguration does not work, then you can delete the Postgres database used by
apigee-sso, and then reconfigure apigee-sso and the Edge UI:
curl -u USER_NAME:PASSWORD https://MS_IP_DNS:8080/v1/organizations/ORG_NAME
In this example, you use the curl -u option to pass Basic authentication
credentials. Alternatively, you can pass an OAuth2 token in the Bearer header to
make Edge management API calls, as the following example shows:
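For example (the token value is a placeholder):

```shell
# Call the management API with an OAuth2 access token instead of
# Basic authentication credentials.
TOKEN=your_access_token
curl -H "Authorization: Bearer $TOKEN" \
  https://MS_IP_DNS:8080/v1/organizations/ORG_NAME
```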
After you enable an external IDP for authentication, you can optionally disable Basic
authentication. If you disable Basic authentication, all scripts (such as Maven, shell,
and apigeetool) that rely on Edge management API calls supporting Basic
authentication no longer work. You must update any API calls and scripts that use
Basic authentication to pass OAuth2 access tokens in the Bearer header.
The get_token utility stores the tokens on disk, ready for use when required. It also
prints a valid access token to stdout. From there, you can use a browser extension
like Postman or embed it in an environment variable for use in curl.
https://MS_IP:8080/v1/organizations/ORG_NAME
22. After you get a new access token for the first time, you can get the access
token and pass it to an API call in a single command, as the following
example shows:
header=`get_token` && curl -H "Authorization: Bearer $header" https://MS_IP:8080/v1/o/ORG_NAME
24. With this form of the command, if the access token has expired, it is
automatically refreshed until the refresh token expires.
(SAML only) After the refresh token expires, get_token prompts you for a new
passcode. You must navigate to the URL shown above in step 3 and generate a new
passcode before you can generate a new OAuth access token.
The only difference between the API calls documented in Using OAuth2 security with
the Apigee Edge management API is that the URL of the call must reference your
zone name. In addition, for generating the initial access token with a SAML IDP, you
must include the passcode, as shown in step 3 of the procedure above.
(LDAP) Use the following API call to generate the initial access and refresh tokens:
(SAML) Use the following API call to generate the initial access and refresh tokens:
Note that authentication with a SAML IDP requires a temporary passcode, whereas
an LDAP IDP does not.
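The two calls themselves are missing from this extract. Sketches modeled on the refresh call below; the grant and passcode parameter names are assumptions, and the hostname is a placeholder:

```shell
# (LDAP) Password grant; no passcode is required:
curl -H "Content-Type: application/x-www-form-urlencoded;charset=utf-8" \
  -H "Accept: application/json;charset=utf-8" \
  -H "Authorization: Basic ZWRnZWNsaTplZGdlY2xpc2VjcmV0" -X POST \
  https://EDGE_SSO_IP_DNS:9099/oauth/token \
  -d 'grant_type=password&username=USER_EMAIL&password=USER_PASSWORD'

# (SAML) Requires a temporary passcode obtained from the SSO UI:
curl -H "Content-Type: application/x-www-form-urlencoded;charset=utf-8" \
  -H "Accept: application/json;charset=utf-8" \
  -H "Authorization: Basic ZWRnZWNsaTplZGdlY2xpc2VjcmV0" -X POST \
  https://EDGE_SSO_IP_DNS:9099/oauth/token \
  -d 'grant_type=password&response_type=token&passcode=PASSCODE'
```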
To later refresh the access token, use the following call that includes the refresh
token:
curl -H "Content-Type:application/x-www-form-urlencoded;charset=utf-8" \
-H "Accept: application/json;charset=utf-8" \
-H "Authorization: Basic ZWRnZWNsaTplZGdlY2xpc2VjcmV0" -X POST \
https://EDGE_SSO_IP_DNS:9099/oauth/token \
-d 'grant_type=refresh_token&refresh_token=REFRESH_TOKEN'
Use apigee-ssoadminapi.sh
The Apigee SSO module supports two types of accounts:
● Administrators
● Machine users (for external IDP-enabled organizations only)
NOTE: Machine users can be created only for organizations that use external IDPs. Machine
users cannot be created for organizations that do not use external IDPs.
● Configuring the Apigee Developer Services portal (or simply, the portal) to
communicate with Edge
● When your development environment supports automation for common
development tasks, such as test automation or CI/CD.
Installing apigee-ssoadminapi.sh
Install the apigee-ssoadminapi.sh utility on the Edge Management Server node
where you installed the Apigee SSO module. Typically you install the
apigee-ssoadminapi.sh utility when you install the Apigee SSO module.
1. Log in to the Management Server node. That node should already have
apigee-service installed as described at Install the Edge apigee-setup
utility.
2. Install the apigee-ssoadminapi.sh utility used to manage admin and
machine users for the Apigee SSO module by executing the following
command:
/opt/apigee/apigee-service/bin/apigee-service apigee-ssoadminapi install
4. Log out of the shell, and then log back in again to add the
apigee-ssoadminapi.sh utility to your path.
Returns:
admin list
--admin SSOADMIN_CLIENT_NAME     Name of the client having administrative privilege on sso
--secret SSOADMIN_CLIENT_SECRET  Secret/Password for the client
--host SSO_HOST                  Hostname of SSO server to connect
--port SSO_PORT                  Port to use during request
--ssl SSO_URI_SCHEME             Set to https, defaults to http
--debug                          Set in debug mode, turns on verbose in curl
-h                               Displays Help
For example, to specify all required information on the command line to see the list
of admin users:
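Such a command likely looks like the following, based on the options listed above (the hostname and secret are placeholders):

```shell
# List admin users, supplying all required options on the command line.
apigee-ssoadminapi.sh admin list --admin ssoadmin --secret Secret123 \
  --host edge_sso_IP_DNS
```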
Returns:
[
{
"client_id": "ssoadmin",
"access_token_validity": 300
}
]
If you omit any required information, such as the admin password, you are prompted.
In this example, you omit the values for --port and --ssl because the Apigee SSO
module uses the default values of 9099 for --port and http for --ssl. If your
installation does not use these defaults, specify them:
Alternatively, use the interactive form where you are prompted for all information:
This section shows you how to disable external IDP authentication for the Edge UI
and the management API.
1. Open the configuration file that you used to configure the external IDP for editing.
2. Set the EDGEUI_SSO_ENABLED property to n, as the following example
shows:
EDGEUI_SSO_ENABLED=n
4. Configure the Edge UI by executing the following command:
/opt/apigee/apigee-service/bin/apigee-service edge-ui configure-sso -f configFile
NOTE: Users who never set an Edge password must select the Reset password link on
the Edge UI login page to set a new password.
To disable external IDP authentication on the Edge management API:
Also note, however, that this release of Apigee Edge for Private Cloud (4.51.00) is the last release
in which Classic UI will be supported. In all future releases, only the Edge UI will be supported.
This section provides an overview of how external directory services integrate with
an existing Apigee Edge for Private Cloud installation. This feature is designed to
work with any directory service that supports LDAP, such as Active Directory,
OpenLDAP, and others.
Audience
This document assumes that you are an Apigee Edge for Private Cloud global
system administrator and that you have an account on the external directory service.
Overview
By default, Apigee Edge uses an internal OpenLDAP instance to store credentials
that are used for user authentication. However, you can configure Edge to use an
external authentication LDAP service instead of the internal one. The procedure for
this external configuration is explained in this document.
About authentication
Users who access Apigee Edge either through the UI or APIs must be authenticated.
By default, Edge user credentials for authentication are stored in an internal
OpenLDAP instance. Typically, users must register or be asked to register for an
Apigee account, and at that time they supply their username, email address,
password credentials, and other metadata. This information is stored in and
managed by the authentication LDAP.
However, if you wish to use an external LDAP to manage user credentials on behalf
of Edge, you can do so by configuring Edge to use the external LDAP system instead
of the internal one. When an external LDAP is configured, user credentials are
validated against that external store, as explained in this document.
About authorization
The key credential used by the Edge authorization system is the user's email
address. This credential (along with some other metadata) is always stored in Edge's
internal authorization LDAP. This LDAP is entirely separate from the authentication
LDAP (whether internal or external).
Users who are authenticated through an external LDAP must also be manually
provisioned into the authorization LDAP system. Details are explained in this
document.
NOTE: User passwords from the external LDAP system are never stored/cached in the internal
authorization system.
For more background on authorization and RBAC, see Managing organization users
and Assigning roles.
For a deeper view, see also Understanding Edge authentication and authorization
flows.
Summary: Indirect binding authentication requires a search on the external LDAP for
credentials that match the email address, username, or other ID supplied by the user
at login. With direct binding authentication, no search is performed; credentials are
sent to and validated by the LDAP service directly. Direct binding authentication is
considered to be more efficient because there is no searching involved.
With indirect binding authentication, the user enters a credential, such as an email
address, username, or some other attribute, and Edge searches the authentication
system for this credential/value. If the search result is successful, the system
extracts the LDAP DN from the search results and uses it with a provided password
to authenticate the user.
The key point to know is that indirect binding authentication requires the caller (e.g.,
Apigee Edge) to provide external LDAP admin credentials so that Edge can "log in" to
the external LDAP and perform the search. You must provide these credentials in an
Edge configuration file, which is described later in this document. Steps are also
described for encrypting the password credential.
With direct binding authentication, Edge sends credentials entered by a user directly
to the external authentication system. In this case, no search is performed on the
external system. Either the provided credentials succeed or they fail (e.g., if the user
is not present in the external LDAP or if the password is incorrect, the login will fail).
Direct binding authentication does not require you to configure admin credentials for
the external auth system in Apigee Edge (as with indirect binding authentication);
however, there is a simple configuration step that you must perform, which is
described in Configuring external authentication.
NOTE: These instructions apply to Microsoft Active Directory. For other LDAP services, please
consult their documentation.
Prerequisites
● You must have an Apigee Edge for Private Cloud 4.18.05 installation.
● You must have global system administrator credentials on Apigee Edge for
Private Cloud to perform this installation.
● You need to know the root directory of your Apigee Edge for Private Cloud
installation. The default root directory is /opt.
● You must add your Edge global system administrator credentials to the
external LDAP. Remember that by default, the sysadmin credentials are stored
in the Edge internal LDAP. Once you switch to the external LDAP, your
sysadmin credentials will be authenticated there instead. Therefore, you must
provision the credentials to the external system before enabling external
authentication in Edge.
For example, if you have configured and installed Apigee Edge for Private
Cloud with global system administrator credentials as:
username: edgeuser@mydomain.com
password: Secret123
Then the user edgeuser@mydomain.com with password Secret123 must
also be present in the external LDAP.
● If you are running a Management Server cluster, note that you must perform
all of the steps in this document for each Management Server.
1. Important: Decide now whether you intend to use the indirect or direct binding
authentication method. This decision will affect some aspects of the
configuration. See External Authentication.
2. Important: You must do these config steps on each Apigee Edge
Management Server (if you are running more than one).
3. Open /opt/apigee/customer/application/management-
server.properties in a text editor. If the file does not exist, create it.
4. Add the following line:
conf_security_authentication.user.store=externalized.authentication
Note: Be sure that there are no trailing spaces at the end of the line.
This line adds the external authentication feature to your Edge for Private
Cloud installation.
7. To make this step easy, we have created two well-commented sample
configurations, one for direct and one for indirect binding authentication.
See the samples below for the binding you wish to use, and complete the
configuration:
a. DIRECT BINDING configuration sample
b. INDIRECT BINDING configuration sample
8. Restart the Management Server:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
10. Verify that the server is running:
/opt/apigee/apigee-service/bin/apigee-all status
12. Important: You must do an additional configuration under either (or both) of
the following circumstances:
a. If you intend to have users log in using usernames that are not email
addresses. In this case, your sysadmin user must also authenticate
with a username.
AND/OR
b. If the password for your sysadmin user account in your external LDAP
is different from the password you configured when you first installed
Apigee Edge for Private Cloud. See Configuration required for different
sysadmin credentials.
## The next seven properties are needed regardless of direct or indirect binding. You need to
## configure these per your external authentication installation.

## The IP or domain for your external LDAP instance.
conf_security_externalized.authentication.server.url=ldap://localhost:389

## Change these baseDN values to match your external LDAP service. This attribute value will be
## provided by your external LDAP administrator, and may have more or fewer dc elements
## depending on your setup.
conf_security_externalized.authentication.user.store.baseDN=dc=apigee,dc=com

## Identifies the external LDAP property you want to bind against for Authentication. For
## example, if you are binding against an email address in Microsoft Active Directory, this
## would be the userPrincipalName property in your external LDAP instance. Alternatively, if
## you are binding against the user's ID, this would typically be in the sAMAccountName property:
conf_security_externalized.authentication.user.store.user.attribute=userPrincipalName

## The LDAP attribute where the user email value is stored. For direct binding with AD, set it
## to userPrincipalName.
conf_security_externalized.authentication.user.store.user.email.attribute=userPrincipalName
## The next seven properties are needed regardless of direct or indirect binding. You need to
## configure these per your external LDAP installation.

## The IP or domain for your external LDAP instance.
conf_security_externalized.authentication.server.url=ldap://localhost:389

## Change these baseDN values to match your external LDAP service. This attribute value will be
## provided by your external LDAP administrator, and may have more or fewer dc elements
## depending on your setup.
conf_security_externalized.authentication.user.store.baseDN=dc=apigee,dc=com

## Identifies the external LDAP property you want to bind against for authentication. For example,
## if you are binding against an email address, this would typically be the
## userPrincipalName property in your external LDAP instance. Alternatively, if you are binding
## against the user's ID, this would typically be the sAMAccountName property.
## See also "Configuration required for different sysadmin credentials".
conf_security_externalized.authentication.user.store.user.attribute=userPrincipalName

## Used by Apigee to perform the authorization step; currently, Apigee only supports email
## addresses for authorization. Make sure to set it to the attribute in your external LDAP that
## stores the user's email address. Typically this will be the userPrincipalName property.
conf_security_externalized.authentication.user.store.user.email.attribute=userPrincipalName

## The external LDAP username (for a user with search privileges on the external LDAP) and
## password, and whether the password is encrypted. You must also set the attribute
## externalized.authentication.bind.direct.type to false.
## The password attribute can be encrypted or in plain text. See
## "Indirect binding only: Encrypting the external LDAP user's password"
## for encryption instructions. Set the password.encrypted attribute to "true" if the password is
## encrypted. Set it to "false" if the password is in plain text.
conf_security_externalized.authentication.indirect.bind.server.admin.dn=myExtLdapUsername
conf_security_externalized.authentication.indirect.bind.server.admin.password=myExtLdapPassword
conf_security_externalized.authentication.indirect.bind.server.admin.password.encrypted=true
However, this link is not integrated with an external authentication server, so you can hide it by using the following procedure:
1. Open the ui.properties file in an editor. If the file does not exist, create it:
vi /inst_root/apigee/customer/application/ui.properties
2. Set the conf_apigee_apigee.feature.disablepasswordreset token to "true" in ui.properties:
conf_apigee_apigee.feature.disablepasswordreset="true"
3. Save your changes.
4. Restart the Edge UI:
/inst_root/apigee/apigee-service/bin/apigee-service edge-ui restart
Note: Using plain text passwords in config files may be adequate for testing purposes; however,
for production environments, encryption is highly recommended.
The following steps explain how to encrypt your password:
1. Run the encryption command, where /opt/apigee/edge-management-server/conf/ is the path to the edge-management-server's credential.properties file.
2. In the output of the command, you will see a newline followed by what looks like a random character string. Copy that string.
3. Edit /opt/apigee/customer/application/management-server.properties.
4. Update the following property, replacing myAdPassword with the string you copied in step 2:
conf_security_externalized.authentication.indirect.bind.server.admin.password=myAdPassword
5. Be sure the following property is set to true:
conf_security_externalized.authentication.indirect.bind.server.admin.password.encrypted=true
6. Save the file.
7. Restart the Management Server:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
8. Verify that the server is running:
/opt/apigee/apigee-service/bin/apigee-all status
See the testing section at the end of Enabling external authentication, and perform
the same test described there.
● If usernames are email addresses, use the setup.sh utility to update the Edge UI
● If usernames are IDs instead of email addresses, use API calls and property files to update the Edge UI
2. Set the following property in your config file:
IS_EXTERNAL_AUTH="true"
The IS_EXTERNAL_AUTH property configures Edge to support an account name, rather than an email address, as the username.
3. Use the apigee-setup utility to reset the username on all Edge components from the config file:
/opt/apigee/apigee-setup/bin/setup.sh -p edge -f configFile
You must run this command on all Edge components on all Edge nodes, including: Management Server, Router, Message Processor, Qpid, and Postgres.
Verify that you can access the central pod. On the Management Server, run the following curl command:
[ {
  "internalIP" : "192.168.1.11",
  "isUp" : true,
  "pod" : "central",
  "reachable" : true,
  "region" : "dc-1",
  "tags" : {
    "property" : [ ]
  },
  "type" : [
    "application-datastore",
    "scheduler-datastore",
    "management-server",
    "auth-datastore",
    "apimodel-datastore",
    "user-settings-datastore",
    "audit-datastore"
  ],
  "uUID" : "d4bc87c6-2baf-4575-98aa-88c37b260469"
}, {
  "externalHostName" : "localhost",
  "externalIP" : "192.168.1.11",
  "internalHostName" : "localhost",
  "internalIP" : "192.168.1.11",
  "isUp" : true,
  "pod" : "central",
  "reachable" : true,
  "region" : "dc-1",
  "tags" : {
    "property" : [ {
      "name" : "started.at",
      "value" : "1454691312854"
    }, ... ]
  },
  "type" : [ "qpid-server" ],
  "uUID" : "9681202c-8c6e-4da1-b59b-23e3ef092f34"
} ]
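Since the command that produced this output is not shown in this excerpt, here is a hedged sketch of checking such a response once it has been saved to a file: every server entry should report "isUp" : true and "reachable" : true.

```shell
set -e
cd "$(mktemp -d)"
# Abbreviated copy of the registration response shown above.
cat > pod.json <<'EOF'
[ { "internalIP" : "192.168.1.11", "isUp" : true, "pod" : "central",
    "reachable" : true, "region" : "dc-1", "type" : [ "management-server" ] },
  { "internalIP" : "192.168.1.11", "isUp" : true, "pod" : "central",
    "reachable" : true, "region" : "dc-1", "type" : [ "qpid-server" ] } ]
EOF
# Fail loudly if any server reports itself down or unreachable.
if grep -Eq '"(isUp|reachable)" : false' pod.json; then
    echo "WARNING: some servers in the central pod are down"
else
    echo "central pod looks healthy"
fi
```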
Important: You must perform the following steps on each Apigee Edge Management Server.
1. Open /opt/apigee/customer/application/management-server.properties in a text editor.
2. Set the conf_security_authentication.user.store property to ldap:
conf_security_authentication.user.store=ldap
Note: Be sure that there are no trailing spaces at the end of the line.
3. Optionally (only applicable if you were using a non-email-address username or a different password in your external LDAP for your sysadmin user): follow the steps you previously followed in Configuration required for different sysadmin credentials, but substitute the external LDAP username with your Apigee Edge sysadmin user's email address.
4. Restart the Management Server:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
5. Verify that the server is running:
/opt/apigee/apigee-service/bin/apigee-all status
6. Important: An Edge organization administrator must take the following actions after external authentication is turned off:
● Make sure there are no users in Apigee Edge that should not be there; you need to manually remove those users.
● Communicate to users that, because external authentication has been turned off, they need to either start using their original password (if they remember it) or complete a "forgot password" process in order to log in.
Prerequisites
● You must be an Apigee Private Cloud system administrator with global
system admin credentials to perform this configuration.
● You need to know the root directory of your Apigee Edge Private Cloud
installation. The default root directory is /opt.
1. Before you can complete the following configuration, you must create a Java class that implements the ExternalRoleMapperServiceV2 interface and include your implementation in the Management Server classpath:
/opt/apigee/edge-management-server/lib/thirdparty/
For details about this implementation, see the section About the ExternalRoleMapperImpl sample implementation later in this document.
2. Log in to your Apigee Edge Management Server and then stop the management server process:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server stop
3. Open /opt/apigee/customer/application/management-server.properties in a text editor. If this file does not exist, create it.
4. Edit the properties file to make the following settings:
# The user store to be used for authentication.
# Use "externalized.authentication" for LDAP user store.
# Note that for authorization, we continue to use LDAP.
# See Enabling external authentication for more on enabling external auth.
conf_security_authentication.user.store=externalized.authentication
externalized.authentication.role.mapper.implementation.class=com.customer.authorization.impl.ExternalRoleMapperImpl
5. Copy your implementation class to the Management Server classpath:
/opt/apigee/edge-management-server/lib/thirdparty/
Warning: To compile this class, you must reference the following JAR file included with Edge:
/opt/apigee/edge-management-server/lib/infra/libraries/authentication-1.0.0.jar
You can name the class and package whatever you wish as long as it implements
ExternalRoleMapperServiceV2, is accessible in your classpath, and is referenced
correctly in the management-server.properties config file.
Note: For better readability, we recommend that you copy the code into a text editor or IDE with
wider margins.
package com.customer.authorization.impl;
import com.apigee.authentication.*;
import com.apigee.authorization.namespace.OrganizationNamespace;
import com.apigee.authorization.namespace.SystemNamespace;
import java.util.Collection;
import java.util.HashSet;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
/**
 * Sample implementation constructed with dummy roles with expected namespaces.
 */
@Override
public void start(ConfigBean arg0) throws ConnectionException {
try {
// Customer Specific Implementation will override the
// ImplementDirContextCreationLogicForSysAdmin method implementation.
// Create InitialDirContext based on the system admin user credentials.
dirContext = ImplementDirContextCreationLogicForSysAdmin();
} catch (NamingException e) {
// TODO Auto-generated catch block
throw new ConnectionException(e);
}
}
@Override
public void stop() throws Exception {
}
/**
 * This method should be replaced with the customer's implementation.
 * For a given roleName under expectedNamespace, return all users that belong to this role.
 * @param roleName
 * @param expectedNamespace
 * @return All users that belong to this role. For each user, return the
 *         username/email that is stored in Apigee LDAP.
 * @throws ExternalRoleMappingException
 */
@Override
public Collection<String> getUsersForRole(String roleName, NameSpace expectedNamespace) throws ExternalRoleMappingException {
  Collection<String> users = new HashSet<>();
  if (expectedNamespace instanceof SystemNamespace) {
    // If requesting all users with the sysadmin role
    if (roleName.equalsIgnoreCase("sysadmin")) {
      // Add sysadmin's email to the results
      users.add("sysadmin@wacapps.net");
    }
  } else {
    String orgName = ((OrganizationNamespace) expectedNamespace).getOrganization();
    // If requesting all users of engRole in Apigee LDAP
    if (roleName.equalsIgnoreCase("engRole")) {
      // Get all users in the corresponding group in the customer's LDAP;
      // in this case, looking for 'engGroup'.
      SearchControls controls = new SearchControls();
      controls.setSearchScope(SearchControls.ONELEVEL_SCOPE);
      try {
        NamingEnumeration<SearchResult> res = dirContext.search(
            "ou=groups,dc=corp,dc=wacapps,dc=net",
            "cn=engGroup", new Object[]{"", ""}, controls);
        while (res.hasMoreElements()) {
          SearchResult sr = res.nextElement();
          // Add each member of the group to the results
          NamingEnumeration<?> members = sr.getAttributes().get("users").getAll();
          while (members.hasMore()) {
            users.add(members.next().toString());
          }
        }
      } catch (NamingException e) {
        // Customer needs to handle the exception here
      }
    }
  }
  return users;
}
/**
 * This method would be implemented by the customer and would be invoked
 * when the X-Apigee-Current-User header is used in a request.
 *
 * X-Apigee-Current-User allows the customer to log in as another user.
 *
 * Below is a basic example.
 *
 * If the user has the sysadmin role, it is expected to set SystemNameSpace
 * along with the expected NameSpace. Otherwise, the role's expectedNameSpace
 * is to be set for the NameSpacedRole.
 *
 * Collection<NameSpacedRole> results = new HashSet<NameSpacedRole>();
 *
 * NameSpacedRole sysNameSpace = new NameSpacedRole("sysadmin",
 *     SystemNamespace.get());
 *
 * String orgName =
 *     ((OrganizationNamespace) expectedNameSpace).getOrganization();
 *
 * NameSpacedRole orgNameSpace = new NameSpacedRole("orgadmin",
 *     expectedNameSpace);
 *
 * results.add(sysNameSpace);
 * results.add(orgNameSpace);
 *
 * @param username UserA's username
 * @param password UserA's password
 * @param requestedUsername UserB's username. Allows UserA to request UserB's
 *        user roles with UserA's credentials when requesting UserB as
 *        X-Apigee-Current-User.
 * @param expectedNamespace
 * @return
 * @throws ExternalRoleMappingException
 */
@Override
public Collection<NameSpacedRole> getUserRoles(String username, String password, String requestedUsername, NameSpace expectedNamespace) throws ExternalRoleMappingException {
  // Authenticate UserA (customer-specific implementation; elided in this sample).
  // Resolve the DN of the requested user via the customer's LDAP.
  String requestedDnName = ImplementDnNameLookupLogic(requestedUsername);
  try {
    // Map internal groups to apigee-edge roles.
    return apigeeEdgeRoleMapper(dirContext, requestedDnName, expectedNamespace);
  } finally {
    // Customer implementation to close
    // ActiveDirectory/LDAP context.
  }
}
/**
 * This method would be implemented by the customer and would be invoked
 * while using username and password for authentication, without the
 * X-Apigee-Current-User header.
 *
 * The customer can reuse the implementation in
 * getUserRoles(String username, String password, String requestedUsername, NameSpace expectedNamespace)
 * by returning
 * getUserRoles(username, password, username, expectedNamespace)
 * in the implementation, or can provide a new implementation as shown below.
 */
@Override
public Collection<NameSpacedRole> getUserRoles(String username, String password, NameSpace expectedNamespace) throws ExternalRoleMappingException {
  // Authenticate the given user (customer-specific implementation; elided in this sample).
  String dnName = ImplementDnNameLookupLogic(username);
  if (dnName == null) {
    System.out.println("Error ");
  }
  try {
    // Map internal groups to apigee-edge roles.
    return apigeeEdgeRoleMapper(dirContext, dnName, expectedNamespace);
  } finally {
    // Customer implementation to close
    // ActiveDirectory/LDAP context.
  }
}
/**
*
* This method would be implemented by the customer and would be invoked
* while using security token or access token as authentication credentials.
*
*/
@Override
public Collection<NameSpacedRole> getUserRoles(String username, NameSpace expectedNamespace) throws ExternalRoleMappingException {
  // Authenticate the given user (customer-specific implementation; elided in this sample).
  String dnName = ImplementDnNameLookupLogic(username);
  if (dnName == null) {
    System.out.println("Error ");
  }
  try {
    // Map internal groups to apigee-edge roles.
    return apigeeEdgeRoleMapper(dirContext, dnName, expectedNamespace);
  } finally {
    // Customer implementation to close
    // ActiveDirectory/LDAP context.
  }
}
/**
 * This method should be replaced with customer-specific implementations.
 *
 * Provided as a sample implementation of mapping user groups to apigee-edge roles.
 */
private Collection<NameSpacedRole> apigeeEdgeRoleMapper(DirContext dirContext, String dnName, NameSpace expectedNamespace) throws Exception {
  Collection<NameSpacedRole> results = new HashSet<>();
  // Fetch internal groups for dnName. The search below is illustrative only;
  // the real query is customer-specific.
  NamingEnumeration<SearchResult> groups =
      dirContext.search(dnName, "(objectClass=group)", new SearchControls());
  if (groups.hasMoreElements()) {
    while (groups.hasMoreElements()) {
      SearchResult searchResult = groups.nextElement();
      Attributes attributes = searchResult.getAttributes();
      String groupName = attributes.get("name").get().toString();
      // Map internal groups to apigee-edge roles.
      if (groupName.equals("BusDev")) {
        results.add(new NameSpacedRole("businessAdmin", SystemNamespace.get()));
      } else if (groupName.equals("Engineering")) {
        if (expectedNamespace instanceof OrganizationNamespace) {
          String orgName = ((OrganizationNamespace) expectedNamespace).getOrganization();
          results.add(new NameSpacedRole("orgadmin", new OrganizationNamespace(orgName)));
          // In OPDK 4.50 and later, do
          // results.add(new NameSpacedRole("orgadmin", OrganizationNamespace.of(orgName)));
        }
      } else if (groupName.equals("Marketing")) {
        results.add(new NameSpacedRole("marketAdmin", SystemNamespace.get()));
      } else {
        results.add(new NameSpacedRole("readOnly", SystemNamespace.get()));
      }
    }
  } else {
    // If no group is found (or an exception occurs), return empty roles.
    System.out.println(" !!!!! NO GROUPS FOUND !!!!!");
  }
  return results;
}
/**
 * The customer needs to replace this with their own implementation for
 * getting the dnName for a given user.
 */
private String ImplementDnNameLookupLogic(String username) {
  // Connect to the customer's own LDAP to fetch the user's dnName
  return customerLDAP.getDnName(username);
}

/**
 * The customer needs to replace this with their own implementation for
 * creating a DirContext.
 */
private DirContext ImplementDirectoryContextCreationLogic() {
  // Connect to the customer's own LDAP to create a DirContext for the given user
  return customerLDAP.createLdapContextUsingCredentials();
}
The following image shows authorization and authentication through the Edge UI:
The following image shows authorization and authentication through the Edge APIs:
Figure 2: Authorization and authentication through the Edge APIs
In the following table, values are shown between quotation marks (" "). When editing the management-server.properties file, include the value between the quotes but do not include the quotes themselves.
Property / DIRECT bind / INDIRECT bind

conf_security_externalized.authentication.implementation.class=com.apigee.rbac.impl.LdapAuthenticatorImpl
This property is always required to enable the external authorization feature. Do not change it.

conf_security_externalized.authentication.bind.direct.type=

conf_security_externalized.authentication.direct.bind.user.directDN=
DIRECT bind: if the username is an email address, set to "${userDN}". INDIRECT bind: not required; comment out.

conf_security_externalized.authentication.indirect.bind.server.admin.dn=
conf_security_externalized.authentication.indirect.bind.server.admin.password=
conf_security_externalized.authentication.indirect.bind.server.admin.password.encrypted=

conf_security_externalized.authentication.server.url=
conf_security_externalized.authentication.server.version=
conf_security_externalized.authentication.server.conn.timeout=

conf_security_externalized.authentication.user.store.baseDN=
Set to the baseDN value matching your external LDAP service. This value will be provided by your external LDAP administrator. For example, in Apigee we might use "DC=apigee,DC=com".

conf_security_externalized.authentication.user.store.search.query=(&(${userAttribute}=${userId}))

conf_security_externalized.authentication.user.store.user.attribute=
This identifies the external LDAP property you want to bind against. Set it to whichever property contains the username in the format that your users use to log in to Apigee Edge. For example:
If users will log in with an email address and that credential is stored in userPrincipalName, set this to "userPrincipalName".
If users will log in with an ID and that is stored in sAMAccountName, set this to "sAMAccountName".

conf_security_externalized.authentication.user.store.user.email.attribute=
This is the LDAP attribute where the user email value is stored. This is typically "userPrincipalName", but set this to whichever property in your external LDAP contains the user's email address that is provisioned into Apigee's internal authorization LDAP.
● LDAP configuration
Install the Edge UI using these instructions. You can configure either direct or
indirect binding in your silent configuration file.
● Management Server configuration
After you enable Apigee SSO, you should remove all external LDAP properties that are defined in the /opt/apigee/customer/application/management-server.properties file and restart the Management Server.
● Basic authentication for the management API
Basic authentication works for machine users but not for LDAP users. Machine users will be critical if your CI/CD process still uses Basic authentication to access the system.
● OAuth2 authentication for the management API
LDAP users can access the management API with tokens only.
Configuring TLS/SSL
TLS (Transport Layer Security, whose predecessor is SSL) is the standard security technology for ensuring secure, encrypted messaging across your API environment, from apps to Apigee Edge to your back-end services.
NOTE: Because Edge originally supported SSL, you will see some instances in the Edge UI, Edge
XML, and Edge properties that use the term "SSL". For example, the menu entry in the Edge UI
that you use to view certs is called SSL Certificates, the XML tag that you use to configure a
virtual host to use TLS is named <SSLInfo>, and the property to set the SSL port for the
management API is conf_webserver_ssl.port.
For an on-premises installation of Edge Private Cloud, there are several places where
you can configure TLS:
Note: If you have a certificate chain, all certs in the chain must be appended in order into a single
PEM file, where the last certificate is signed by a CA.
For example, you have a PEM file named server.pem containing your TLS
certificate and a PEM file named private_key.pem containing your private key. Use
the following commands to create the PKCS12 file:
You have to enter the passphrase for the key, if it has one, and an export password.
This command creates a PKCS12 file named keystore.pkcs12.
You are prompted to enter the new password for the JKS file, and the existing
password for the PKCS12 file. Make sure you use the same password for the JKS
file as you used for the PKCS12 file.
If you have to specify a key alias, such as when configuring TLS between a Router
and Message Processor, include the -name option to the openssl command:
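As a concrete sketch of the PKCS12 step (the exact command is not shown in this excerpt), the following uses openssl with a throwaway self-signed certificate standing in for your real server.pem and private_key.pem; the -passout flag is only there to keep the sketch non-interactive:

```shell
set -e
cd "$(mktemp -d)"
# For illustration only: generate a throwaway self-signed certificate and
# key in place of your real server.pem / private_key.pem files.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example" \
    -keyout private_key.pem -out server.pem 2>/dev/null
# Create the PKCS12 file. Normally you are prompted for the key passphrase
# (if any) and an export password. Add "-name myKeyAlias" if you need a
# key alias, e.g. for TLS between the Router and Message Processor.
openssl pkcs12 -export -in server.pem -inkey private_key.pem \
    -out keystore.pkcs12 -passout pass:changeit
ls keystore.pkcs12
```

The resulting keystore.pkcs12 is then typically converted to a JKS keystore with keytool -importkeystore; as the text notes, use the same password for the JKS file as for the PKCS12 file.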
You can generate an obfuscated password by using the following command on the
Edge Management Server:
OBF:58fh40h61svy156789gk1saj
MD5:902fobg9d80e6043b394cb2314e9c6
NOTE: Port 8082 on the Message Processor has to be open for access by the Router to perform
health checks when you configure TLS/SSL between the Router and Message Processor. If you
do not configure TLS/SSL between the Router and Message Processor, the default configuration,
port 8082 still must be open on the Message Processor to manage the component, but the
Router does not require access to it.
1. Ensure that port 8082 on the Message Processor is accessible by the Router.
2. Generate the keystore JKS file containing your TLS certificate and private key. For more, see Configuring TLS/SSL for Edge On Premises.
3. Copy the keystore JKS file to a directory on the Message Processor server,
such as /opt/apigee/customer/application.
4. Change permissions and ownership of the JKS file:
chown apigee:apigee /opt/apigee/customer/application/keystore.jks
9. conf/message-processor-communication.properties+local.http.ssl.keystore.password=OBF:obsPword
Where keystore.jks is your keystore file, and obsPword is your obfuscated keystore and keyalias password.
Note: The value of
conf/message-processor-communication.properties+local.http.ssl.keyalias=apigee-devtest
must be the same as the value provided by the -alias option to the keytool command, described at the end of the section Creating a JKS file. See Configuring TLS/SSL for Edge On Premises for information on generating an obfuscated password.
13. Ensure that the message-processor.properties file is owned by the 'apigee' user:
chown apigee:apigee /opt/apigee/customer/application/message-processor.properties
15. Stop the Message Processors and Routers:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor stop
After TLS is enabled between the Router and Message Processor, the Message
Processor log file contains this INFO message:
This INFO statement confirms that TLS is working between the Router and Message
Processor.
By default, TLS is disabled for the management API and you access the Edge
management API over HTTP by using the IP address of the Management Server
node and port 8080. For example:
http://ms_IP:8080
Alternatively, you can configure TLS access to the management API so that you can
access it in the form:
https://ms_IP:8443
In this example, you configure TLS access to use port 8443. However, that port
number is not required by Edge - you can configure the Management Server to use
other port values. The only requirement is that your firewall allows traffic over the
specified port.
To ensure traffic encryption to and from your management API, configure the settings in the /opt/apigee/customer/application/management-server.properties file.
In addition to TLS configuration, you can also control password validation (password
length and strength) by modifying the management-server.properties file.
NOTE: After modifying these files, restart your management API service.
The reason is that ports below 1024 are typically protected by the operating system and require the process that accesses them to have root access. The Edge Management Server runs as the "apigee" user and therefore typically does not have access to ports below 1024.
One alternative is to use a load balancer with the Edge API and terminate TLS on the load balancer on port 443. You can then use either HTTP or HTTPS between the load balancer and the Edge API.
Another alternative is to use iptables to forward requests to port 443 to port 8443. For
example:
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8443
Configure TLS
Edit the /opt/apigee/customer/application/management-server.properties file to control TLS use on traffic to and from your management API. If this file does not exist, create it.
1. Generate the keystore JKS file containing your TLS certificate and private key. For more, see Configuring TLS/SSL for Edge On Premises.
2. Copy the keystore JKS file to a directory on the Management Server node, such as /opt/apigee/customer/application.
3. Change ownership of the JKS file to the "apigee" user:
chown apigee:apigee keystore.jks
Where keystore.jks is the name of your keystore file.
4. Edit /opt/apigee/customer/application/management-server.properties to set the following properties. If that file does not exist, create it:
conf_webserver_ssl.enabled=true
# Leave conf_webserver_http.turn.off set to false
# because many Edge internal calls use HTTP.
conf_webserver_http.turn.off=false
conf_webserver_ssl.port=8443
conf_webserver_keystore.path=/opt/apigee/customer/application/keystore.jks
# Enter the obfuscated keystore password below.
conf_webserver_keystore.password=OBF:obfuscatedPassword
Where keystore.jks is your keystore file, and obfuscatedPassword is your obfuscated keystore password. See Configuring TLS/SSL for Edge On Premises for information on generating an obfuscated password.
5. Restart the Edge Management Server:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
Note: After enabling TLS, ensure that the value of APIGEE_PORT_HTTP_MS is correct in
$APIGEE_ROOT/customer/defaults.sh. If it isn't correct, modify defaults.sh.
WARNING: Apigee recommends that you disable HTTP access in production environments.
Use the following procedure to configure the Edge UI to make these calls over
HTTPS only:
conf_webserver_ssl.enabled=false
To enable/disable TLS/SSL. With TLS/SSL enabled (true), you must also set the ssl.port and keystore.path properties.

conf_webserver_ssl.port=
Required when TLS/SSL is enabled (conf_webserver_ssl.enabled=true).

conf_webserver_keystore.path=
Required when TLS/SSL is enabled (conf_webserver_ssl.enabled=true).

OBF:xxxxxxxxxx
OBF:xxxxxxxxxx

Note: By default, if no blacklist or whitelist is specified, ciphers matching the following Java regular expressions are excluded:
^.*_(MD5|SHA|SHA1)$
^TLS_RSA_.*$
^SSL_.*$
^.*_NULL_.*$
^.*_anon_.*$
However, if you do specify a blacklist, this filter is overridden, and you must blacklist all ciphers individually. For information on cipher suites and cryptography architecture, see the Java Cryptography Architecture Oracle Providers Documentation for JDK 8.

conf_webserver_ssl.session.cache.size=
conf_webserver_ssl.session.timeout=
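To see how the default exclusion patterns behave, here is a small sketch applying a condensed form of the regular expressions above to two sample cipher-suite names with grep -E (the suite names are illustrative):

```shell
set -e
# Condensed from the default exclusion regexes quoted above.
patterns='_(MD5|SHA|SHA1)$|^TLS_RSA_|^SSL_|_NULL_|_anon_'
# A static-RSA suite: matches ^TLS_RSA_, so it is excluded by default.
echo 'TLS_RSA_WITH_AES_128_CBC_SHA256' | grep -Eq "$patterns" && echo excluded
# An ECDHE suite ending in SHA256 (not SHA/SHA1): matches no pattern, so it stays.
echo 'TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256' | grep -Eq "$patterns" || echo allowed
```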
http://ms_IP:9000
Alternatively, you can configure TLS access to the Edge UI so that you can access it
in the form:
https://ms_IP:9443
In this example, you configure TLS access to use port 9443. However, that port
number is not required by Edge - you can configure the Management Server to use
other port values. The only requirement is that your firewall allows traffic over the
specified port.
Note: Whenever you upgrade an Edge-UI component while installing a patch or major version for
Edge, the TLS settings will be reverted. You will need to reapply the steps described in the
following sections every time you update an Edge-UI component.
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 9443 -j ACCEPT --verbose
NOTE: This example uses port 9443 for the TLS port, and not the more common port 443. The reason is that ports below 1024 are typically protected by the operating system and require the process that accesses them to have root access. The Edge UI runs as the "apigee" user and therefore typically does not have access to ports below 1024.
One alternative is to use a load balancer with the Edge UI and terminate TLS on the load balancer on port 443. You can then use either HTTP or HTTPS between the load balancer and the Edge UI.
Another alternative is to use iptables to forward requests to port 443 to port 9443. For example:
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 9443
Configure TLS
NOTE: If you have multiple UI instances installed on separate nodes, you must configure both to
support TLS. After enabling TLS on the first node, see Supporting multiple Edge UI instances for
information about configuring TLS on the second node.
Use the following procedure to configure TLS access to the management UI:
1. Generate the keystore JKS file containing your TLS certificate and private key and copy it to the Management Server node. For more information, see Configuring TLS/SSL for Edge On Premises.
2. Run the following command to configure TLS:
/opt/apigee/apigee-service/bin/apigee-service edge-ui configure-ssl
3. Enter the HTTPS port number, for example, 9443.
4. Specify if you want to disable HTTP access to the management UI. By default, the management UI is accessible over HTTP on port 9000.
5. Enter the keystore algorithm. The default is JKS.
6. Enter the absolute path to the keystore JKS file.
The script copies the file to the /opt/apigee/customer/conf directory on the Management Server node, and changes the ownership of the file to "apigee".
7. Enter the clear text keystore password.
8. The script then restarts the Edge management UI. After the restart, the management UI supports access over TLS.
You can see these settings in /opt/apigee/etc/edge-ui.d/SSL.sh.
To use a config file, create a new file and add the following properties:
HTTPSPORT=9443
DISABLE_HTTP=y
KEY_ALGO=JKS
KEY_FILE_PATH=/opt/apigee/customer/application/mykeystore.jks
KEY_PASS=clearTextKeystorePWord
Save the file in a local directory with any name you want. Then use the following
command to configure TLS:
/opt/apigee/apigee-service/bin/apigee-service edge-ui configure-ssl -f configFile
Additional configuration is required when the Edge UI sends users emails to set
their password when the user is created or when the user requests to reset a lost
password. This email contains a URL that the user selects to set or reset a
password. By default, if the Edge UI is not configured to use TLS, the URL in the
generated email uses the HTTP protocol, not HTTPS. You must configure the
load balancer and Edge UI to generate an email URL that uses HTTPS.
To configure the load balancer, ensure that it sets the following header on requests
forwarded to the Edge UI:
X-Forwarded-Proto: https
These optional parameters are only available when you set the following
configuration property in the config file, as described in Using a config file to configure
TLS:
TLS_CONFIGURE=y
NOTE: You can only set these optional parameters in a config file passed to the apigee-
service edge-ui configure-ssl command. You cannot set these properties using the
interactive command.
Property Description
TLS_PROTOCOL: Defines the default TLS protocol for the Edge UI. By default, it is
TLSv1.2. Valid values are TLSv1.2, TLSv1.1, TLSv1.
TLS_DISABLED_ALGO: Defines the disabled cipher suites and can also be used to prevent
small key sizes from being used for TLS handshaking. There is no
default value.
The values passed to TLS_DISABLED_ALGO correspond to the
allowed values for jdk.tls.disabledAlgorithms as described
here. However, you must escape space characters when setting
TLS_DISABLED_ALGO:
TLS_DISABLED_ALGO=EC\ keySize\ <\ 160,RSA\ keySize\ <\ 2048
"TLS_DHE_RSA_WITH_AES_256_CBC_SHA",
"TLS_DHE_RSA_WITH_AES_128_CBC_SHA",
"TLS_DHE_DSS_WITH_AES_128_CBC_SHA",
"TLS_RSA_WITH_AES_256_CBC_SHA",
"TLS_RSA_WITH_AES_128_CBC_SHA",
"SSL_RSA_WITH_RC4_128_SHA",
"SSL_RSA_WITH_RC4_128_MD5",
"TLS_EMPTY_RENEGOTIATION_INFO_SCSV"
3. To disable a single protocol, for example TLSv1.0, add the following to the
config file:
TLS_CONFIGURE=y
TLS_DISABLED_ALGO="tlsv1"
4. To disable multiple protocols, for example TLSv1.0 and TLSv1.1, add the
following to the config file:
TLS_CONFIGURE=y
TLS_DISABLED_ALGO="tlsv1, tlsv1.1"
6. Save your changes to the config file.
7. Run the following command to configure TLS:
8. /opt/apigee/apigee-service/bin/apigee-service edge-ui configure-ssl -f
configFile
9. where configFile is the full path to the config file.
10. Restart the Edge UI:
11. /opt/apigee/apigee-service/bin/apigee-service edge-ui restart
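For reference, a complete config file combining the base TLS settings with these optional properties might look like the following (values are illustrative and taken from the examples earlier in this section):

```
HTTPSPORT=9443
DISABLE_HTTP=y
KEY_ALGO=JKS
KEY_FILE_PATH=/opt/apigee/customer/application/mykeystore.jks
KEY_PASS=clearTextKeystorePWord
TLS_CONFIGURE=y
TLS_PROTOCOL=TLSv1.2
TLS_DISABLED_ALGO="tlsv1, tlsv1.1"
```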
Cookies without the secure flag can potentially allow an attacker to capture and
reuse the cookie or hijack an active session. Therefore, best practice is to enable this
setting.
To confirm the change is working, check the response headers from the Edge UI by
using a utility such as curl; for example:
curl -i -v https://edge_UI_URL
The header should contain a line that sets the secure flag on the cookie.
By default, you access the Edge UI in the form:
http://newue_IP:3001
Alternatively, you can configure TLS access to the Edge UI so that you can access it
in the form:
https://newue_IP:3001
NOTE: The Edge UI uses the same port for HTTPS and HTTP. That means you cannot access the
Edge UI using both HTTP and HTTPS concurrently.
TLS requirements
The Edge UI only supports TLS v1.2. If you enable TLS on the Edge UI, users must
connect to the Edge UI using a browser compatible with TLS v1.2.
Execute the following command to configure TLS for the Edge UI:
/opt/apigee/apigee-service/bin/apigee-service edge-management-ui configure-ssl -f configFile
Where configFile is the config file that you used to install the Edge UI.
Before you execute this command, you must edit the config file to set the necessary
properties that control TLS. The following table describes the properties that you use
to configure TLS for the Edge UI:
Property Description Required?
MANAGEMENT_UI_SCHEME: Sets the protocol, "http" or "https", used to access the
Edge UI. The default value is "http". Set it to "https" to
enable TLS:
MANAGEMENT_UI_SCHEME=https
(Required: Yes)
MANAGEMENT_UI_TLS_OFFLOAD: If "n", specifies that TLS requests to the Edge UI are
terminated at the Edge UI. You must set
MANAGEMENT_UI_TLS_KEY_FILE and
MANAGEMENT_UI_TLS_CERT_FILE.
MANAGEMENT_UI_SCHEME=https
MANAGEMENT_UI_TLS_OFFLOAD=y
MANAGEMENT_UI_PUBLIC_URIS=$MANAGEMENT_UI_SCHEME://$MANAGEMENT_UI_IP:$MANAGEMENT_UI_PORT
Where:
If MANAGEMENT_UI_TLS_OFFLOAD=y:
MANAGEMENT_UI_TLS_ALLOWED_CIPHERS="TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
(Required: Yes)
SHOEHORN_SCHEME: Before you install the new Edge UI, you first install the
base Edge UI, called shoehorn. The installation config
file uses the following property to specify the protocol,
"http", used to access the base Edge UI:
SHOEHORN_SCHEME=http
(Required: Yes; set to "http")
Configure TLS
1. Generate the TLS cert and key as PEM files with no passphrase. For example:
2. mykey.pem
3. mycert.pem
4. There are many ways to generate a TLS cert and key. For example, you can
execute the following command to generate an unsigned cert and key:
5. openssl req -x509 -newkey rsa:4096 -keyout mykey.pem -out mycert.pem -days 365 -nodes -subj '/CN=localhost'
6. Note: You do not typically use an unsigned cert and key in a production environment.
7. Copy the key and cert files to the
/opt/apigee/customer/application/edge-management-ui
directory. If that directory does not exist, create it.
8. Make sure the cert and key are owned by the "apigee" user:
9. chown apigee:apigee /opt/apigee/customer/application/edge-management-
ui/*.pem
10. Edit the config file that you used to install the Edge UI to set the following TLS
properties:
Note: When enabling TLS, you must pass the complete config file that you used to install
the Edge UI.
# Leave these properties set to the same values as when you installed the Edge UI:
MANAGEMENT_UI_PUBLIC_URIS=$MANAGEMENT_UI_SCHEME://$MANAGEMENT_UI_IP:$MANAGEMENT_UI_PORT
11. SHOEHORN_SCHEME=http
12. Execute the following command to configure TLS:
13. /opt/apigee/apigee-service/bin/apigee-service edge-management-ui configure-ssl -f configFile
14. Where configFile is the name of the config file.
The script restarts the Edge UI.
15. Run the following commands to setup and restart shoehorn:
/opt/apigee/apigee-service/bin/apigee-service edge-ui setup -f configFile
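Before copying the generated files into place, you can sanity-check the pair (a sketch; it assumes the openssl CLI is available and uses the mykey.pem and mycert.pem names from step 1):

```shell
# Show the subject and expiration date of the generated certificate.
openssl x509 -in mycert.pem -noout -subject -enddate
# Verify that the private key is well-formed.
openssl rsa -in mykey.pem -check -noout
```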
If you have a load balancer that forwards requests to the Edge UI, you might choose
to terminate the TLS connection on the load balancer, and then have the load
balancer forward requests to the Edge UI over HTTP:
This configuration is supported, but you must configure the load balancer and the
Edge UI accordingly.
1. Edit the config file that you used to install the Edge UI to set the following TLS
properties:
NOTE: When enabling TLS, you must pass the complete config file that you used to install
the Edge UI.
# Leave these properties set to the same values as when you installed the Edge UI:
MANAGEMENT_UI_PUBLIC_URIS=$MANAGEMENT_UI_SCHEME://$MANAGEMENT_UI_IP:$MANAGEMENT_UI_PORT
SHOEHORN_SCHEME=http
3. If you set MANAGEMENT_UI_TLS_OFFLOAD=y, omit
MANAGEMENT_UI_TLS_KEY_FILE and MANAGEMENT_UI_TLS_CERT_FILE.
They are ignored because requests to the Edge UI come in over HTTP.
4. Execute the following command to configure TLS:
5. /opt/apigee/apigee-service/bin/apigee-service edge-management-ui configure-ssl -f configFile
6. Where configFile is the name of the config file.
The script restarts the Edge UI.
7. Run the following commands to setup and restart shoehorn:
/opt/apigee/apigee-service/bin/apigee-service edge-ui setup -f configFile
1. Edit the config file that you used to install the Edge UI to set the following TLS
property:
NOTE: When disabling TLS, you must pass the complete config file that you used to
install the Edge UI.
# Set to http to disable TLS.
MANAGEMENT_UI_SCHEME=http
3. Execute the following command to disable TLS:
4. /opt/apigee/apigee-service/bin/apigee-service edge-management-ui configure-ssl -f configFile
5. Where configFile is the name of the config file.
The script restarts the Edge UI.
6. Run the following commands to setup and restart shoehorn:
/opt/apigee/apigee-service/bin/apigee-service edge-ui setup -f configFile
Note: This feature is only available in Edge for Private Cloud 4.51.00 and later versions. To use
TLS 1.3, nodes hosting the Router must have the Red Hat Enterprise Linux (RHEL) 8.0 operating
system (or later) installed, which supports the openssl 1.1 library.
11. }
12. A sample vhost with this property is shown below:
{
"hostAliases": [
"api.myCompany.com"
],
"interfaces": [],
"listenOptions": [],
"name": "secure",
"port": "443",
"retryOptions": [],
"properties": {
"property": [
{
"name": "ssl_protocols",
"value": "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
}
]
},
"sSLInfo": {
"ciphers": [],
"clientAuthEnabled": "false",
"enabled": "true",
"ignoreValidationErrors": false,
"keyAlias": "myCompanyKeyAlias",
"keyStore": "ref://myCompanyKeystoreref",
"protocols": []
},
"useBuiltInFreeTrialCert": false
}
14. Testing TLS 1.3
To test TLS 1.3, enter the following command:
15. curl -v --tlsv1.3 "https://api.myCompany.com/testproxy"
16. Note that TLS 1.3 can only be tested on clients that support this protocol. If
TLS 1.3 is not enabled, you will see an error message like the following:
17. sslv3 alert handshake failure
To learn more about the TLS 1.3 feature in Java, see JDK 8u261 Update Release Notes.
The procedure to enable TLS 1.3 depends on the version of Java you're using. See
Check the Java version in a Message Processor below to find the version of Java
installed in the Message Processor.
In the following Java versions, TLS v1.3 feature exists but is not enabled by default in
client roles:
● Oracle JDK 8u261 or later but less than Oracle JDK 8u341
● OpenJDK 8u272 or later but less than OpenJDK 8u352
If you are using one of these versions, you need to enable TLS v1.3, as described in
How to enable TLS v1.3 when it is not enabled by default.
If you are using one of the following versions, TLS v1.3 should already be enabled by
default in client roles (the Message Processor acts as a client for southbound TLS
connections), so you don't need to take any action:
● Oracle JDK 8u341 or later
● OpenJDK 8u352 or later
For TLS v1.3 to work, all the following must hold true:
In the Message Processor, set the Java property jdk.tls.client.protocols.
Values are comma separated and can contain one or more of TLSv1, TLSv1.1,
TLSv1.2, TLSv1.3, and SSLv3.
See Change other JVM properties to learn how to set JVM properties in an Edge
component.
● Configure your target server's SSLInfo and ensure that TLSv1.3 is not
mentioned in the protocols list. See TLS/SSL TargetEndpoint Configuration
Elements. Note: If no protocols are specified in the target server configuration,
whatever protocols are supported by Java will be sent as options in client
handshake.
● Disable TLS v1.3 in the Message Processor by disabling the protocol
completely. See Set the TLS protocol on the Message Processor.
java -version
$ java -version
openjdk version "1.8.0_312"
OpenJDK Runtime Environment (build 1.8.0_312-b07)
OpenJDK 64-Bit Server VM (build 25.312-b07, mixed mode)
Supported ciphers
At present, Java 8 supports two TLS v1.3 ciphers:
● TLS_AES_256_GCM_SHA384
● TLS_AES_128_GCM_SHA256
You can use openssl to check whether your target server supports TLS v1.3 and at
least one of the ciphers above. Note that this example uses the openssl11
utility, which has TLS v1.3 enabled.
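A sketch of such a check (the host name is a placeholder; openssl11 is assumed to be installed, as noted above):

```shell
# Attempt a TLS v1.3-only handshake against the target server and report the
# negotiated protocol and cipher suite.
openssl11 s_client -connect target.example.com:443 -tls1_3 </dev/null 2>/dev/null \
  | grep -E 'Protocol|Cipher'
```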
For the Router, you can also set the protocol for individual virtual hosts. See
Configuring TLS access to an API for the Private Cloud for more.
For the Message Processor, you can set the protocol for an individual
TargetEndpoint. See Configuring TLS from Edge to the backend (Cloud and Private
Cloud) for more.
1. Open the router.properties file in an editor. If the file does not exist,
create it:
2. vi /opt/apigee/customer/application/router.properties
3. Set the properties as desired:
4. # Possible values are space-delimited list of: TLSv1 TLSv1.1 TLSv1.2
5. conf_load_balancing_load.balancing.driver.server.ssl.protocols=TLSv1.2
6. Save your changes.
7. Make sure the properties file is owned by the "apigee" user:
8. chown apigee:apigee /opt/apigee/customer/application/router.properties
9. Restart the Router:
10. /opt/apigee/apigee-service/bin/apigee-service edge-router restart
11. Verify that the protocol is updated correctly by examining the NGINX file
/opt/nginx/conf.d/0-default.conf:
12. cat /opt/nginx/conf.d/0-default.conf
13. Ensure that the value for ssl_protocols is TLSv1.2.
14. If you're using two-way TLS with a virtual host, you must also set the TLS
protocol in the virtual host as described in Configuring TLS access to an API
for the Private Cloud.
# Specify the ciphers that the Message Processor supports. (You must
separate ciphers with a comma.):
conf_message-processor-communication_local.http.ssl.ciphers=cipher[,...]
7. Possible values for conf_message-processor-communication_local.http.ssl.ciphers are:
● TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
● TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
● TLS_RSA_WITH_AES_256_GCM_SHA384
● TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384
● TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384
● TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
● TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
8. For example:
conf/system.properties+https.protocols=TLSv1.2
conf/jvmsecurity.properties+jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1
9. conf_message-processor-communication_local.http.ssl.ciphers=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384,TLS_DHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
10. For a complete list of related properties, see Configuring TLS between a
Router and a Message Processor.
11. Save your changes.
12. Make sure the properties file is owned by the "apigee" user:
13. chown apigee:apigee /opt/apigee/customer/application/message-processor.properties
14. Restart the Message Processor:
15. /opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart
16. If you are using two-way TLS with the backend, set the TLS protocol in the
virtual host as described in Configuring TLS from Edge to the backend (Cloud
and Private Cloud).
3. # conf_cassandra_require_client_auth=true
4. Ensure that the file cassandra.properties is owned by the apigee user:
chown apigee:apigee \
/opt/apigee/customer/application/cassandra.properties
Execute the following steps on each Cassandra node, one at a time, so the changes
take effect without causing any downtime for users:
4. Stop the Cassandra service:
/opt/apigee/apigee-service/bin/apigee-service \
apigee-cassandra stop
5. Restart the Cassandra service:
/opt/apigee/apigee-service/bin/apigee-service \
apigee-cassandra start
7. To determine if the TLS encryption service has started, check the system logs
for the following message:
8. Starting Encrypted Messaging Service on TLS port
1. Add the certificate for each unique generated key pair (see Appendix) to an
existing Cassandra node's truststore, such that both the old certificates and
new certificates exist in the same truststore:
keytool -import -v -trustcacerts -alias NEW_ALIAS \
6. apigee-cassandra restart
7. On each Cassandra node in the cluster, update the properties shown below to
the new keystore values in the cassandra.properties file:
conf_cassandra_keystore=NEW_KEYSTORE_PATH
conf_cassandra_keystore_password=NEW_KEYSTORE_PASSWORD
where NEW_KEYSTORE_PATH is the path to the directory where the keystore file is
located and NEW_KEYSTORE_PASSWORD is the keystore password set when the
certificates were generated.
Appendix
The following example explains how to prepare server certificates needed to perform
the internode encryption steps. The commands shown in the example use the
following parameters:
Parameter Description
keypass The keypass must be the same for both the keystore
and the key.
-validity The value set on this flag makes the generated key
pair valid for 10 years.
1. Go to the following directory:
2. cd /opt/apigee/data/apigee-cassandra
3. Run the following command to generate a file named keystore.node0 in
the current directory:
keytool -genkey -keyalg RSA -alias node0 -validity 3650 \
-keystore keystore.node0 -storepass keypass \
-keypass keypass -dname "CN=10.128.0.39, OU=None, \
7. -keystore keystore.node0
8. Ensure the file is readable by the apigee user only and by no one else:
$ chown apigee:apigee \
/opt/apigee/data/apigee-cassandra/keystore.node0
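Note that chown alone sets ownership, not permissions. To also satisfy the "readable by the apigee user only" requirement, the file mode can be tightened as well (a sketch; chmod 400 is an assumption, not part of the original steps):

```shell
chown apigee:apigee /opt/apigee/data/apigee-cassandra/keystore.node0
# Remove group and other access so only the apigee user can read the keystore.
chmod 400 /opt/apigee/data/apigee-cassandra/keystore.node0
```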
14. $ openssl pkcs12 -in node0.p12 -nodes -nocerts -out node0.key.pem -passin pass:keypass
15. For node-to-node encryption, copy the node0.cer file to each node and
import it to the truststore of each node.
keytool -import -v -trustcacerts -alias node0 \
Architectural overview
To provide secure communications between components, Apigee mTLS uses a
service mesh that establishes secure, mutually authenticated TLS connections
between components.
The following image shows connections between Apigee components that Apigee
mTLS secures (in red). The ports shown in the image are examples; refer to Port
usage for a list of ranges that each component can use.
(Note that ports indicated with an "M" are used to manage the component and must
be open on the component for access by the Management Server.)
As you can see in the diagram above, Apigee mTLS adds security to connections
between most components in the cluster, including:
Source Destination
Message encryption/decryption
The Apigee mTLS service mesh consists of Consul servers that run on each
ZooKeeper node in your cluster and the following Consul services on every node in
the cluster:
● An egress proxy that intercepts outgoing messages on the host node. This
service encrypts outgoing messages before sending them to their destination.
● An ingress proxy that intercepts incoming messages on the host node. This
service decrypts incoming messages before sending them to their final
destination.
For example, when the Management Server sends a message to the Router, the
egress proxy service intercepts the outgoing message, encrypts it, and then sends it
to the Router. When the Router's node receives the message, the ingress proxy
service decrypts the message and then passes it to the Router component for
processing.
This all happens transparently to the Edge components: they are unaware of the
encryption and decryption process carried out by the Consul proxy services.
In addition, Apigee mTLS uses the iptables utility, a Linux firewall service that
manages traffic redirection.
● component_name-port-IP-ingress
● component_name-port-IP-egress
Requirements
Before you can install Apigee mTLS, your environment must meet the following
requirements:
Requirement Description
Versions ● 4.51.00
● 4.50.00
● 4.19.06
Operating System: Supported operating systems (such as Oracle Linux) depend on
the Private Cloud version.
Utilities/packages
Apigee mTLS requires that you have the following packages installed and enabled on
each machine in your cluster, including your administration machine, before you
begin the installation process:
Utility/package Description OK to Remove After Installation?
During installation, you also install the Consul package on the administration
machine so that you can generate credentials and the encryption key.
The apigee-mtls package installs and configures the Consul servers including the
ingress and egress proxies on ZooKeeper nodes in the cluster.
Before installing, create a new user account or ensure that you have access to one
that has elevated privileges.
The account that executes the Apigee mTLS installation on each node in the cluster
must be able to:
Apigee recommends that you have a node within the cluster on which you can
perform various administrative tasks described in this document, including:
This section describes port usage and port assignments to support Consul
communications with Apigee mTLS.
You must open most of the following ports on the Consul server nodes (the nodes
running ZooKeeper) to accept requests from all nodes in the cluster:
* Apigee recommends that you restrict inbound requests to cluster members only (including cross-datastore). You can do this with
iptables.
As this table shows, the nodes running the consul-server component (ZooKeeper
nodes) must open ports 8301, 8302, 8502, and 8503 to all members of the cluster
that are running the apigee-mtls service, even across data centers. Nodes that are
not running ZooKeeper do not need to open these ports.
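One hedged way to implement the recommended restriction with iptables (the cluster IPs here are placeholders; the port list comes from the paragraph above):

```shell
# Accept the Consul ports only from known cluster members, then drop the rest.
for ip in 10.126.0.121 10.126.0.124 10.126.0.125; do
  iptables -A INPUT -p tcp -s "$ip" -m multiport \
    --dports 8301,8302,8502,8503 -j ACCEPT
done
iptables -A INPUT -p tcp -m multiport --dports 8301,8302,8502,8503 -j DROP
```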
Port assignments for all Consul nodes (including nodes running ZooKeeper)
OpenLDAP: 10200 to 10299 (1 port per node)
Postgres: 10300 to 10399 (3 ports per node)
Qpid: 10400 to 10499 (2 ports per node)
Router: 10600 to 10699 (2 ports per node)
ZooKeeper: 10001 to 10099 (3 ports per node)
Consul assigns ports in a simple linear fashion. For example, if your cluster has two
Postgres nodes, the first node uses two ports, so Consul assigns ports 10300 and
10301 to it. The second node also uses two ports, so Consul assigns 10302 and
10303 to that node. This applies to all component types.
As you can see, the actual number of ports depends on topology: If your cluster has
two Postgres nodes, then you'll need to open four ports (two nodes times two ports
each).
In a multi-data center configuration, all hosts running mTLS must also open port
8302.
You can customize the default ports that Apigee mTLS uses. For information on how
to do this, see Proxy port range customization.
Limitations
Apigee mTLS has the following limitations:
Terminology
This section uses the following terminology:
Term Definition
cluster The group of machines that make up your Edge for the
Private Cloud installation.
● apigee-cassandra
● apigee-postgresql
● apigee-zookeeper
While you do not reinstall ZooKeeper in your cluster, Apigee recommends that you
back up its data prior to installing Apigee mTLS.
For instructions on how to back up the data for these components, see How to back
up.
If you have previously disabled the localhost loopback address, you must re-
enable it on all nodes in your cluster.
The order of the nodes on which you do this does not matter.
To uninstall firewalld and ensure iptables is installed and running:
1. Log in to a node as root (or use sudo with the commands). Which node you
choose and the order in which you choose the nodes does not matter.
2. Stop all Apigee services by using the stop command, as the following
example shows:
3. /opt/apigee/apigee-service/bin/apigee-all stop
4. You do not restart the components until after you install and configure Apigee
mTLS.
5. Check that all services are stopped by using the status command, as the
following example shows:
6. /opt/apigee/apigee-service/bin/apigee-all status
7. Install Apigee mTLS by executing the following command:
8. /opt/apigee/apigee-service/bin/apigee-service apigee-mtls install
9. This command installs the following RPMs with your Edge for the Private
Cloud installation:
● apigee-mtls
● apigee-mtls-consul
10. Repeat steps 1 through 4 on each node in the cluster.
After installing Apigee mTLS on all nodes in the cluster, perform the following
steps:
After you install Apigee mTLS on a node, when you restart components on that node,
you must start the apigee-mtls component before any other component on that
node.
After you update your configuration file with the mTLS-related properties, copy it to
all nodes in the cluster before you initialize the apigee-mtls component on those
nodes.
TIP Commands in this section refer to the configuration file as config_file, which indicates that
its location is variable, depending on where you store it on each node.
ENABLE_SIDECAR_PROXY="y"
ENCRYPT_DATA="BASE64_GOSSIP_MESSAGE"
PATH_TO_CA_CERT="PATH/TO/consul-agent-ca.pem"
PATH_TO_CA_KEY="PATH/TO/consul-agent-ca-key.pem"
APIGEE_MTLS_NUM_DAYS_CERT_VALID_FOR="NUMBER_OF_DAYS"
6. Set the value of each property to align with your configuration.
7. The following table describes these configuration properties:
Property Description
You must choose one of the following methods to generate your credentials:
You generate a single version of each of these files once only. You then copy the key
and certificate files to all the nodes in your cluster, and add the encryption key to
your configuration file that you also copy to all nodes.
For more information about Consul's encryption implementation, see the following:
● Encryption
● Securing Agent Communication with TLS Encryption
1. On your administration machine, download the Consul 1.6.2 binary from the
HashiCorp website.
2. Extract the contents of the downloaded archive file. For example, extract the
contents to /opt/consul/.
NOTE: Continue with this procedure to use Consul to generate the credentials
(recommended). If you want to use your existing credentials, see Integrate custom
certificates with Apigee mTLS (advanced).
3. On your administration machine, create a new Certificate Authority (CA) by
executing the following command:
4. /opt/consul/consul tls ca create
5. Consul creates the following files, which form a certificate/key pair:
● consul-agent-ca.pem (certificate)
● consul-agent-ca-key.pem (key)
6. By default, certificate and key files are X509v3 encoded.
Later, you will copy these files to all nodes in the cluster. At this time, however,
you must only decide where on the nodes you will put these files. They should
be in the same location on each node. For example, /opt/apigee/.
7. In the configuration file, set the value of PATH_TO_CA_CERT to the location to
which you will copy the consul-agent-ca.pem file on the node. For
example:
8. PATH_TO_CA_CERT="/opt/apigee/consul-agent-ca.pem"
9. Set the value of PATH_TO_CA_KEY to the location to which you will copy the
consul-agent-ca-key.pem file on the node. For example:
10. PATH_TO_CA_KEY="/opt/apigee/consul-agent-ca-key.pem"
11. Create an encryption key for Consul by executing the following command:
12. /opt/consul/consul keygen
13. Consul outputs a randomized string that looks similar to the following:
14. QbhgD+EXAMPLE+Y9u0742X/IqX3X429/x1cIQ+JsQvY=
15. Copy this generated string and set it as the value of the ENCRYPT_DATA
property in your configuration file. For example:
16. ENCRYPT_DATA="QbhgD+EXAMPLE+Y9u0742X/IqX3X429/x1cIQ+JsQvY="
17. NOTE: The value above is an example encryption key. It is not the actual string you will
use. You must generate your own.
18. Save your configuration file.
The following example shows the mTLS-related settings in a configuration file (with
example values):
...
IP1=10.126.0.121
IP2=10.126.0.124
IP3=10.126.0.125
IP4=10.126.0.127
IP5=10.126.0.130
ALL_IP="$IP1 $IP2 $IP3 $IP4 $IP5"
LDAP_MTLS_HOSTS="$IP3"
ZK_MTLS_HOSTS="$IP3 $IP4 $IP5"
CASS_MTLS_HOSTS="$IP3 $IP4 $IP5"
PG_MTLS_HOSTS="$IP2 $IP1"
RT_MTLS_HOSTS="$IP4 $IP5"
MS_MTLS_HOSTS="$IP3"
MP_MTLS_HOSTS="$IP4 $IP5"
QP_MTLS_HOSTS="$IP2 $IP1"
ENABLE_SIDECAR_PROXY="y"
ENCRYPT_DATA="QbhgD+EXAMPLE+Y9u0742X/IqX3X429/x1cIQ+JsQvY="
PATH_TO_CA_CERT="/opt/apigee/consul-agent-ca.pem"
PATH_TO_CA_KEY="/opt/apigee/consul-agent-ca-key.pem"
...
● Configuration file: Copy the updated version of this file and replace the
existing version on all nodes (not just the nodes running ZooKeeper).
● consul-agent-ca.pem: Copy to the location you specified as the value of
PATH_TO_CA_CERT in the configuration file.
● consul-agent-ca-key.pem: Copy to the location you specified as the value of
PATH_TO_CA_KEY in the configuration file.
Be sure that the locations to which you copy the certificate and key files match the
values you set in the configuration file in Step 2: Install Consul and generate
credentials.
NOTE: This process also involves reinstalling Cassandra and Postgres on their respective nodes.
To initialize apigee-mtls:
1. Log in to a node in the cluster as the root user. You can perform these steps
on the nodes in any order you want.
2. Make the apigee:apigee user an owner of the updated configuration file, as
the following example shows:
3. chown apigee:apigee config_file
4. Configure the apigee-mtls component by executing the following
command:
5. /opt/apigee/apigee-service/bin/apigee-service apigee-mtls setup -f config_file
6. (Optional) Execute the following command to verify that your setup was
successful:
7. /opt/apigee/apigee-mtls/lib/actions/iptables.sh validate
8. Start Apigee mTLS by executing the following command:
9. /opt/apigee/apigee-service/bin/apigee-service apigee-mtls start
10. After installing Apigee mTLS, you must start this component before any other
components on the node.
11. (Cassandra nodes only) Cassandra requires additional arguments to work
within the security mesh. As a result, you must execute the following
commands on each Cassandra node:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra setup -f config_file
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra configure
CAUTION: Apigee mTLS setup and configuration is not idempotent. As a result, the apigee-
mtls configuration entries must be identical on all nodes: if you make a change on one node,
you must make that same change on all nodes. If you do not, the service mesh may fail.
● If you change a configuration file, you must first uninstall apigee-mtls and
then re-run setup or configure:
# DO THIS:
/opt/apigee/apigee-service/bin/apigee-service apigee-mtls uninstall

# BEFORE YOU DO THIS:
/opt/apigee/apigee-service/bin/apigee-service apigee-mtls setup -f file
OR
/opt/apigee/apigee-service/bin/apigee-service apigee-mtls configure
● You must uninstall and re-run setup or configure on all nodes in the
cluster, not just a single node.
NOTE: Repeatedly executing setup or configure on apigee-mtls does not behave the same
way as it does for other Apigee services. It will not necessarily generate a new configuration (or
modify the local configuration). As a result, you must uninstall mTLS when you change the
configuration.
Apigee mTLS is supported on Apigee Edge for Private Cloud versions 4.51.00, 4.50.00, 4.19.06,
and 4.19.01. You cannot use apigee-mtls on versions 4.18.* or lower.
Upgrading
To upgrade Apigee mTLS:
● You uninstalled firewalld from the node and replaced it with iptables, as
described in Replace the default firewall.
● You stopped all Apigee components on the node, including apigee-mtls.
1. Log in to a node in your cluster. The order in which you do this does not
matter.
2. Stop all components on the node, as the following example shows:
3. /opt/apigee/apigee-service/bin/apigee-all stop
4. Execute the validate command, as the following example shows:
5. /opt/apigee/apigee-mtls/lib/actions/iptables.sh validate
6. The validate script sends messages to every port that either Consul or the
local Apigee services use. If the script encounters an invalid rule or a failed
route, it displays an error.
If any Apigee services or Consul servers are running on the node, then this
command will fail.
7. Start the apigee-mtls component before all other components on the node
by executing the following command:
8. /opt/apigee/apigee-service/bin/apigee-service apigee-mtls start
9. Start the remaining Apigee components on the node in the start order, as the
following example shows:
10. /opt/apigee/apigee-service/bin/apigee-service component_name start
11. Repeat these steps on all nodes in the cluster. Ideally, do this on all nodes
within 5 minutes of having started on the first node.
To check the quorum status, log in to each node running ZooKeeper and execute the
following command:
This command displays a list of the Consul instances and their statuses, as the
following example shows:
In addition, you can get information about the cluster's health, including whether the
cluster's quorum has formed and if remote members are impairing functionality. To
do this, use the following command:
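The Consul tooling bundled with apigee-mtls can report both member status and quorum health. The following is a sketch that assumes the consul binary is on the PATH; the exact location of the bundled binary may differ in your installation.

```shell
# Sketch: report Consul member status and raft quorum health.
# Assumes the bundled consul binary is reachable on the PATH.
CONSUL_BIN="$(command -v consul || true)"
if [ -z "$CONSUL_BIN" ]; then
  echo "consul binary not found; run this on a node with apigee-mtls installed"
else
  # Lists cluster members and their statuses (alive/left/failed).
  "$CONSUL_BIN" members
  # Shows the leader and voting peers, i.e. whether quorum has formed.
  "$CONSUL_BIN" operator raft list-peers
fi
```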
● You must uninstall and reinstall apigee-mtls with the new values.
● Consul proxies cannot listen on the same ports as Apigee services.
● Consul has a single port address space. This means that if proxy A on host A
listens on port 15000, then proxy B on host B cannot listen on port 15000.
● Review the Apigee port requirements to ensure that no collisions occur.
You can customize the ports that the proxies use to suit your particular
configuration.
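A quick way to sanity-check two custom ranges before applying them is to test them for overlap. This is a sketch; the property values below are illustrative.

```shell
# Sketch: check that two proxy port ranges do not overlap before applying them.
ranges_overlap() {
  # args: min1 max1 min2 max2 ; exit 0 (true) if the ranges overlap
  [ "$1" -le "$4" ] && [ "$3" -le "$2" ]
}

# Illustrative values only:
SMI_PROXY_MINIMUM_CASSANDRA_PROXY_PORT=10142
SMI_PROXY_MAXIMUM_CASSANDRA_PROXY_PORT=10143
SMI_PROXY_MINIMUM_POSTGRES_PROXY_PORT=10300
SMI_PROXY_MAXIMUM_POSTGRES_PROXY_PORT=10399

if ranges_overlap "$SMI_PROXY_MINIMUM_CASSANDRA_PROXY_PORT" \
                  "$SMI_PROXY_MAXIMUM_CASSANDRA_PROXY_PORT" \
                  "$SMI_PROXY_MINIMUM_POSTGRES_PROXY_PORT" \
                  "$SMI_PROXY_MAXIMUM_POSTGRES_PROXY_PORT"; then
  echo "collision: ranges overlap"
else
  echo "no collision"
fi
```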
Report structure
{
  "192.168.1.1": {
    "datacenter_member": "dc-1",
    "daemons": {
      "zookeeper-ingress": {
        "ingress": true,
        "name": "zk-2888-192-168-1-1",
        "listeners": [
          {
            "purpose": "terminate service mesh for zk port 2888",
            "ip_address": "192.168.1.1",
            "port": 10001
          }
        ]
      },
      "consul-server": {
        ...
      }
    }
  },
  "192.168.1.2": { },
  ...
}
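To see at a glance which listener ports a report assigns, you can extract them with standard text tools. This is a sketch: report.json is a hypothetical file name standing in for saved report output like the structure above.

```shell
# Sketch: extract the proxy listener ports from a saved mTLS report.
# report.json is an illustrative file; a trimmed sample is created here.
cat > report.json <<'EOF'
{"192.168.1.1": {"daemons": {"zookeeper-ingress": {"listeners": [
  {"ip_address": "192.168.1.1", "port": 10001}]}}}}
EOF

# Print every assigned listener port.
grep -o '"port": [0-9]*' report.json | awk '{print $2}'
```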
Each node type has a default port range for its proxies. You define a custom range for a node type by setting the minimum and maximum port numbers with the following properties:
Egress:
SMI_PROXY_MINIMUM_EGRESS_PROXY_PORT
SMI_PROXY_MAXIMUM_EGRESS_PROXY_PORT
Cassandra:
SMI_PROXY_MINIMUM_CASSANDRA_PROXY_PORT
SMI_PROXY_MAXIMUM_CASSANDRA_PROXY_PORT
Message Processor:
SMI_PROXY_MAXIMUM_MESSAGEPROCESSOR_PROXY_PORT
LDAP:
SMI_PROXY_MINIMUM_LDAP_PROXY_PORT
SMI_PROXY_MAXIMUM_LDAP_PROXY_PORT
Postgres (default range 10300 to 10399): Each host with an apigee-postgres installation requires three ports in the specified range. You define a custom range by setting the minimum and maximum port numbers with the following properties:
SMI_PROXY_MINIMUM_POSTGRES_PROXY_PORT
SMI_PROXY_MAXIMUM_POSTGRES_PROXY_PORT
Qpid:
SMI_PROXY_MINIMUM_QPID_PROXY_PORT
SMI_PROXY_MAXIMUM_QPID_PROXY_PORT
Router:
RT_PROXY_PORT_MIN
RT_PROXY_PORT_MAX
ZooKeeper:
SMI_PROXY_MINIMUM_ZOOKEEPER_PROXY_PORT
SMI_PROXY_MAXIMUM_ZOOKEEPER_PROXY_PORT
6. The following example defines custom values for the Cassandra ports:
SMI_PROXY_MINIMUM_CASSANDRA_PROXY_PORT=10142
SMI_PROXY_MAXIMUM_CASSANDRA_PROXY_PORT=10143
7. Save the configuration file.
8. Install apigee-mtls as described in Install Apigee mTLS.
9. Configure the apigee-mtls component by using the following command:
/opt/apigee/apigee-service/bin/apigee-service apigee-mtls setup -f config_file
10. Repeat these steps for each node in your cluster so that all configuration files are the same across all nodes.
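Because every node must carry the same configuration file, it is worth comparing the copies before running setup. This is a sketch: the file names are illustrative, and in practice you would first fetch the file from each node (for example, with scp).

```shell
# Sketch: confirm two copies of the mtls configuration file are identical.
# In practice, copy the file from each node first (e.g. with scp).
printf 'REGION=dc-1\n' > node1.cfg   # illustrative stand-ins
printf 'REGION=dc-1\n' > node2.cfg

if cmp -s node1.cfg node2.cfg; then
  echo "configs match"
else
  echo "configs differ; fix before running setup"
fi
```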
Before you can use apigee-monit with mTLS, you must change the owner of the
tool to root. To do this, execute the following commands on each node in the
cluster:
For additional information about using monit (the tool on which apigee-monit is
based) with Consul, see:
● https://mmonit.com/monit/documentation/monit.html
● https://mmonit.com/wiki/
● Consul Checks
● Consul Server Health API
TERMINOLOGY: When configuring mTLS for multiple data centers, each data center is
considered a region.
The installation process for mTLS on a multi-data center topology is the same as it is
for simpler topologies. However, you must ensure that your installation meets the
prerequisites and that you change your configuration files as described in the
sections that follow.
Prerequisites
To use Apigee mTLS with multiple data centers, you create a separate configuration
file for each data center.
1. Change the value of the ALL_IP configuration property to include all host IP
addresses in all regions.
2. Ensure that the value of the REGION property is the name of the current region
or data center. For example, "dc-1".
3. Add the following property:
APIGEE_MTLS_MULTI_DC_ENABLE: Whether or not you are using a multi-data center configuration. Set to "y" if you are configuring multiple data centers; otherwise, omit the property or set it to "n". By default, the property is omitted.
The following example shows the configuration file for data center "dc-1" in a two data center configuration ("dc-1" and "dc-2"). The properties that are specific to a multi-data center configuration (such as APIGEE_MTLS_MULTI_DC_ENABLE and MTLS_REMOTE_REGION_1_NAME) appear at the end:
LDAP_MTLS_HOSTS="10.126.0.114 10.126.0.106"
PG_MTLS_HOSTS="10.126.0.104 10.126.0.112"
MS_MTLS_HOSTS="10.126.0.114 10.126.0.106"
ENABLE_SIDECAR_PROXY="y"
ENCRYPT_DATA="zRNQ9lhRySNTfegiLLLfIQ=="
PATH_TO_CA_CERT="/opt/consul-agent-ca.pem"
PATH_TO_CA_KEY="/opt/consul-agent-ca-key.pem"
APIGEE_MTLS_MULTI_DC_ENABLE="y"
REGION="dc-1"
MTLS_REMOTE_REGION_1_NAME="dc-2"
For information about the standard configuration properties, see Step 1: Update your
configuration file.
The following examples show sample output from a raft list-peers command:
Apigee mTLS has been tested on two data centers. You can, however, specify
configurations of up to 17 data centers by using the following properties:
● MTLS_REMOTE_REGION_[1-17]_IP
● MTLS_REMOTE_REGION_[1-17]_NAME
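Under these properties, the dc-2 counterpart of the dc-1 file shown earlier would reverse the region names. This fragment is illustrative; in particular, the value format of the _IP property (a space-separated list of the remote region's host IPs) is an assumption:

```
APIGEE_MTLS_MULTI_DC_ENABLE="y"
REGION="dc-2"
MTLS_REMOTE_REGION_1_NAME="dc-1"
# Assumed format: space-separated list of the remote region's host IPs
MTLS_REMOTE_REGION_1_IP="10.126.0.114 10.126.0.106"
```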
1. Log in to a node in your cluster. The order in which you visit the nodes does not matter.
2. Stop all components on the node, as the following example shows:
/opt/apigee/apigee-service/bin/apigee-all stop
3. Uninstall the apigee-mtls service by executing the following command:
/opt/apigee/apigee-service/bin/apigee-service apigee-mtls uninstall
4. Start all components on the node in the start order, as the following example shows:
/opt/apigee/apigee-service/bin/apigee-service component_name start
5. Repeat this process for each node in the cluster.
To verify that the uninstallation was successful, you can do the following (in any
order):
1. On each node that is running ZooKeeper, check that the Consul services are
not in the /usr/lib/systemd/system directory:
● Change to the /usr/lib/systemd/system directory:
cd /usr/lib/systemd/system
● Ensure that the following files are not in that directory:
consul_egress.service
consul_server.service
● If either of these files is in the /usr/lib/systemd/system directory, delete it.
2. On each node that is running ZooKeeper, check to see if the apigee-mtls
and apigee-mtls-consul directories exist:
● Change to the Apigee root directory:
cd ${APIGEE_ROOT:-/opt/apigee}
● Check the contents of the directory:
ls
● Ensure that the following directories do not exist in this directory:
apigee-mtls-version
apigee-mtls-consul-version
● If either of these directories exists, delete it.
3. In the same directory, ensure that symlinks to the following have been
removed:
● apigee-mtls
● apigee-mtls-consul
To do this, use the find -L option, as the following example shows:
find -L ./
If symbolic links to these directories remain, you can remove them with either
the rm or unlink command.
4. On each node that is running ZooKeeper, check that Consul has been
removed by using the which command:
which consul
This command should respond with a message similar to the following:
"/usr/bin/which: no consul in (...:/opt/apigee/apigee-adminapi-version/bin:...)"
5. Execute the following command as root or with sudo:
iptables -t nat -L OUTPUT
This command should display column headings but no data in the columns,
as the following example shows:
target prot opt source destination
6. Use yum to determine if the Apigee mTLS packages are installed:
yum list installed
This command should not display any packages matching the following:
● apigee-mtls-version
● apigee-mtls-consul-version
This process consists of the following steps, which you must perform on each node:
1. Create the private key for the node. Each node must have a unique private
key.
2. Create the signature config for the node. Each node must have its own
signature configuration file.
3. Build the request by converting the signature configuration file into a
signature request file.
4. Sign the request so that you can get a local key/cert pair for the node.
5. Integrate all the key/cert pairs with your nodes.
Apigee recommends that you perform steps 1 through 4 for all nodes, and then
perform step 5 for all nodes, rather than walking through all 5 steps for each node,
one at a time.
To create a private key for a node, execute the following command on each node:
For example:
This command creates an RSA private key file named local_key.pem which has
no public key component.
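A minimal sketch of a command that produces such a key is shown below. The key size (2048 bits) is an assumption; the official installer may use different parameters, so treat this as illustrative.

```shell
# Sketch: generate an RSA private key named local_key.pem.
# The key size (2048) is an assumption; use your organization's standard.
openssl genrsa -out local_key.pem 2048
```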
Next step: Create the signature config.
The following example shows the syntax for a signature configuration file:
[req]
distinguished_name = req_distinguished_name
attributes = req_attributes
prompt = no
[ req_distinguished_name ]
C=COUNTRY_NAME
ST=STATE_NAME
L=CITY_NAME
O=ORG_OR_BUSINESS_NAME
OU=ORG_UNIT
CN=ORG_DEPARTMENT
[ req_attributes ]
[ cert_ext ]
subjectKeyIdentifier=hash
keyUsage=critical,keyEncipherment,dataEncipherment
extendedKeyUsage=serverAuth,clientAuth
subjectAltName=@alt_names
[alt_names]
DNS.1=localhost
DNS.2=ipv4-localhost
DNS.3=ipv6-localhost
DNS.4=cli.dc-1.consul
DNS.5=client.dc-1.consul
DNS.6=server.dc-1.consul
DNS.7=FQDN
# ADDITIONAL definitions, as needed:
DNS.8=ALT_FQDN_1
DNS.9=ALT_FQDN_2
The following table describes the properties in the signature configuration file:
Property | Description | Required?
...
...
[alt_names]
DNS.1=localhost
DNS.2=ipv4-localhost
DNS.3=ipv6-localhost
DNS.4=cli.dc-1.consul
DNS.5=client.dc-1.consul
DNS.6=server.dc-1.consul
DNS.7=FQDN
...
hostname --fqdn
IP.[1...] Set IP.1 to the valid IPv4 address by which every member
of the cluster (including nodes sending cross-data center
traffic) observes this node.
Next step: Build the request.
openssl req \
-key LOCAL_PRIVATE_KEY \
-config SIGNATURE_CONFIG_FILE \
-new \
-nodes \
-days NUMBER_OF_DAYS \
-out SIG_REQUEST_FILE
Where:
● LOCAL_PRIVATE_KEY is the path to the node's local private key that you
created in Step 1: Create the local private key.
● SIGNATURE_CONFIG_FILE is the path to the node's file that you created in
Step 2: Create the local signature config file.
● NUMBER_OF_DAYS is the number of days for which the certificate is valid.
The default is 30.
● SIG_REQUEST_FILE is the output location for the signature request file that
this command creates.
This command creates a *.csr file with the name you specified.
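A complete run of this step might look like the following sketch. The file names are illustrative, and the minimal configuration below omits the optional sections shown earlier; the subject values match the sample output in the next step.

```shell
# Illustrative end-to-end run: generate a key, write a minimal signature
# configuration file, then build the signature request from it.
openssl genrsa -out local_key.pem 2048

cat > sig_config.cfg <<'EOF'
[req]
distinguished_name = req_distinguished_name
prompt = no
[req_distinguished_name]
C=US
ST=CA
L=San Jose
O=Google
OU=Google-Cloud
CN=Apigee
EOF

openssl req \
  -key local_key.pem \
  -config sig_config.cfg \
  -new \
  -nodes \
  -days 30 \
  -out local_request.csr
```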
Next Step
Where:
Signature ok
subject=C = US, ST = CA, L = San Jose, O = Google, OU = Google-Cloud, CN =
Apigee
Getting CA Private Key
user@host:~/certificate_example$ ls
Your custom certificate/key pair is valid for 365 days by default. You can configure
the number of days by using the APIGEE_MTLS_NUM_DAYS_CERT_VALID_FOR
property, as described in Step 1: Update your configuration file.
Next step: Integrate the key/cert pairs.
The standard installation for Apigee mTLS is to perform the following general steps:
For custom certificate installation, you must perform additional steps, as described
in this section.
To integrate your custom certificates with Apigee mTLS, you copy the following files
to both the /certs and /source directories on each node in the cluster. You do
this during the installation:
For example, the installation steps for Apigee mTLS with a custom certificate are as
follows:
NOTE: Be sure to mimic the permissions and ownership of the original files you are replacing. For example, if you replace a file owned by the apigee:apigee user, then set the same ownership on the new file.
This process overrides the certificates that were generated during the initial setup.
After you complete the integration of the new certificates, you can verify that they are
valid by using the instructions in Verify the certificate.
Verify the certificate
You can inspect the certificates on each node to ensure that the names you added
are present in the final output.
The list of names and IP addresses should match those that you added in Step 2:
Create the local signature config file.
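One way to perform this inspection is to dump the certificate's subject alternative names with openssl. In this sketch, a throwaway self-signed certificate stands in for the node's real certificate file; point the final command at the certificate installed on the node.

```shell
# Generate a stand-in certificate with SANs (illustrative only; on a real
# node, skip this and inspect the installed certificate instead).
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=example" \
  -addext "subjectAltName=DNS:localhost,DNS:server.dc-1.consul" 2>/dev/null

# Inspect the subject alternative names present in the certificate.
openssl x509 -in cert.pem -noout -text | grep -A1 "Subject Alternative Name"
```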
Certificate errors
ZooKeeper hosts might experience blocks when executing the following commands:
If you find instances of the x509 error shown in the response, then multiple CAs are
being used and the Consul servers are failing to verify one another. In this case,
Apigee recommends that you uninstall apigee-mtls and then reinstall it on all
nodes in the cluster. For more information, see Uninstall Apigee mTLS.
SCALING
Adding a Router or Message Processor
node
You can add a Router or Message Processor node to an existing installation. For a
list of the system requirements for a Router or Message Processor, see Installation
Requirements.
Add a Router
After you install Edge on the node, use the following procedure to add the Router:
1. Install Edge on the node using the internet or non-internet procedure, as
described in the Edge Installation manual.
2. At the command prompt, run the apigee-setup.sh script:
/opt/apigee/apigee-setup/bin/setup.sh -p r -f configFile
The -p r option specifies to install the Router. See Install Edge components
on a node for information on creating a configFile.
3. When the installation completes, the script displays the UUID of the Router. If
you need to determine the UUID later, use the following cURL command on
the host where you installed the Router:
curl http://router_IP:8081/v1/servers/self
4. If you are using Cassandra authentication, enable the Router to connect to
Cassandra:
/opt/apigee/apigee-service/bin/apigee-service edge-router store_cassandra_credentials -u username -p password
For more information, see Enable Cassandra authentication.
5. To check the configuration, you can run the following cURL command:
curl -v -u adminEmail:pword "http://ms_IP:8080/v1/servers?pod=pod_name"
Where pod_name is gateway or your custom pod name. You should see the
UUIDs of all Routers, including the Router that you just added.
If the Router UUID does not appear in the output, run the following cURL
command to add it:
curl -v -u adminEmail:pword \
-X POST http://ms_IP:8080/v1/regions/region_name/pods/pod_name/servers \
-d "action=add&uuid=router_UUID&type=router"
Replace ms_IP with the IP address of the Management Server, region_name
with the default region name of dc-1 or your custom region name, and
pod_name with gateway or your custom pod name.
6. To test the Router, make requests to your APIs through the IP address or
DNS name of the Router. For example:
http://newRouter_IP:port/v1/apiPath
For example, if you completed the first tutorial where you created the weather
API:
http://newRouter_IP:port/v1/weather/forecastrss?w=12797282
Add a Message Processor
1. Install Edge on the node using the internet or non-internet procedure, as
described in the Edge Installation manual.
2. At the command prompt, run the apigee-setup.sh script:
/opt/apigee/apigee-setup/bin/setup.sh -p mp -f configFile
The -p mp option specifies to install the Message Processor. See Install Edge
components on a node for information on creating a configFile.
3. When the installation completes, the script displays the UUID of the Message
Processor. Note that UUID, as you need it to complete the configuration
process. If you need to determine the UUID, use the following cURL command
on the host where you installed the Message Processor:
curl http://mp_IP:8082/v1/servers/self
4. For each environment in each organization in your installation, use the
following cURL command to associate the Message Processor with the
environment:
curl -v -u adminEmail:pword \
-H "Content-Type: application/x-www-form-urlencoded" \
-X POST "http://ms_IP:8080/v1/o/org_name/e/env_name/servers" \
-d "action=add&uuid=mp_UUID"
Replace ms_IP with the IP address of the Management Server, and org_name
and env_name with the organization and environment associated with the
Message Processor.
5. To check the configuration, you can run the following cURL command:
curl -v -u adminEmail:pword "http://ms_IP:8080/v1/o/org_name/e/env_name/servers"
Where org_name is the name of your organization and env_name is the
environment. You should see the UUIDs of all Message Processors
associated with the organization and environment, including the Message
Processor that you just added.
6. If you are using Cassandra authentication, enable the Message Processor to
connect to Cassandra:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor store_cassandra_credentials -u username -p password
For more information, see Enable Cassandra authentication.
This section describes how to remove the following components from a Private
Cloud installation:
● Management Server
● Message Processor
● Router
For information on how to remove a Qpid server, see Remove a Qpid server.
2. -d "uuid=uuid®ion=regionName&pod=podName&action=remove"
3. Deregister the server's type:
curl -u adminEmail:pWord
-X POST http://management_IP:8080/v1/servers
4. -d "type=[message-processor|router|management-
server]®ion=regionName&pod=podName&uuid=uuid&action=remove"
5. Delete the server:
6. curl -u adminEmail:pWord -X DELETE
http://management_IP:8080/v1/servers/uuid
7. (Message Processor only) After deregistering a Message Processor, restart
the routers:
8. /opt/apigee/apigee-service/bin/apigee-service edge-router restart
While you can add one or two Cassandra nodes to an existing Edge installation,
Apigee recommends that you add three nodes at a time.
For a list of the system requirements for a Cassandra node, see Installation
requirements.
IP1=10.10.0.1
IP2=10.10.0.2
IP3=10.10.0.3
HOSTIP=$(hostname -i)
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=Secret123
LICENSE_FILE=/tmp/license.txt
MSIP=$IP1
USE_LDAP_REMOTE_HOST=n
LDAP_TYPE=1
APIGEE_LDAPPW=secret
MP_POD=gateway
REGION=dc-1
ZK_HOSTS="$IP1 $IP2 $IP3"
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP1:1,1 $IP2:1,1 $IP3:1,1"
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
SMTPPASSWORD=smtppwd
Note that the REGION property specifies the region name as "dc-1". You need that
information when adding the new Cassandra nodes.
For example, suppose you want to add three Cassandra nodes with the following IP
addresses:
● 10.10.0.14
● 10.10.0.15
● 10.10.0.16
You must first update the Edge configuration file to add the new nodes:
IP1=10.10.0.1
IP2=10.10.0.2
IP3=10.10.0.3
# Add the new node IP addresses.
IP14=10.10.0.14
IP15=10.10.0.15
IP16=10.10.0.16
HOSTIP=$(hostname -i)
ADMIN_EMAIL=opdk@google.com
...
# Update CASS_HOSTS to add each new node after an existing node.
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP1:1,1 $IP14:1,1 $IP2:1,1 $IP15:1,1 $IP3:1,1 $IP16:1,1"
Note: Add each new Cassandra node to CASS_HOSTS after an existing node.
This ensures that the existing nodes retain their initial token settings, and that the
initial token of each new node falls between the token values of the existing nodes.
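The interleaving described in the note can be sketched as a small script that builds the new CASS_HOSTS value. The IP addresses are the illustrative values used above.

```shell
# Sketch: interleave new Cassandra nodes after existing ones so existing
# nodes keep their initial tokens. IP values are illustrative.
EXISTING="10.10.0.1 10.10.0.2 10.10.0.3"
NEW="10.10.0.14 10.10.0.15 10.10.0.16"

CASS_HOSTS=""
set -- $NEW                     # positional params hold the new nodes
for ip in $EXISTING; do
  # Append one existing node, then one new node after it.
  CASS_HOSTS="$CASS_HOSTS $ip:1,1 $1:1,1"
  shift
done
CASS_HOSTS="${CASS_HOSTS# }"    # trim the leading space
echo "$CASS_HOSTS"
```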
Configure Edge
After editing the config file, you must:
1. Rerun setup.sh with the "-p c" profile and the new config file:
/opt/apigee/apigee-setup/bin/setup.sh -p c -f updatedConfigFile
Note: Cassandra uses the fixed data-center names dc-1, dc-2, and so on, irrespective of the
region names specified in the REGION property of the configuration file for the regions in which
you install the new nodes. In step 2 of the following procedure, you must use Cassandra's data
center names (dc-1, dc-2, etc.) in the commands.
On a Management-Server node
1. Rerun setup.sh to update the Management Server for the newly added
Cassandra nodes:
/opt/apigee/apigee-setup/bin/setup.sh -p ms -f updatedConfigFile
After you add a new node, you can use the nodetool cleanup command on the
pre-existing nodes to free up disk space. This command removes data whose
tokens are no longer owned by the pre-existing Cassandra node.
To free up disk space on pre-existing Cassandra nodes after adding a new node,
execute the following command. You only need to pass your username and
password if you enabled JMX authentication for Cassandra.
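A sketch of the cleanup invocation is shown below. The nodetool path assumes a standard Private Cloud layout and may differ in your installation; the -u/-pw flags are only needed when JMX authentication is enabled.

```shell
# Hypothetical sketch: path assumes a standard Private Cloud layout.
NODETOOL="${APIGEE_ROOT:-/opt/apigee}/apigee-cassandra/bin/nodetool"
if [ -x "$NODETOOL" ]; then
  # Add "-u username -pw password" if JMX authentication is enabled.
  "$NODETOOL" -h localhost cleanup
else
  echo "nodetool not found; run this on a Cassandra node"
fi
```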
Verify rebuild
Use the following commands to verify that the rebuild was successful:
This command should indicate MODE: Normal when the node is up and the indexes
are built.
Should indicate that the thrift server is running, which allows Cassandra to accept
new client requests.
Should indicate that the native transport (or binary protocol) is running.
Should show new nodes are using the same schema version as the older nodes.
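The checks above map onto common nodetool subcommands. This is a sketch under the assumption of a standard Private Cloud layout; the exact path, and which subcommand the original procedure used for each check, may differ.

```shell
# Hypothetical sketch: nodetool subcommands that back the checks above.
NODETOOL="${APIGEE_ROOT:-/opt/apigee}/apigee-cassandra/bin/nodetool"
if [ -x "$NODETOOL" ]; then
  "$NODETOOL" netstats        # look for "Mode: NORMAL"
  "$NODETOOL" statusthrift    # thrift server status
  "$NODETOOL" statusbinary    # native transport (binary protocol) status
  "$NODETOOL" describecluster # schema versions across nodes
else
  echo "nodetool not found; run this on a Cassandra node"
fi
```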
For more information on using nodetool, see the nodetool usage documentation.
AWS best practices
This section summarizes Apigee's best practices and recommendations for
using OPDK with the AWS cloud.
Cassandra serves as the backend datastore for almost all of the policies and is a
critical part of the Apigee Edge runtime environment. This section focuses on
optimizing Cassandra for the AWS environment.
Many EC2 instance types in AWS come with local instance storage, in which the hard
drive is physically attached to the hardware that hosts the EC2 instance. Apigee
recommends using SSD-backed instance stores when running Cassandra in
production. When you use an instance type with more than one SSD, you can use
RAID 0 to get more throughput and storage capacity.
Network requirements
Cassandra uses the Gossip protocol to exchange information with other nodes
about network topology. The use of Gossip, combined with the distributed nature of
Cassandra (which involves talking to multiple nodes for read and write operations),
results in a lot of data transfer over the network. Apigee recommends using an
instance type with at least 1 Gbps of network bandwidth, and more than 1 Gbps for
production systems.
Use a VPC with a CIDR of /16. Because subnets in AWS cannot span more than one
AZ, Apigee recommends the following:
Apigee recommends using an instance family that has a balance of both CPU and
memory. Specifically, we recommend using C5 family instances if they are available
in your AWS region, with C3 as a fallback option. In many cases, 4xlarge is the
optimal instance size in both families, providing the best price/performance.
Apigee also recommends using default tenancy for Cassandra instances. If you set
tenancy to dedicated and scale to more than one instance per AZ, your Cassandra
instances will most likely be placed on the same underlying hardware; if that
hardware fails, you will likely lose all of your instances in that AZ.
Recommendation summary
The following table summarizes the Apigee recommendations for using AWS with
Apigee Edge for Private Cloud:
You can add one or two ZooKeeper nodes to an existing Edge installation; however,
you must make sure that you always have an odd number of ZooKeeper voter nodes,
as described below.
IP1=10.10.0.1
IP2=10.10.0.2
IP3=10.10.0.3
HOSTIP=$(hostname -i)
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=Secret123
LICENSE_FILE=/tmp/license.txt
MSIP=$IP1
USE_LDAP_REMOTE_HOST=n
LDAP_TYPE=1
APIGEE_LDAPPW=secret
MP_POD=gateway
REGION=dc-1
ZK_HOSTS="$IP1 $IP2 $IP3"
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"
CASS_HOSTS="$IP1:1,1 $IP2:1,1 $IP3:1,1"
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
SMTPPASSWORD=smtppwd
Where:
For example, suppose you want to add three ZooKeeper nodes with the following IP
addresses:
● 10.10.0.14
● 10.10.0.15
● 10.10.0.16
You must first update the Edge configuration file to add the new nodes:
IP1=10.10.0.1
IP2=10.10.0.2
IP3=10.10.0.3
# Add the new node IP addresses.
IP14=10.10.0.14
IP15=10.10.0.15
IP16=10.10.0.16
HOSTIP=$(hostname -i)
ADMIN_EMAIL=opdk@google.com
...
# Update ZK_HOSTS to add each new node after the existing nodes.
ZK_HOSTS="$IP1 $IP2 $IP3 $IP14 $IP15 $IP16:observer"
# Update ZK_CLIENT_HOSTS to add each new node after the existing nodes.
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3 $IP14 $IP15 $IP16"
Mark the last node in ZK_HOSTS with the :observer modifier. Nodes without
the :observer modifier are called "voters". You must have an odd number of
"voters" in your configuration. Therefore, in this configuration, you have five
ZooKeeper voters and one observer.
Note: While you can configure three observers and three voters, Apigee recommends that you
use five voters.
Make sure to add the nodes to both ZK_HOSTS and ZK_CLIENT_HOSTS in the same
order. However, omit the :observer modifier when setting ZK_CLIENT_HOSTS.
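The odd-voter rule can be checked mechanically before you rerun setup. This sketch counts the voters in a ZK_HOSTS value; the host list is the illustrative one from above.

```shell
# Sketch: count ZooKeeper voters (nodes without the :observer modifier)
# and confirm the count is odd. The host list is illustrative.
ZK_HOSTS="10.10.0.1 10.10.0.2 10.10.0.3 10.10.0.14 10.10.0.15 10.10.0.16:observer"

voters=0
for h in $ZK_HOSTS; do
  case "$h" in
    *:observer) ;;                  # observers do not vote
    *) voters=$((voters + 1)) ;;
  esac
done
echo "voters: $voters"
if [ $((voters % 2)) -eq 1 ]; then
  echo "odd voter count: OK"
else
  echo "WARNING: even voter count"
fi
```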
Configure Edge
After editing the config file, you must perform all of the following tasks.
To validate:
By default, the name of the analytics group is axgroup-001, and the name of
the consumer group is consumer-group-001. In the silent config file for a
region, you can set the name of the analytics group by using the AXGROUP
property.
If you are unsure of the names of the analytics and consumer groups, use the
following command to display them:
apigee-adminapi.sh analytics groups list --admin adminEmail --pwd adminPword --host localhost
This command returns the analytics group name in the name field, and the
consumer group name in the consumer-groups field.
2. Remove Qpid from the consumer group:
curl -u adminEmail:pword -H "Content-Type: application/json" \
-X DELETE "http://ms_IP:8080/v1/analytics/groups/ax/AX_GROUP/consumer-groups/CONSUMER_GROUP/consumers/QPID_UUID"
3. Remove Qpid from the analytics group:
curl -v -u adminEmail:pword \
-X DELETE "http://ms_IP:8080/v1/analytics/groups/ax/AX_GROUP/servers?uuid=QPID_UUID&type=qpid-server"
4. Deregister the Qpid server from the Edge installation:
curl -u adminEmail:pword
Note: If you require three or more data centers, please contact your Sales Rep or Account
Executive to engage with Apigee.
Caution: If you need to update Apigee Edge as well as add a data center, you must perform
these actions in separate procedures. Complete one procedure before starting the other.
● OpenLDAP
Each data center has its own OpenLDAP server configured with replication
enabled. When you install the new data center, you must configure OpenLDAP
to use replication, and you must reconfigure the OpenLDAP server in the
existing data center to use replication.
● ZooKeeper
For the ZK_HOSTS property for both data centers, specify the IP addresses or
DNS names of all ZooKeeper nodes from both data centers, in the same order,
and mark any observer nodes with the ":observer" modifier. Nodes without the
:observer modifier are called "voters". You must have an odd number of
"voters" in your configuration.
In this topology, the ZooKeeper host on host 9 is the observer:
In the example configuration file shown below, node 9 is tagged with the
:observer modifier so that you have five voters: Nodes 1, 2, 3, 7, and 8.
For the ZK_CLIENT_HOSTS property for each data center, specify the IP
addresses or DNS names of only the ZooKeeper nodes in the data center, in
the same order, for all ZooKeeper nodes in the data center.
● Cassandra
All data centers must have the same number of Cassandra nodes.
For CASS_HOSTS for each data center, ensure that you specify all Cassandra
IP addresses (not DNS names) for both data centers. For each data center,
list the Cassandra nodes in that data center first, and list the Cassandra
nodes in the same order in every data center's configuration file.
All Cassandra nodes must have a suffix ':d,r', where d is the data center
number and r is the rack/availability zone. For example, 'ip:1,1' means data
center 1 and rack/availability zone 1, and 'ip:2,1' means data center 2 and
rack/availability zone 1.
For example, "192.168.124.201:1,1 192.168.124.202:1,1 192.168.124.203:1,1
192.168.124.204:2,1 192.168.124.205:2,1 192.168.124.206:2,1"
The first node in rack/availability zone 1 of each data center will be used as
the seed server. In this deployment model, Cassandra setup will look like this:
● Postgres
By default, Edge installs all Postgres nodes in master mode. However, when
you have multiple data centers, you configure Postgres nodes to use master-
standby replication so that if the master node fails, the standby node can
continue to serve traffic. Typically, you configure the master Postgres server
in one data center, and the standby server in the second data center.
If the existing data center is already configured to have two Postgres nodes
running in master/standby mode, then as part of this procedure, deregister the
existing standby node and replace it with a standby node in the new data
center.
The following table shows the before and after Postgres configuration for
both scenarios:
Before After
● Port requirements
You must ensure that the necessary ports are open between the nodes in the
two data centers. For a port diagram, see Port requirements.
# Datacenter 1
IP1=IPorDNSnameOfNode1
IP2=IPorDNSnameOfNode2
IP3=IPorDNSnameOfNode3
IP7=IPorDNSnameOfNode7
IP8=IPorDNSnameOfNode8
IP9=IPorDNSnameOfNode9
HOSTIP=$(hostname -i)
MSIP=$IP1
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=Secret123
LICENSE_FILE=/tmp/license.txt
USE_LDAP_REMOTE_HOST=n
LDAP_TYPE=2
LDAP_SID=1
LDAP_PEER=$IP7
APIGEE_LDAPPW=secret
MP_POD=gateway-1
REGION=dc-1
ZK_HOSTS="$IP1 $IP2 $IP3 $IP7 $IP8 $IP9:observer"
ZK_CLIENT_HOSTS="$IP1 $IP2 $IP3"
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP1:1,1 $IP2:1,1 $IP3:1,1 $IP7:2,1 $IP8:2,1 $IP9:2,1"
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
SMTPPASSWORD=smtppwd
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"

# Datacenter 2
IP1=IPorDNSnameOfNode1
IP2=IPorDNSnameOfNode2
IP3=IPorDNSnameOfNode3
IP7=IPorDNSnameOfNode7
IP8=IPorDNSnameOfNode8
IP9=IPorDNSnameOfNode9
HOSTIP=$(hostname -i)
MSIP=$IP7
ADMIN_EMAIL=opdk@google.com
APIGEE_ADMINPW=Secret123
LICENSE_FILE=/tmp/license.txt
USE_LDAP_REMOTE_HOST=n
LDAP_TYPE=2
LDAP_SID=2
LDAP_PEER=$IP1
APIGEE_LDAPPW=secret
MP_POD=gateway-2
REGION=dc-2
ZK_HOSTS="$IP1 $IP2 $IP3 $IP7 $IP8 $IP9:observer"
ZK_CLIENT_HOSTS="$IP7 $IP8 $IP9"
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP7:2,1 $IP8:2,1 $IP9:2,1 $IP1:1,1 $IP2:1,1 $IP3:1,1"
SKIP_SMTP=n
SMTPHOST=smtp.example.com
SMTPUSER=smtp@example.com
SMTPPASSWORD=smtppwd
SMTPSSL=n
SMTPPORT=25
SMTPMAILFROM="My Company <myco@company.com>"
Note: Cassandra uses the fixed data-center names dc-1, dc-2, and so on, irrespective of the
region names specified in the REGION property of the configuration file for the regions in which
you install the new nodes. In the following procedure, you must use Cassandra's data center
names (dc-1, dc-2, etc.) in the commands.
a. Edit the config file to set:
PG_MASTER=IPorDNSofDC1Master
PG_STANDBY=IPorDNSofDC2Standby
b. Enable replication on the new master:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup-replication-on-master -f configFile
c. On the standby node in dc-2, edit the config file to set:
PG_MASTER=IPorDNSofDC1Master
PG_STANDBY=IPorDNSofDC2Standby
d. On the standby node in dc-2, stop the server and then delete any
existing Postgres data:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql stop
rm -rf /opt/apigee/data/apigee-postgresql/
If necessary, you can back up this data before deleting it.
e. Configure the standby node in dc-2:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup-replication-on-standby -f configFile
22. On dc-1, update the analytics configuration and configure the organizations.
a. On the Management Server node of dc-1, get the UUID of the Postgres
node:
If you are upgrading your operating system, dc-2 could be the data center in which
you have installed the new version of the operating system (OS). However, installing
a new OS isn't required to decommission a data center.
Note: The decommissioning procedure described below is for two data centers. If you have more
than two data centers, Apigee recommends that you work with a Customer Success partner to
decommission a data center.
● Block all runtime and management traffic to the data center being
decommissioned and redirect them to other data centers.
● After decommissioning the data center, you will have reduced capacity in your
Apigee cluster. To make up for it, consider increasing capacity in the
remaining data centers or adding data centers after decommissioning.
● During the decommission process, there is a potential for analytics data loss,
depending on which analytics components are installed in the data center
being decommissioned. You can find more details in Add or remove Qpid
nodes.
● Before you decommission a data center, you should understand how all the
components are configured across all data centers, especially the OpenLDAP,
ZooKeeper, Cassandra, and Postgres servers. You should also take backups
of all components and their configurations.
● If you restart a component after deregistering it, without uninstalling the component
completely, the component will register itself with Edge again. You must uninstall the
component completely or isolate the node's network once it is decommissioned.
● Uninstalling a component will not remove data from /opt/apigee/data.
● During the decommissioning process you will be required to reconfigure multiple Edge
components. While an Edge component is being reconfigured, that specific component
on that specific node will be unavailable.
● Management Server: All the decommission steps are highly dependent on the
Management Server. If you have only one Management Server available, we
recommend that you install a new Management Server component on a data
center other than dc-1 before decommissioning the Management Server on
dc-1, and make sure that one of the Management Servers is always available.
● Router: Before decommissioning a Router, disable reachability of Routers by
blocking the port 15999. Ensure no runtime traffic is being directed at the
Routers being decommissioned.
● Cassandra and ZooKeeper: The sections below describe how to
decommission dc-1 in a two data center setup. If you have more than two
data centers, make sure to remove all references to the node being
decommissioned (dc-1 in this case) from all silent configuration files across
all the remaining data centers. For Cassandra nodes that are to be
decommissioned, drop those hosts from CASS_HOSTS. The remaining
Cassandra nodes should remain in the original ordering of CASS_HOSTS.
● Postgres: If you decommission the Postgres master, make sure to promote one
of the available standby nodes as the new Postgres master. While the Qpid
server keeps a buffer in the queue, if the Postgres master is unavailable for a
longer time, you risk losing analytics data.
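For multi-region silent configuration files, each CASS_HOSTS entry typically carries a `:dc,rack` suffix. A sketch of pruning the dc-1 entries while preserving the original ordering of the remaining nodes (all IPs and suffixes below are illustrative, not taken from a real topology):

```shell
# Example CASS_HOSTS for two data centers; the first three entries are dc-1.
CASS_HOSTS="10.0.1.1:1,1 10.0.1.2:1,1 10.0.1.3:1,1 10.0.2.1:2,1 10.0.2.2:2,1 10.0.2.3:2,1"
DC1_HOSTS="10.0.1.1 10.0.1.2 10.0.1.3"
NEW_CASS_HOSTS=""
for entry in $CASS_HOSTS; do
  host="${entry%%:*}"                      # strip the ":dc,rack" suffix
  case " $DC1_HOSTS " in
    *" $host "*) ;;                        # skip a decommissioned dc-1 node
    *) NEW_CASS_HOSTS="$NEW_CASS_HOSTS $entry" ;;
  esac
done
NEW_CASS_HOSTS="${NEW_CASS_HOSTS# }"       # drop the leading space
printf '%s\n' "$NEW_CASS_HOSTS"
```

The surviving dc-2 entries keep their original relative order, as the note above requires.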
Caution: You must remove and add Postgresql servers from the analytics group and the
consumer group at the same time. You might lose some analytics data if there is a
substantial delay between the two steps.
Suggestion: The Message Processor has a small queue where it keeps analytics data in
memory for a brief time, which you can use to help avoid loss of data. Before you make
changes to the analytics group, block port 5672 in the data center that is taking traffic,
and make sure that you quickly re-enable this port after making the changes, as there is a
chance of data overflow.
For example, to temporarily stop analytics traffic from Message Processors to the Qpid
server:
1. Enter:
iptables -A INPUT -p tcp --destination-port 5672 ! -s `hostname` -i eth0 -j DROP
2. Follow the axgroup change steps.
3. Remove the iptables rule after the axgroup changes:
iptables -F
Prerequisites
● Before decommissioning any component, we recommend that you perform a
complete backup of all nodes. Use the procedure for your current version of
Edge to perform the backup. For more information on backup, see Backup
and restore.
Note: If you have multiple Cassandra or ZooKeeper nodes, back them up one
at a time, as the backup process temporarily shuts down ZooKeeper.
● Ensure that Edge is up and running before decommissioning, using the
command:
● /opt/apigee/apigee-service/bin/apigee-all status
● Make sure that no runtime traffic is currently arriving at the data center you
are decommissioning.
If you install Edge for Private Cloud on multiple nodes, you should decommission
Edge components on those nodes in the following order:
1. Edge UI (edge-ui)
2. Management Server (edge-management-server)
3. OpenLDAP (apigee-openldap)
4. Router (edge-router)
5. Message Processor (edge-message-processor)
6. Qpid Server and Qpidd (edge-qpid-server and apigee-qpidd)
7. Postgres and PostgreSQL database (edge-postgres-server and apigee-postgresql)
8. ZooKeeper (apigee-zookeeper)
9. Cassandra (apigee-cassandra)
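The decommission order above can be expressed as a dry-run loop that only prints the uninstall command for each component; the apigee-service path assumes a default Private Cloud installation, and on a real node you would run only the commands for components that node actually hosts:

```shell
# Dry run: print the uninstall command for each component in the
# documented decommission order; remove "echo" to actually run them.
APIGEE_SERVICE=/opt/apigee/apigee-service/bin/apigee-service
for comp in edge-ui edge-management-server apigee-openldap edge-router \
            edge-message-processor edge-qpid-server apigee-qpidd \
            edge-postgres-server apigee-postgresql apigee-zookeeper \
            apigee-cassandra; do
  echo "$APIGEE_SERVICE $comp uninstall"
done
```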
Edge UI
To stop and uninstall the Edge UI component of dc-1, enter the following commands:
Management Server
Caution: If you have only one Management Server available, we recommend that you install a
new Management Server component on a data center other than dc-1 before decommissioning
the Management Server on dc-1, and make sure that one of the Management Servers is always
available.
4. -X GET "http://{MS_IP}:8080/v1/servers?pod=central&region=dc-1&type=management-server"
5. Deregister the server's type:
curl -u <AdminEmailID>:'<AdminPassword>' -X POST
http://{MS_IP}:8080/v1/servers \
6. -d "type=management-server&region=dc-1&pod=central&uuid=UUID&action=remove"
7. Delete the server. Note: If other components are also installed on this server,
deregister all of them first before deleting the UUID.
8. curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE
http://{MS_IP}:8080/v1/servers/{UUID}
9. Uninstall the Management Server component on dc-1:
10. /opt/apigee/apigee-service/bin/apigee-service edge-management-server
uninstall
OpenLDAP
This section explains how to decommission OpenLDAP on dc-1.
Note: If you have more than two data centers, see Setups with more than two data
centers below.
1. Back up the dc-1 OpenLDAP node by following the steps in How to back up.
2. Break the data replication between the two data centers, dc-1 and dc-2, by
executing the following steps in both data centers.
Note: If your setup has more than two data centers, you need to:
a. Identify which OpenLDAP node is replicating to the node being decommissioned.
b. Change the replication for that node to replicate to a different node to complete
the replication circle.
3. For example, suppose you have three data centers, in which node 1 replicates to node 2,
node 2 replicates to node 3, and node 3 replicates to node 1. If you are decommissioning
node 1, you need to break the replication on node 1 and also modify replication for node
3 to replicate to node 2.
a. Check the present state:
b. ldapsearch -H ldap://{HOST}:{PORT} -LLL -x -b "cn=config" -D
"cn=admin,cn=config" -w {credentials} -o ldif-wrap=no 'olcSyncRepl' |
grep olcSyncrepl
c. The output should be similar to the following:
d. olcSyncrepl: {0}rid=001 provider=ldap://{HOST}:{PORT}/
binddn="cn=manager,dc=apigee,dc=com" bindmethod=simple
credentials={credentials} searchbase="dc=apigee,dc=com" attrs="*,+"
type=refreshAndPersist retry="60 1 300 12 7200 +" timeout=1
e. Create a file break_repl.ldif containing the following commands:
dn: olcDatabase={2}bdb,cn=config
changetype: modify
delete: olcSyncRepl

dn: olcDatabase={2}bdb,cn=config
changetype: modify
delete: olcMirrorMode
g. Run the ldapmodify command:
h. ldapmodify -x -w {credentials} -D "cn=admin,cn=config" -H
"ldap://{HOST}:{PORT}/" -f path/to/file/break_repl.ldif
i. The output should be similar to the following:
dn: uid=readonly-user,ou=users,ou=global,dc=apigee,dc=com
objectClass: organizationalPerson
objectClass: person
objectClass: inetOrgPerson
objectClass: top
cn: readonly-user
sn: readonly-user
userPassword: {testPassword}
c. Add the user with the `ldapadd` command in dc-2:
d. ldapadd -H ldap://{HOST}:{PORT} -w {credentials} -D
"cn=manager,dc=apigee,dc=com" -f path/to/file/readonly-user.ldif
e. The output will be similar to:
f. adding new entry "uid=readonly-
user,ou=users,ou=global,dc=apigee,dc=com"
g. Search for the user in dc-1 to make sure the user is not replicated. If
the user is not present in dc-1, you can be sure that the two LDAP
servers are no longer replicating:
h. ldapsearch -H ldap://{HOST}:{PORT} -x -w {credentials} -D
"cn=manager,dc=apigee,dc=com" -b uid=readonly-
user,ou=users,ou=global,dc=apigee,dc=com -LLL
i. The output should be similar to the following:
Router
This section explains how to decommission a Router. See Remove a server for more
details about removing the Router.
Note: Before decommissioning the Router, disable reachability of Routers by blocking the port
15999. Ensure no runtime traffic is being directed at the Routers being decommissioned.
The following steps decommission the Router from dc-1. If there are multiple Router
nodes configured in dc-1, perform the steps on all Router nodes, one at a time.
Note: Here it is assumed that the Router's health-check port 15999 is configured in
your load balancer, and that blocking port 15999 will make the Router unreachable.
You may need root access to block the port.
1. Disable the reachability of routers by blocking port 15999, the health check
port. Ensure that runtime traffic is blocked on this data center:
2. iptables -A INPUT -i eth0 -p tcp --dport 15999 -j REJECT
3. Verify that the Router is no longer reachable:
4. curl -vvv -X GET http://{ROUTER_IP}:15999/v1/servers/self/reachable
5. The output should be similar to the following:
About to connect() to 10.126.0.160 port 15999 (#0)
Trying 10.126.0.160...
Connection refused
Failed connect to 10.126.0.160:15999; Connection refused
Closing connection 0
14. -d "type=router&region=dc-1&pod=gateway-1&uuid=UUID&action=remove"
15. Delete the server:
16. curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE
http://{MS_IP}:8080/v1/servers/UUID
17. Uninstall edge-router:
18. /opt/apigee/apigee-service/bin/apigee-service edge-router uninstall
19. See Remove a server.
20. Flush the iptables rules to unblock port 15999:
21. iptables -F
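The reachability check in the steps above can be wrapped in a small test. In this sketch, 127.0.0.1 stands in for the Router's IP so the connection is refused immediately; on a real node, substitute the actual Router address:

```shell
# Sketch: after blocking port 15999, a refused connection confirms the
# Router is out of rotation. 127.0.0.1 is a placeholder for ROUTER_IP.
ROUTER_IP=127.0.0.1
if curl -s --max-time 2 -o /dev/null "http://${ROUTER_IP}:15999/v1/servers/self/reachable"; then
  STATUS="reachable"
else
  STATUS="unreachable"
fi
echo "router is $STATUS"
```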
Message Processor
This section describes how to decommission the Message Processor from dc-1. See
Remove a server for more details about removing the Message Processor.
Since we are assuming that dc-1 has a 12-node clustered installation, there are two
Message Processor nodes configured in dc-1. Perform the following commands in
both the nodes.
5. -d "type=message-processor&region=dc-1&pod=gateway-1&uuid=UUID&action=remove"
6. Disassociate an environment from the Message Processor.
Note: You need to remove the bindings on each org/env that reference the
Message Processor UUID.
curl -H "Content-Type:application/x-www-form-urlencoded" -u <AdminEmailID>:'<AdminPassword>' \
-X POST http://{MS_IP}:8080/v1/organizations/{ORG}/environments/{ENV}/servers \
7. -d "action=remove&uuid=UUID"
8. Deregister the server's type:
9. curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://MS_IP:8080/v1/servers \
-d "type=message-processor&region=dc-1&pod=gateway-1&uuid=UUID&action=remove"
10. Uninstall the Message Processor:
11. /opt/apigee/apigee-service/bin/apigee-service edge-message-processor
uninstall
12. Deregister the server:
13. curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE
http://{MS_IP}:8080/v1/servers/UUID
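The per-environment unbinding above has to be repeated for every org/env pair the Message Processor serves. A dry-run sketch of that loop (the org/env names, IP, and UUID below are placeholders):

```shell
# Dry run: print the unbinding call for each org/env pair; the pairs,
# MS_IP, and UUID are example values, not real ones.
MS_IP=192.0.2.10
UUID=276bc250-7dd0-46a5-a583-fd11eba786f8
for pair in myorg:test myorg:prod; do
  ORG=${pair%%:*}
  ENV=${pair##*:}
  echo "curl -u ADMIN:PASS -X POST http://${MS_IP}:8080/v1/organizations/${ORG}/environments/${ENV}/servers -d 'action=remove&uuid=${UUID}'"
done
```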
9. -X DELETE "http://{MS_IP}:8080/v1/analytics/groups/ax/{ax_group}/servers?
uuid={QPID_UUID}&type=qpid-server"
10. Deregister the Qpid server from the Edge installation:
curl -u <AdminEmailID>:'<AdminPassword>' -X POST
http://{MS_IP}:8080/v1/servers \
11. -d "type=qpid-server&region=dc-1&pod=central&uuid={QPID_UUID}&action=remove"
12. Remove the Qpid server from the Edge installation:
13. curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE
http://{MS_IP}:8080/v1/servers/UUID
14. Restart all edge-qpid-server components on all nodes to make sure the
change is picked up by those components:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server wait_for_ready
15. Uninstall edge-qpid-server and apigee-qpidd:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server uninstall
Suggestion: The Message Processor has a small queue where it keeps analytics data in memory
for a brief time, which you can use to help avoid loss of data. Before you make changes to the
analytics group, block port 5672 in the data center that is taking traffic, and make sure that you
quickly re-enable this port after making the changes, as there is a chance of data overflow.
For example, to temporarily stop analytics traffic from Message Processors to the Qpid server:
1. Enter:
iptables -A INPUT -p tcp --destination-port 5672 ! -s `hostname` -i eth0 -j DROP
2. Follow the axgroup change steps.
3. Remove the iptables rule after the axgroup changes:
iptables -F
Note: If you decommission the Postgres master, make sure to promote one of the
available standby nodes as the new Postgres master. While the Qpid queues buffer
data, if the Postgres master is unavailable for a long time, you risk losing analytics
data.
1. Back up the dc-1 Postgres master node by following the instructions in the
following links:
a. Backup and restore
b. How to back up
2. Get the UUIDs of the Postgres servers, as described in Get UUIDs.
3. On dc-1, stop edge-postgres-server and apigee-postgresql on the
current master:
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server stop
PG_MASTER=IP_or_DNS_of_new_PG_MASTER
b. PG_STANDBY=IP_or_DNS_of_PG_STANDBY
c. Enable replication on the new master:
d. /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql
setup-replication-on-master -f configFile
8. If there is more than one standby node remaining:
a. Add the following configuration in
/opt/apigee/customer/application/postgresql.properties:
b. conf_pg_hba_replication.connection=host replication apigee
standby_1_ip/32 trust \n host replication apigee standby_2_ip/32 trust
c. Ensure the file
/opt/apigee/customer/application/postgresql.properties is owned by
apigee user:
d. chown apigee:apigee
/opt/apigee/customer/application/postgresql.properties
e. Restart apigee-postgresql:
f. apigee-service apigee-postgresql restart
g. To update replication settings on a standby node:
i. Modify the configuration file /opt/silent.conf and update
the PG_MASTER field with the IP address of the new Postgres
master.
ii. Remove any old Postgres data with the following command:
iii. rm -rf /opt/apigee/data/apigee-postgresql/
iv. Set up replication on the standby node:
v. /opt/apigee/apigee-service/bin/apigee-service apigee-
postgresql setup-replication-on-standby -f configFile
h. Verify that Postgres master is set up correctly by entering the following
command in dc-2:
i. /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql
postgres-check-master
j. Remove and add Postgresql servers from the analytics group and the
consumer group. Caution: Perform the next two steps together, as there may
be a loss of analytics data between the two steps.
i. Remove the old Postgres server from the analytics group
following the instructions in Remove a Postgres server from an
analytics group.
ii. Add a new postgres server to the analytics group following the
instructions in Add an existing Postgres server to an analytics
group.
k. Deregister the old Postgres server from dc-1:
l. -d "type=postgres-server&region=dc-1&pod=analytics&uuid=UUID&action=remove"
m. Delete the old Postgres server from dc-1:
n. curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE
http://{MS_IP}:8080/v1/servers/UUID
o. The old Postgres master is now safe to decommission. Uninstall
edge-postgres-server and apigee-postgresql:
p. /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql
uninstall
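The replication entry built in step 8a above can be generated from a list of remaining standby IPs; a sketch (the IPs are placeholders, and the literal `\n` separator is written exactly as the property expects it):

```shell
# Build the conf_pg_hba_replication.connection value for the remaining
# standby nodes; 10.0.2.11 and 10.0.2.12 are example addresses.
STANDBYS="10.0.2.11 10.0.2.12"
LINE="conf_pg_hba_replication.connection="
SEP=""
for ip in $STANDBYS; do
  LINE="${LINE}${SEP}host replication apigee ${ip}/32 trust"
  SEP=' \n '                              # literal backslash-n separator
done
printf '%s\n' "$LINE"
```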
9. Decommissioning Postgres standby
Note: The documentation for a 12-node clustered installation shows the dc-1
postgresql node as master, but for convenience, in this section it is assumed
that the dc-1 postgresql node is standby and the dc-2 postgresql node is
master.
To decommission Postgres standby, do the following steps:
a. Get the UUIDs of the Postgres servers, following the instructions in Get
UUIDs.
b. Stop apigee-postgresql on the current standby node in dc-1:
f. -d "type=postgres-server&region=dc-1&pod=analytics&uuid=UUID&action=remove"
g. Delete the old Postgres server from dc-1:
h. curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE
http://{MS_IP}:8080/v1/servers/UUID
i. The old Postgres standby is now safe to decommission. Uninstall
edge-postgres-server and apigee-postgresql:
j. /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql
uninstall
10. ZooKeeper and Cassandra
This section explains how to decommission ZooKeeper and Cassandra
servers in a two data center setup.
If you have more than two data centers, make sure to remove all references to
the node being decommissioned (dc-1 in this case) from all silent
configuration files across all the remaining data centers. For Cassandra nodes
that are to be decommissioned, drop those hosts from CASS_HOSTS. The
remaining Cassandra nodes should remain in the original ordering of
CASS_HOSTS.
Note on ZooKeeper: You must maintain a quorum of voter nodes while
modifying the ZK_HOSTS property in the configuration file, to ensure that the
ZooKeeper ensemble stays functional. You must have an odd number of voter
nodes in your configuration. For more information, see Apache ZooKeeper
maintenance tasks.
Caution: If you decommission a data center that has a leader ZooKeeper node, there will
be downtime.
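The quorum rule above is simple majority arithmetic: an ensemble of N voters needs a majority to function, so it tolerates (N-1)/2 voter failures, and observer nodes are not counted. A sketch:

```shell
# Majority-quorum arithmetic for a ZooKeeper ensemble of 5 voters:
# it can lose at most (5 - 1) / 2 = 2 voters and stay functional.
VOTERS=5
TOLERATED=$(( (VOTERS - 1) / 2 ))
echo "voters=$VOTERS tolerated_failures=$TOLERATED"
```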
To decommission ZooKeeper and Cassandra servers:
a. Back up the dc-1 Cassandra and ZooKeeper nodes by following the
instructions in the following links:
i. Backup and restore
ii. How to back up
b. Note: If you have multiple ZooKeeper nodes, back them up one at a time. The
backup process temporarily shuts down ZooKeeper.
c. List the UUIDs of ZooKeeper and Cassandra servers in the data center
where Cassandra nodes are about to be decommissioned.
d. apigee-adminapi.sh servers list -r dc-1 -p central -t application-
datastore --admin <AdminEmailID> --pwd '<AdminPassword>' --host
localhost
e. Deregister the server's type:
f. curl -u <AdminEmailID>:'<AdminPassword>' -X POST
http://MS_IP:8080/v1/servers -d "type=cache-datastore&type=user-
settings-datastore&type=scheduler-datastore&type=audit-
datastore&type=apimodel-datastore&type=application-
datastore&type=edgenotification-datastore&type=identityzone-
datastore&type=auth-datastore&region=dc-1&pod=central&uuid=UUID&action=remove"
g. Deregister the server:
h. curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE
http://MS_IP:8080/v1/servers/UUID
i. Update the configuration file with the IPs of decommissioned nodes
removed from ZK_HOSTS and CASS_HOSTS. Notes:
i. You must maintain a quorum of voter nodes while modifying the
ZK_HOSTS property in the configuration file, to ensure the ZooKeeper
ensemble stays functional. You must have an odd number of voter nodes
in your configuration. See About ZooKeeper and Edge.
ii. For Cassandra nodes that are to be decommissioned, drop those hosts
from CASS_HOSTS. The remaining Cassandra nodes should remain in the
original ordering of CASS_HOSTS.
iii. ZooKeeper observer nodes do not participate in the voting and are not
counted when determining a quorum.
j. Example: Suppose you have the IPs $IP1 $IP2 $IP3 in dc-1 and
$IP4 $IP5 $IP6 in dc-2, and you are decommissioning dc-1. Then
you should remove the IPs $IP1 $IP2 $IP3 from the configuration
files.
i. Existing configuration file entries:
curl -v -u <AdminEmailID>:<AdminPassword> \
iv. -X GET
"http://MS_IP:8080/v1/regions/dc-1/pods/gateway-1/servers"
v. If this command doesn't return any UUIDs, the previous steps
have removed all the bindings, and you can skip the next step.
Otherwise, perform the next step.
vi. Remove all the server bindings for the UUIDs obtained in the
previous step:
vii. curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE
http://MS_IP:8080/v1/servers/UUID
viii. Disassociate the org from the pod:
ix. curl -v -u <AdminEmailID>:<AdminPassword>
"http://MS_IP:8080/v1/o/ORG/pods" -d
"action=remove&region=dc-1&pod=gateway-1" -H "Content-Type:
application/x-www-form-urlencoded" -X POST
b. Delete the pods:
c. curl -v -u <AdminEmailID>:<AdminPassword>
"http://MS_IP:8080/v1/regions/dc-1/pods/gateway-1" -X DELETE
d. Delete the region:
e. curl -v -u <AdminEmailID>:<AdminPassword>
"http://MS_IP:8080/v1/regions/dc-1" -X DELETE
12. Note: If you missed any of the server-deletion steps, the step above returns
an error message indicating that a particular server still exists in the pod. In
that case, delete the servers by following the troubleshooting steps below,
customizing the types in the curl command as needed.
At this point you have completed the decommissioning of dc-1.
Appendix
Troubleshooting
If, after performing the previous steps, there are still servers in some pods,
perform the following steps to deregister and delete the servers. Note:
Change the types and pod as necessary.
a. Get the UUIDs using the following command:
b. apigee-adminapi.sh servers list -r dc-1 -p POD -t TYPE --admin
<AdminEmailID> --pwd '<AdminPassword>' --host localhost
c. Deregister the server's type:
d. curl -u <AdminEmailID>:'<AdminPassword>' -X POST
http://MS_IP:8080/v1/servers -d "type=TYPE&region=dc-
1&pod=POD&uuid=UUID&action=remove"
e. Delete the servers one by one:
f. curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE
http://MS_IP:8080/v1/servers/UUID
13. Validation
You can validate the decommissioning using the following commands.
Management Server
Run the following commands from the Management Servers on all the
regions.
curl -v -u <AdminEmailID>:'<AdminPassword>' "http://MS_IP:8080/v1/servers?pod=central&region=dc-1"
curl -v -u <AdminEmailID>:'<AdminPassword>' "http://MS_IP:8080/v1/servers?pod=gateway&region=dc-1"
OPERATIONS
Starting, stopping, restarting, and
checking the status of Apigee Edge
Stop/start order
The order of stopping and starting the subsystems is important. Start and stop
scripts are provided that take care of that for you for Edge components running on
the same node.
Stop order
If you install Edge on multiple nodes, then you should stop Edge components on
those nodes in the following stop order:
Start order
If you install Edge on multiple nodes, then you should start Edge components on
those nodes in the following start order:
1. Cassandra (apigee-cassandra)
2. OpenLDAP (apigee-openldap)
3. PostgreSQL database (apigee-postgresql)
4. Qpidd (apigee-qpidd)
5. ZooKeeper (apigee-zookeeper)
6. Management Server (edge-management-server)
7. Message Processor (edge-message-processor)
8. Postgres Server (edge-postgres-server)
9. Qpid Server (edge-qpid-server)
10. Router (edge-router)
11. Edge UI: edge-ui (classic) or edge-management-ui (new)
12. Edge SSO (apigee-sso)
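The start order above can likewise be expressed as a dry-run loop that only prints the start command for each component (the apigee-service path assumes a default installation; run only the commands for components the node actually hosts):

```shell
# Dry run: print the start command for each component in the documented
# start order; remove "echo" to actually run them.
APIGEE_SERVICE=/opt/apigee/apigee-service/bin/apigee-service
for comp in apigee-cassandra apigee-openldap apigee-postgresql apigee-qpidd \
            apigee-zookeeper edge-management-server edge-message-processor \
            edge-postgres-server edge-qpid-server edge-router edge-ui \
            apigee-sso; do
  echo "$APIGEE_SERVICE $comp start"
done
```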
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
For example, to start, stop, or restart the Management Server, run the following
commands:
You can also check the status of an individual Apigee component by using the
following command:
For example:
/opt/apigee/apigee-service/bin/apigee-all enable_autostart
/opt/apigee/apigee-service/bin/apigee-all disable_autostart
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
The script only affects the node on which you run it. If you want to configure all
nodes for autostart, run the script on all nodes.
Note: If you run into issues where the OpenLDAP server does not start automatically on reboot,
try disabling SELinux or setting it to permissive mode.
Troubleshooting autostart
If you configure autostart, and Edge encounters issues with starting the OpenLDAP
server, you can try disabling SELinux or setting it to permissive mode on all nodes.
To configure SELinux:
1. Overwrite the existing license file with the new license file:
2. cp customer_license_file /opt/apigee/customer/conf/license.txt
3. Change ownership of the new license file to the "apigee" user:
chown apigee:apigee /opt/apigee/customer/conf/license.txt
Specify a different user if you are running Edge as a user other than "apigee".
Where ms_IP is the IP address or DNS name of the Management Server, and
podName is one of the following:
● gateway
● central
● analytics
[{
"externalHostName" : "localhost",
"externalIP" : "192.168.1.11",
"internalHostName" : "localhost",
"internalIP" : "192.168.1.11",
"isUp" : true,
"pod" : "gateway",
"reachable" : true,
"region" : "dc-1",
"tags" : {
"property" : [ {
"name" : "jmx.rmi.port",
"value" : "1101"
}, ... ]
},
"type" : [ "message-processor" ],
"uUID" : "276bc250-7dd0-46a5-a583-fd11eba786f8"
},
{
"internalIP" : "192.168.1.11",
"isUp" : true,
"pod" : "gateway",
"reachable" : true,
"region" : "dc-1",
"tags" : {
"property" : [ ]
},
"type" : [ "dc-datastore", "management-server", "cache-datastore", "keyvaluemap-
datastore", "counter-datastore", "kms-datastore" ],
"uUID" : "13cee956-d3a7-4577-8f0f-1694564179e4"
},
{
"externalHostName" : "localhost",
"externalIP" : "192.168.1.11",
"internalHostName" : "localhost",
"internalIP" : "192.168.1.11",
"isUp" : true,
"pod" : "gateway",
"reachable" : true,
"region" : "dc-1",
"tags" : {
"property" : [ {
"name" : "jmx.rmi.port",
"value" : "1100"
}, ... ]
},
"type" : [ "router" ],
"uUID" : "de8a0200-e405-43a3-a5f9-eabafdd990e2"
}]
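Several procedures in this guide start by collecting server UUIDs. The `uUID` fields can be pulled out of a saved copy of a response like the one above; a sketch using sed (the file path and sample content are illustrative):

```shell
# Write a minimal sample of the response above to a scratch file, then
# extract every "uUID" value from it.
cat > /tmp/servers.json <<'EOF'
{ "type" : [ "message-processor" ],
  "uUID" : "276bc250-7dd0-46a5-a583-fd11eba786f8" }
EOF
UUIDS=$(sed -n 's/.*"uUID" : "\([^"]*\)".*/\1/p' /tmp/servers.json)
echo "$UUIDS"
```

A JSON-aware tool such as jq would be more robust if it is available on the node.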
For more information about pods, see About planets, regions, pods, organizations,
environments and virtual hosts.
App IDs are automatically added to an OAuth access token. Therefore, after you use
the procedure below to enable token access for an organization, you can access
tokens by app ID.
To retrieve and revoke OAuth 2.0 access tokens by end user ID, an end user ID must
be present in the access token. The procedure below describes how to add an end
user ID to an existing token or to new tokens.
By default, when Edge generates an OAuth 2.0 access token, the token has the
format:
{
"issued_at" : "1421847736581",
"application_name" : "a68d01f8-b15c-4be3-b800-ceae8c456f5a",
"scope" : "READ",
"status" : "approved",
"api_product_list" : "[PremiumWeatherAPI]",
"expires_in" : "3599",
"developer.email" : "tesla@weathersample.com",
"organization_id" : "0",
"token_type" : "BearerToken",
"client_id" : "k3nJyFJIA3p62DWOkLO6OJNi87GYXFmP",
"access_token" : "7S22UqXGJDTuUADGzJzjXzXSaGJL",
"organization_name" : "myorg",
"refresh_token_expires_in" : "0",
"refresh_count" : "0"
}
● The application_name field contains the UUID of the app associated with
the token. If you enable retrieval and revocation of OAuth 2.0 access tokens
by app ID, this is the app ID you use.
● The access_token field contains the OAuth 2.0 access token value.
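For instance, the two fields can be extracted from a token response like the example above (the values below are copied from that example):

```shell
# A trimmed token response; extract the app ID and the token value.
TOKEN_JSON='{"application_name" : "a68d01f8-b15c-4be3-b800-ceae8c456f5a", "access_token" : "7S22UqXGJDTuUADGzJzjXzXSaGJL"}'
APP_ID=$(printf '%s' "$TOKEN_JSON" | sed -n 's/.*"application_name" : "\([^"]*\)".*/\1/p')
TOKEN=$(printf '%s' "$TOKEN_JSON" | sed -n 's/.*"access_token" : "\([^"]*\)".*/\1/p')
echo "$APP_ID $TOKEN"
```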
To enable retrieval and revocation of OAuth 2.0 access tokens by end user ID,
configure the OAuth 2.0 policy to include the user ID in the token, as described in the
procedure below.
The end user ID is the string that Edge uses as the developer ID, not the developer's
email address. You can determine the developer's ID from the developer's email
address by using the Get Developer API call.
After you configure Edge to include the end user ID in the token, it is included as the
app_enduser field, as shown below:
{
"issued_at" : "1421847736581",
"application_name" : "a68d01f8-b15c-4be3-b800-ceae8c456f5a",
"scope" : "READ",
"app_enduser" : "6ZG094fgnjNf02EK",
"status" : "approved",
"api_product_list" : "[PremiumWeatherAPI]",
"expires_in" : "3599",
"developer.email" : "tesla@weathersample.com",
"organization_id" : "0",
"token_type" : "BearerToken",
"client_id" : "k3nJyFJIA3p62DWOkLO6OJNi87GYXFmP",
"access_token" : "7S22UqXGJDTuUADGzJzjXzXSaGJL",
"organization_name" : "myorg",
"refresh_token_expires_in" : "0",
"refresh_count" : "0"
}
You must enable token access for each organization separately. Call the PUT API
below for each organization on which you want to enable the retrieval and revocation
of OAuth 2.0 access tokens by end user ID or app ID.
The user making the following call must be in the role orgadmin or opsadmin for the
organization. Replace the values with your org-specific values:
https://management_server_IP:8080/v1/organizations/org_name/oauth2
The orgadmin role should already have the necessary permissions. For the
opsadmin role for the /oauth2 resource, the permissions should look like this:
<ResourcePermission path="/oauth2">
<Permissions>
<Permission>get</Permission>
<Permission>put</Permission>
</Permissions>
</ResourcePermission>
You can use the Get Permission for a Single Resource API call to see which roles
have permissions for the /oauth2 resource.
Based on the response, you can use the Add Permissions for Resource to a Role and
Delete Permission for Resource API calls to make any necessary adjustments to
the /oauth2 resource permissions.
Use the following curl command to give the opsadmin role get and put
permissions for the /oauth2 resource. Replace the values with your org-specific
values:
Use the following curl command to revoke get and put permissions for the
/oauth2 resource from roles other than orgadmin and opsadmin. Replace the
values with your org-specific values:
conf_keymanagement_oauth_max_search_limit = 100
This property sets the page size used when fetching tokens. Apigee recommends a
value of 100, but you can set it as you see fit.
On a new installation, the property should already be set to 100. If you have to
change the value of this property, restart the Management Server and Message
Processor by using the commands:
Step 4: Configure the OAuth 2.0 policy that generates tokens to include end
user ID
Note: This task is optional. You only need to perform it if you want to retrieve and revoke OAuth
2.0 access tokens by end user ID.
Configure the OAuth 2.0 policy used to generate access tokens to include the end
user ID in the token. By including end user IDs in the access token, you can retrieve
and revoke tokens by ID.
To configure the policy to include an end user ID in an access token, the request that
creates the access token must include the end user ID and you must specify the
input variable that contains the end user ID.
You can then use the following curl command to generate the OAuth 2.0 access
token, passing the user ID as the appuserID header:
curl -H "appuserID:6ZG094fgnjNf02EK" \
https://myorg-test.apigee.net/oauth/client_credential/accesstoken?
grant_type=client_credentials \
-X POST -d
'client_id=k3nJyFJIA3p62TKIkLO6OJNXFmP&client_secret=gk5K5lIp943AY4'
In this example, the appuserID is passed as a request header. You can pass
information as part of a request in many ways. For example, as an alternative, you
can:
As a result, Apigee recommends that larger deployments use HTTP rather than RPC
for deployment.
Property Description
In addition to setting these properties in the body of the message, you must set the
Content-Type header to application/json or application/xml.
The following example calls the Update organization properties API with a JSON
message body.
Note: When updating properties, you must pass all existing properties to the API, even if they are
not being changed. If you omit existing properties from the payload in this API call, the properties
are removed. To get the current list of properties for the environment, use the Get organization
API.
curl -u admin_email:admin_password \
"http://management_server_IP:8080/v1/organizations/org_name" \
-X POST -H "Content-Type: application/json" -d
'{
"properties" : {
"property" : [
{
"name" : "allow.deployment.over.http",
"value" : "true"
},
{
"name" : "use.http.for.configuration",
"value" : "always"
}]
}
}'
Note: After enabling HTTP deployment, wait about 10 minutes for the changes to take effect.
Before the changes take effect, all deployment events continue to use RPC. After that,
deployment events should use HTTP.
To enable HTTP deployment on all API proxies across all your organizations, you
must update each organization as described above.
The following tools are used to communicate with, or maintain various components
of, the Apigee system.
import socket
import sys
import time

# Send a ZooKeeper four-letter command (for example, "ruok") to the given
# address and port, then print the response.
if len(sys.argv) != 4:
    print("Usage: %s address port 4-letter-cmd" % sys.argv[0])
else:
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect((sys.argv[1], int(sys.argv[2])))
    c.send(sys.argv[3].encode())
    time.sleep(0.1)
    print(c.recv(512).decode())
    c.close()
The total number of replicas for a keyspace across a Cassandra cluster is referred to
as the keyspace's replication factor. A replication factor of one means that there is
only one copy of each row in the Cassandra cluster. A replication factor of two
means there are two copies of each row, where each copy is on a different node. All
replicas are equally important; there is no primary or master replica.
NOTE
Apigee does not support changing the replication factor because it:
● May or may not increase overall availability when more than one node fails
● Might result in adverse effects on the overall ecosystem with increased latencies
In a production system with three or more Cassandra nodes in each data center, the
default replication factor for an Edge keyspace is three. As a general rule, the
replication factor should not exceed the number of Cassandra nodes in the cluster.
Use the following procedure to view the Cassandra schema, which shows the
replication factor for each Edge keyspace:
You can see that for data center 1, dc-1, the default replication factor for the kms
keyspace is three for an installation with three Cassandra nodes.
Note: For a smaller Cassandra cluster, such as a single node Cassandra cluster used in an Edge
all-in-one installation, the replication factor is one. However, an Edge production cluster should
always have three or more Cassandra nodes.
If you add additional Cassandra nodes to the cluster, the default replication factor is
not affected.
For example, if you increase the number of Cassandra nodes to six, but leave the
replication factor at three, you do not ensure that all Cassandra nodes have a copy of
all the data. If a node goes down, a higher replication factor means a higher
probability that the data on the node exists on one of the remaining nodes. The
downside of a higher replication factor is an increased latency on data writes.
When connecting to Cassandra for read and write operations, Message Processor
and Management Server nodes typically use the Cassandra value of LOCAL_QUORUM
to specify the consistency level for a keyspace. However, some keyspaces are
defined to use a consistency level of one.
LOCAL_QUORUM = floor(replication_factor / 2) + 1
With LOCAL_QUORUM = 2, at least two of the three Cassandra nodes in the data
center must respond to a read/write operation for the operation to succeed. For a
three node Cassandra cluster, the cluster could therefore tolerate one node being
down per data center.
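The arithmetic above can be restated in a short Python sketch (not part of Edge; it simply illustrates the formula and the resulting fault tolerance):

```python
def local_quorum(replication_factor):
    # LOCAL_QUORUM = floor(replication_factor / 2) + 1
    return replication_factor // 2 + 1

def tolerable_node_failures(replication_factor):
    # Nodes that can be down while reads/writes still succeed
    return replication_factor - local_quorum(replication_factor)
```

With the default replication factor of 3, local_quorum(3) is 2, so one Cassandra node per data center can be down without failing reads or writes.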
To see the consistency level used by the Edge Message Processor or Management
Server nodes:
If you add additional Cassandra nodes to the cluster, the consistency level is not
affected.
Apache Cassandra maintenance tasks
This section describes periodic maintenance tasks for Cassandra.
Anti-entropy maintenance
The Apache Cassandra ring nodes require periodic maintenance to ensure
consistency across all nodes. To perform this maintenance, use the following
command:
● Ref: https://support.datastax.com/hc/en-us/articles/204226329-How-to-check-if-a-scheduled-nodetool-repair-ran-successfully
● Run during periods of relatively low workload (the tool imposes a significant
load on the system).
● Run at least every seven days in order to eliminate problems related to
Cassandra's "forgotten deletes".
● Run on different nodes on different days, or schedule it so that there are
several hours between running it on each node.
● Use the -pr option (partitioner range) to specify the primary partitioner range
of the node only.
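The scheduling guidance above (different nodes on different days, every node within seven days) can be sketched as a simple day assignment; the node names below are hypothetical, and the actual repair run is still the nodetool command:

```python
def repair_schedule(nodes, period_days=7):
    """Assign each Cassandra node a distinct day within the repair
    period for its `nodetool repair -pr` run. Assumes the node
    count does not exceed the period length."""
    if len(nodes) > period_days:
        raise ValueError("more nodes than days in the repair period")
    return {node: day for day, node in enumerate(nodes)}

schedule = repair_schedule(["cass-1", "cass-2", "cass-3"])
```

Each node then gets its own repair day, keeping several hours (here, a full day) between runs.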
If you enabled JMX authentication for Cassandra, you must include the username
and password when you invoke nodetool. For example:
You can also run the following command to check the supported options of
apigee_repair:
● Forgotten deletes
● Data consistency
● nodetool repair
NOTE: Apigee does not recommend adding, moving or removing Cassandra nodes without
contacting Apigee Edge Support. The Apigee system tracks Cassandra nodes using their IP
address, and performing ring maintenance without performing corresponding updates on the
Apigee environment metadata will cause undesirable results.
If you should find that Cassandra log files are taking up excessive space, you can
modify the amount of space allocated for log files by editing the log4j settings.
1. Edit /opt/apigee/customer/application/cassandra.properties
to set the following properties. If that file does not exist, create it:
conf_logback_maxfilesize=20MB # max file size
Cassandra automatically performs the following operations to reduce its own disk
utilization:
All new installations of Apigee 4.51.00 or later will automatically set up Cassandra
with LeveledCompactionStrategy. However, if you are using an older version of
Apigee or have upgraded to Apigee 4.51.00 from an older version, your version may
still be using Cassandra with SizeTieredCompactionStrategy. To find out
which compaction strategy your version of Cassandra is using, see the section
Check compaction strategy.
Preparation
Check the existing compaction strategy
To check the existing compaction strategy on column families, follow the
instructions in Check compaction strategy. If the compaction strategy is already
LeveledCompactionStrategy, you don't need to follow the remaining
instructions on this page.
Backup
Compaction throughput
Run the following nodetool command on each of the nodes to cap compaction
throughput at 128 MB/s on all Cassandra (C*) nodes:
Staggered runs
You might not be able to complete the change of all nodes within a day, especially if
you operate large Cassandra clusters, as indices need to be rebuilt on each node one
by one. You could change the compaction strategy of one schema or one column
family (table) at a time. For this, alter the column family to change its compaction
strategy and then rebuild all indices on the table (if any) on all the nodes. Then repeat
the above procedure for each table or keyspace. Such runs for one table or one
keyspace can be broken down to be run across different days.
Run the following Cassandra Query Language (CQL) commands on any one
Cassandra node, changing the strategy for one keyspace at a time. You can run CQL
statements at the cql prompt. To invoke the cql prompt:
/opt/apigee/apigee-cassandra/bin/cqlsh `hostname -i`
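As a sketch of the per-keyspace approach, the ALTER statements can be generated ahead of time and then run one table at a time at the cql prompt. The table name below is hypothetical; the compaction clause is standard CQL:

```python
def alter_compaction_statements(tables):
    """Build one CQL ALTER per table to switch it to
    LeveledCompactionStrategy."""
    template = ("ALTER TABLE {table} WITH compaction = "
                "{{'class': 'LeveledCompactionStrategy'}};")
    return [template.format(table=t) for t in tables]

statements = alter_compaction_statements(["kms.keys"])
```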
Rebuild indices
You need to execute this step after the compaction strategy change. Run the
following nodetool commands on each Cassandra node, one node at a time.
Notes:
Roll back
In case you need to roll back, you can follow one of the options below:
Run the following CQL commands on any one Cassandra node, changing the strategy
for one keyspace at a time. You can run CQL statements at the cql prompt. To invoke
the cql prompt:
/opt/apigee/apigee-cassandra/bin/cqlsh `hostname -i`
After all the column families have been altered, if you have already rebuilt indices
while changing compaction strategy to LeveledCompactionStrategy, you will
need to rebuild indices again. Follow the same steps as earlier for rebuilding all
indices. If you didn’t rebuild indices earlier, you don’t need to rebuild indices during
the rollback.
To perform a full data restore, see the instructions in Restore from backup.
In Edge, ZooKeeper nodes contain configuration data about the location and
configuration of the various Edge components, and notify the different
components of configuration changes. All supported Edge topologies for a
production system use at least three ZooKeeper nodes.
Note: Three ZooKeeper nodes is the minimum number required. You can add additional
ZooKeeper nodes as necessary based on your system requirements.
Use the ZK_HOSTS and ZK_CLIENT_HOSTS properties in the Edge config file to
specify the ZooKeeper nodes. For example:
Where:
By default, all ZooKeeper nodes are designated as voter nodes. That means the
nodes all participate in electing the ZooKeeper leader. You can include the
:observer modifier with ZK_HOSTS to signify that the node is an observer node,
and not a voter. An observer node does not participate in the election of the leader.
You typically specify the :observer modifier when creating multiple Edge Data
Centers, or when a single Data Center has a large number of ZooKeeper nodes. For
example, in a 12-host Edge installation with two Data Centers, ZooKeeper on node 9
in Data Center 2 is the observer:
You then use the following settings in your config file for Data Center 1:
The following sections contain more details about leader, voter, and observer nodes
and describe the considerations of adding voter and observer nodes.
If there is not a quorum of voter nodes available, no leader can be elected. In this
scenario, ZooKeeper cannot serve requests. That means you cannot make a request
to the Edge Management Server, process Management API requests, or log in to the
Edge UI until the quorum is restored.
Note: ZooKeeper observer nodes do not participate in the voting and are not counted when
determining a quorum.
● You installed three ZooKeeper nodes per Data Center, for a total of six nodes
● Data Center 1 has three voter nodes
● Data Center 2 has two voter nodes and one observer node
● The quorum is based on the five voters across both Data Centers, and is
therefore three functioning voter nodes
● If only two or fewer voter nodes are available then the ZooKeeper ensemble
cannot function
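The six-node example above can be checked with a short sketch; observer nodes are excluded before the quorum calculation:

```python
def zk_quorum(voter_count):
    # A ZooKeeper quorum is a strict majority of voter nodes;
    # observer nodes are never counted.
    return voter_count // 2 + 1

def ensemble_functional(voter_count, voters_down):
    # The ensemble can serve requests only while a quorum of
    # voters remains available.
    return voter_count - voters_down >= zk_quorum(voter_count)
```

With five voters across both Data Centers, the quorum is three, so the ensemble survives two voter failures but not three.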
Your system requirements might require that you add additional ZooKeeper nodes to
your Edge installation. The Adding ZooKeeper nodes documentation describes how
to add additional ZooKeeper nodes to Edge. When adding ZooKeeper nodes, you
must take into consideration the type of nodes to add: voter or observer.
You want to make sure to have enough voter nodes so that if one or more voter
nodes are down the ZooKeeper ensemble can still function, meaning there is still a
quorum of voter nodes available. By adding voter nodes, you increase the size of the
quorum, and therefore you can tolerate more voter nodes being down.
However, adding additional voter nodes can negatively affect write performance
because write operations require the quorum to agree on the leader. The time it
takes to determine the leader is based on the number of voter nodes, which
increases as you add more voter nodes. Therefore, you do not want to make all the
nodes voters.
Rather than adding voter nodes, you can add observer nodes. Adding observer nodes
increases the overall system read performance without adding to the overhead of
electing a leader because observer nodes do not vote and do not affect the quorum
size. Therefore, if an observer node goes down, it does not affect the ability of the
ensemble to elect a leader. However, losing observer nodes can cause degradation
in the read performance of the ZooKeeper ensemble because there are fewer nodes
available to service data requests.
In a single Data Center, Apigee recommends that you have no more than five voters
regardless of the number of observer nodes. In two Data Centers, Apigee
recommends that you have no more than nine voters (five in one Data Center and
four in the other). You can then add as many observer nodes as necessary for
your system requirements.
There are many reasons you might want to remove a Zookeeper node; for example, a
node got corrupted or it was added to the wrong environment.
This section describes how to remove a Zookeeper node when the node is down and
not reachable.
1. Edit your silent configuration file and remove the IP address of the Zookeeper
node that you want to remove.
2. Re-run the setup command for Zookeeper to reconfigure the remaining
ZooKeeper nodes:
3. /opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper setup -f
updated_config_file
4. Restart all Zookeeper nodes:
5. /opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper restart
6. Reconfigure the Management Server node:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server setup -f
updated_config_file
Maintenance considerations
When working with multiple Data Centers, remember that the ZooKeeper ensemble
does not distinguish between Data Centers: ZooKeeper treats all of the ZooKeeper
nodes across all Data Centers as a single ensemble.
The location of the voter nodes in a given Data Center is not a factor when
ZooKeeper performs quorum calculations. Individual nodes can go down across
Data Centers, but as long as a quorum is preserved across the entire ensemble,
ZooKeeper remains functional.
Maintenance implications
At various times, you will have to take a ZooKeeper node down for maintenance,
either a voter node or an observer node. For example, you might have to upgrade the
version of Edge on the node, the machine hosting ZooKeeper might fail, or the node
might become unavailable for some other reason such as a network error.
If the node that goes down is an observer node, you can expect a slight degradation
in the performance of the ZooKeeper ensemble until the node is restored. If the node
is a voter node, it can impact the viability of the ZooKeeper ensemble due to the loss
of a node that participates in the leader election process. Regardless of the reason
for the voter node going down, it is important to maintain a quorum of available voter
nodes.
Maintenance procedure
When these conditions are met, a ZooKeeper ensemble of arbitrary size can tolerate
the loss of a single node at any point with no loss of data or meaningful impact to
performance. This means you are free to perform maintenance on any node in the
ensemble as long as it is on one node at a time.
Summary
1. Edit /opt/apigee/customer/application/zookeeper.properties
to set the following properties. If that file does not exist, create it.
2. Set the following properties in zookeeper.properties:
# Set the snapshot count. In this example, set it to 10:
conf_zoo_autopurge.snapretaincount=10
# Set the purge interval. In this example, set it to 240 hours:
conf_zoo_autopurge.purgeinterval=240
3. Make sure the file is owned by the "apigee" user:
chown apigee:apigee /opt/apigee/customer/application/zookeeper.properties
4. Restart ZooKeeper by using the command:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper restart
1. Edit /opt/apigee/customer/application/zookeeper.properties
to set the following properties. If that file does not exist, create it.
2. Set the following properties in zookeeper.properties:
conf_log4j_log4j.appender.rollingfile.maxfilesize=10MB # max file size
conf_log4j_log4j.appender.rollingfile.maxbackupindex=50 # max number of backup files
3. Make sure the file is owned by the "apigee" user:
chown apigee:apigee /opt/apigee/customer/application/zookeeper.properties
4. Restart ZooKeeper by using the command:
/opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper restart
4. uid: 29383a67-9279-4aa8-a75b-cfbf901578fc
5. Use ldappasswd to set the user's password based on the user's uid:
ldappasswd -h LDAP_IP -p 10389 -D "cn=manager,dc=apigee,dc=com" -W -s newPassWord \
"uid=29383a67-9279-4aa8-a75b-cfbf901578fc,ou=users,ou=global,dc=apigee,dc=com"
6. You are prompted for the OpenLDAP admin password.
4. "uid=admin,ou=users,ou=global,dc=apigee,dc=com"
5. You are prompted for the OpenLDAP admin password.
6. Update the config file that you used to install the Edge UI with the new Edge
system password:
7. APIGEE_ADMINPW=newPassWord
8. Configure and restart the Edge UI:
9. /opt/apigee/apigee-setup/bin/setup.sh -p ui -f configFile
10. (Only if TLS is enabled on the UI) Re-enable TLS on the Edge UI as described
in Configuring TLS for the management UI.
If OpenLDAP does not start, try starting it in debug mode and check for errors:
Perform the steps in the following procedure on the OpenLDAP replicator node,
which replicates its data to the other OpenLDAP node. For example, if you are setting
replication from node1 to node2, run the commands on node1.
6. timeout=1
7. Make sure you replace the appropriate values for the following placeholders:
● {NEW_HOST}: The new OpenLDAP host, to which you are planning to
replicate.
● {PORT}: The OpenLDAP port. The default port is 10389.
● {PASSWORD}: The OpenLDAP password.
8. Run the ldapmodify command:
ldapmodify -x -w {PASSWORD} -D "cn=admin,cn=config" -H "ldap://{HOST}:{PORT}/" -
f repl.ldif
9.
10. The output should be similar to the following:
11.
12. modifying entry "olcDatabase={2}bdb,cn=config"
13. Verify replication:
14. ldapsearch -H ldap://{HOST}:{PORT} -LLL -x -b "cn=config" -D
"cn=admin,cn=config" -w {PASSWORD} -o ldif-wrap=no 'olcSyncRepl' | grep
olcSyncrepl
15. The output should be similar to the following:
16. olcSyncrepl: {0}rid=001 provider=ldap://{NEW_HOST}:{PORT}/
binddn="cn=manager,dc=apigee,dc=com" bindmethod=simple
credentials={PASSWORD} searchbase="dc=apigee,dc=com" attrs="*,+"
type=refreshAndPersist retry="60 1 300 12 7200 +" timeout=1
17. You can verify that replication is working correctly by reading and comparing
the contextCSN value from each server and ensuring that they match.
18. ldapsearch -w {PASSWORD} -D "cn=manager,dc=apigee,dc=com" -b
"dc=apigee,dc=com" -LLL -h localhost -p 10389 contextCSN | grep contextCSN
/v1/organizations/org_name/environments/environment_name/ldapresources
<LdapResource name="ldap1">
<Connection>
<Hosts>
<Host port="636">foo.com</Host> <!-- port is optional: defaults to 389 for ldap://
and 636 for ldaps:// -->
</Hosts>
<SSLEnabled>false</SSLEnabled> <!-- optional, defaults to false -->
<Version>3</Version> <!-- optional, defaults to 3-->
<Authentication>simple</Authentication> <!-- optional, only simple supported -->
<ConnectionProvider>jndi|unboundid</ConnectionProvider> <!-- required -->
<ServerSetType>single|round robin|failover</ServerSetType> <!-- not applicable for
jndi -->
<LdapConnectorClass>com.custom.ldap.MyProvider</LdapConnectorClass> <!-- If
using a custom LDAP provider, the fully qualified class -->
</Connection>
<ConnectPool enabled="true"> <!-- enabled is optional, defaults to true -->
<Timeout>30000</Timeout> <!-- optional, in milliseconds; if not set, no timeout -->
<Maxsize>50</Maxsize> <!-- optional; if not set, no max connections -->
<Prefsize>30</Prefsize> <!-- optional; if not set, no pref size -->
<Initsize></Initsize> <!-- optional; if not set, defaults to 1 -->
<Protocol></Protocol> <!-- optional; if not set, defaults to 'ssl plain' -->
</ConnectPool>
<Admin>
<DN>cn=admin,dc=apigee,dc=com</DN>
<Password>secret</Password>
</Admin>
</LdapResource>
Example
The simplest way to rotate certificates without a custom CA is to uninstall and re-
install apigee-mtls. This removes all old certificates present, and generates fresh
certificates locally. You can do this with minimal downtime by performing the
following commands on each host, one at a time:
Note: This assumes the same silent.conf file that was used for the initial
installation is present.
8. rm -rf /opt/apigee/data/apigee-mtls
9. Copy the new cert/key pair generated in the first step into the following
locations, and update permissions:
cp ${new_cert} /opt/apigee/apigee-mtls/certs/local_cert.pem
chmod \
--reference=/opt/apigee/apigee-mtls/certs/ca_cert.pem \
/opt/apigee/apigee-mtls/certs/local_cert.pem
chown \
--reference=/opt/apigee/apigee-mtls/certs/ca_cert.pem \
/opt/apigee/apigee-mtls/certs/local_cert.pem
cp ${new_cert} /opt/apigee/apigee-mtls/source/certs/local_cert.pem
chmod \
--reference=/opt/apigee/apigee-mtls/certs/ca_cert.pem \
/opt/apigee/apigee-mtls/source/certs/local_cert.pem
chown \
--reference=/opt/apigee/apigee-mtls/certs/ca_cert.pem \
/opt/apigee/apigee-mtls/source/certs/local_cert.pem
cp ${new_key} /opt/apigee/apigee-mtls/certs/local_key.pem
chmod \
--reference=/opt/apigee/apigee-mtls/certs/ca_cert.pem \
/opt/apigee/apigee-mtls/certs/local_key.pem
chown \
--reference=/opt/apigee/apigee-mtls/certs/ca_cert.pem \
/opt/apigee/apigee-mtls/certs/local_key.pem
cp ${new_key} /opt/apigee/apigee-mtls/source/certs/local_key.pem
chmod \
--reference=/opt/apigee/apigee-mtls/certs/ca_cert.pem \
/opt/apigee/apigee-mtls/source/certs/local_key.pem
chown \
--reference=/opt/apigee/apigee-mtls/certs/ca_cert.pem \
/opt/apigee/apigee-mtls/source/certs/local_key.pem
11. Restart apigee-mtls:
12. /opt/apigee/apigee-service/bin/apigee-service apigee-mtls start
13. Restart all core Apigee components:
14. /opt/apigee/apigee-service/bin/apigee-all start
15. See Start/stop/check all components.
NOTE: Postgres autovacuum is enabled by default, so you do not have to explicitly run the
VACUUM command. Autovacuum reclaims storage by removing obsolete data from the
PostgreSQL database.

NOTE: Apigee does not recommend moving the PostgreSQL database without contacting
Apigee Customer Success. The Apigee system uses the PostgreSQL database server's IP
address. As a result, moving the database or changing its IP address without performing
corresponding updates on the Apigee environment metadata will cause undesirable results.
NOTE: No pruning is required for Cassandra. Expired tokens are automatically purged after 180
days.
Childfactables are daily-partitioned fact data. Every day, new partitions are created
and data is ingested into the daily partitioned tables. Later, when the old fact data is
no longer required, you can purge the respective childfactables.
● Delete-from-parent-fact. Default: No. If Yes, also deletes data older than the
retention days from the parent fact table.
● Confirm-delete-from-parent-fact. Default: No. If No, the script prompts for
confirmation before deleting data from the parent fact table. Set to Yes if the
purge script is automated.
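The retention logic described above can be sketched as follows; the partition dates are hypothetical, and the actual purge is performed by the Apigee-provided script:

```python
from datetime import date, timedelta

def partitions_to_purge(partition_dates, retention_days, today):
    """Return the daily childfactable partitions that fall outside
    the retention window and are candidates for purging."""
    cutoff = today - timedelta(days=retention_days)
    return [d for d in partition_dates if d < cutoff]

# Forty consecutive daily partitions starting 2024-01-01:
partitions = [date(2024, 1, 1) + timedelta(days=i) for i in range(40)]
old = partitions_to_purge(partitions, retention_days=30,
                          today=date(2024, 2, 10))
```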
NOTE: If you have configured master-standby replication between Postgres servers, then you
only need to run this script on the Postgres master.
While some component configuration files allow you to use host names rather than
IP addresses, using host names can complicate troubleshooting. For example, host
names can be the source of issues related to DNS server connectivity, lookup
failures, and synchronization.
For host names and IP addresses, consider the implications of the following
scenarios when moving Apigee servers:
● application/x-www-form-urlencoded' -X POST
● For type="reportcrud-datastore":
curl -u ADMINEMAIL:PW "http://MSIP:port/v1/servers" -d \
"Type=reportcrud-datastore&InternalIP=NEW_IP&region=REGION&pod=analytics" \
Change the IP Address and restart the ZooKeeper ensemble (for multi-node
ensemble configurations only)
1. Open /opt/apigee/apigee-zookeeper/conf/zoo.cfg in an editor.
You should see all ZooKeeper IP addresses and default settings in the form:
2. server.1=192.168.56.101:2888:3888
3. server.2=192.168.56.102:2888:3888
4. server.3=192.168.56.103:2888:3888
5. Save that information.
6. On each ZooKeeper node, edit the file
/opt/apigee/customer/application/zookeeper.properties file to
set the conf_zoo_quorum property to the correct IP addresses. If the file
does not exist, create it.
7. conf_zoo_quorum=server.1=192.168.56.101:2888:3888\nserver.2=192.168.56.102:2888:3888\nserver.3=192.168.56.104:2888:3888\n
8. Ensure that you insert "\n" after each IP address and that entries are in the
same order on every node.
9. Find the leader of the ZooKeeper ensemble by using the following command
(replace node with the IP address of the Zookeeper machine):
10. echo srvr | nc node 2181
11. The Mode line in the output should say "leader".
12. Restart the ZooKeeper nodes one after the other, starting with the leader and
ending with the node on which the IP address was changed. If more than one
ZooKeeper node changed IP addresses, it may be necessary to restart all nodes.
13. /opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper restart
14. Use the echo command described above to verify each ZooKeeper node.
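Building the conf_zoo_quorum value by hand is error-prone; the following sketch produces it from an ordered server list (entries must appear in the same order on every node, and a literal \n must follow each entry):

```python
def zk_quorum_property(servers):
    """Build the conf_zoo_quorum value from ordered (id, ip) pairs,
    inserting the literal \\n required after each entry."""
    return "".join("server.{sid}={ip}:2888:3888\\n".format(sid=sid, ip=ip)
                   for sid, ip in servers)

value = zk_quorum_property([(1, "192.168.56.101"),
                            (2, "192.168.56.102"),
                            (3, "192.168.56.104")])
```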
1. Use the following curl command to register the new internal and external IP
address:
curl -u ADMINEMAIL:PW -X PUT \
http://MSIP:8080/v1/servers/uuid -d ExternalIP=ip
curl -u ADMINEMAIL:PW -X PUT \
2. http://MSIP:8080/v1/servers/uuid -d InternalIP=ip
3. Where uuid is the UUID of the node.
For information on how to get a component's UUID, see Get UUIDs.
If the old master is restored at some point in the future, make it a standby node:
2. PG_STANDBY=IPorDNSofOldMaster
3. Note: Apigee strongly recommends that you use IP addresses rather than host names for
the PG_MASTER and PG_STANDBY properties in your silent configuration file. In addition,
you should be consistent on both nodes.
If you use host names rather than IP addresses, you must make sure that the host
names properly resolve using DNS.
4. Enable replication on the new master:
5. /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup-
replication-on-master -f configFIle
6. On the old master, edit the config file to set:
PG_MASTER=IPorDNSofNewMaster
7. PG_STANDBY=IPorDNSofOldMaster
8. Stop apigee-postgresql on the old master:
9. /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql stop
10. On the old master, clean out any old Postgres data:
11. rm -rf /opt/apigee/data/apigee-postgresql/
12. Note: If necessary, you can backup this data before deleting it.
13. Configure the old master as a standby:
14. /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup-
replication-on-standby -f configFile
15. On completion of replication, verify the replication status by issuing the
following scripts on both servers. The system should display identical results
on both servers to ensure a successful replication:
a. On the master node, run:
b. /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql
postgres-check-master
c. Verify that it says it is the master.
d. On the standby node:
e. /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql
postgres-check-standby
f. Verify that it says it is the standby.
Note: You can only use this procedure if you have monetization enabled.
1. Create a configuration file with the configurations below. The file should be
readable by the apigee user.
# IP address of a zookeeper node
ZK_HOSTS=XX.XX.XX.XX
# Postgres Host IP
MO_PG_HOST=XX.XX.XX.XX
# [OPTIONAL] Comma separated list of org names for which monetization is enabled
2. MO_ORG_NAMES=org1,org2
3. Run the following command on one of the management server nodes:
4. $APIGEE_ROOT/apigee-service/bin/apigee-service edge-mint-management-
server mint-pg-registration -f <path-to-config-file>
5. Restart all Management server and Message processor nodes.
$APIGEE_ROOT/apigee-service/bin/apigee-service edge-management-server restart
6. $APIGEE_ROOT/apigee-service/bin/apigee-service edge-message-processor
restart
Management Server plays a vital role in holding all other components together in an
on-premises installation of Edge Private Cloud. You can check for user, organization
and deployment status on the Management Server by issuing the following curl
commands:
curl -u admin_email:admin_passwd
http://localhost:8080/v1/organizations/orgname/deployments
The system should display 200 HTTP status for all calls. If these fail, do the
following:
The commands shown below take a config file as input. For example, you pass a
config file to the setup-org command to define all the properties of the organization,
including the environment and virtual host.
For a complete config file, and information on the properties that you can set in the
config file, see Onboard an organization.
A virtual host on Edge defines the domains and Edge Router ports on which an API
proxy is exposed, and, by extension, the URL that apps use to access an API proxy. A
virtual host also defines whether the API proxy is accessed by using the HTTP
protocol, or by the encrypted HTTPS protocol.
Use the scripts and API calls shown below to create a virtual host. When you create
the virtual host, you must specify the following information:
● The name of the virtual host that you use to reference it in your API proxies.
● The port on the Router for the virtual host. Typically these ports start at 9001
and increment by one for every new virtual host.
● The host alias of the virtual host. Typically the DNS name of the virtual host.
The Edge Router compares the Host header of the incoming request to the list
of host aliases as part of determining the API proxy that handles the request.
When making a request through a virtual host, either specify a domain name
that matches the host alias of a virtual host, or specify the IP address of the
Router and the Host header containing the host alias.
For example, if you created a virtual host with a host alias of myapis.apigee.net on
port 9001, then a curl request to an API through that virtual host could use one of
the following forms:
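The Host-header matching described above can be modeled with a small sketch. This is a simplification of the Router's actual behavior, and the virtual host names and aliases are hypothetical:

```python
def route_by_host(host_header, virtual_hosts):
    """Return the first virtual host whose alias list contains the
    Host header value (any :port suffix is stripped first)."""
    host = host_header.split(":")[0]
    for name, aliases in virtual_hosts.items():
        if host in aliases:
            return name
    return None

vhosts = {"default": ["myapis.apigee.net"]}
```

A request carrying Host: myapis.apigee.net:9001 would match the "default" virtual host; a request with an unknown Host would match nothing.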
Options when you do not have a DNS entry for the virtual host
One option when you do not have a DNS entry is to set the host alias to the IP
address of the Router and port of the virtual host, as routerIP:port. For example:
192.168.1.31:9001
curl http://routerIP:9001/proxy-base-path/resource-path
This option is preferred because it works well with the Edge UI.
If you have multiple Routers, add a host alias for each Router, specifying the IP
address of each Router and port of the virtual host.
Alternatively, you can set the host alias to a value, such as temp.hostalias.com.
Then, you have to pass the Host header on every request:
Or, add the host alias to your /etc/hosts file. For example, add this line to
/etc/hosts:
192.168.1.31 temp.hostalias.com
curl -v http://myapis.apigee.net:9001/proxy-base-path/resource-path
Creating an organization, environment,
and virtual host
You can create an organization, environment, and virtual host on the command line
in a single command, or you can create each one separately. In addition, you can use
the Edge API to perform many of these actions.
Video: Watch a short video for an overview of Apigee Organization setup and
configuration.
NOTE: When creating a virtual host, you specify the Router port used by the virtual host.
For example, port 9001. By default, the Router runs as the user "apigee" which does not have
access to privileged ports, which are typically ports 1024 and below. To create a virtual host that
binds the Router to a protected port, you must configure the Router to run as a user with access
to those ports.
Where configFile is the path to a configuration file that looks similar to the following:
● Creates the organization. NOTE: You cannot rename an organization after you create it.
● Associates the organization with the "gateway" pod. You cannot change this.
● Adds the specified user as the organization administrator. If the user does
not exist, you can create one.
● Creates one or more environments.
● Creates one or more virtual hosts for each environment.
● Associates the environment with all Message Processor(s).
● Enables analytics.
By default, the maximum length of the organization name and environment name is
20 characters when using the apigee-provision utility. This limit does not apply
if you use the Edge API directly to create the organization or environment. Both the
org name and environment name must be lower case.
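The limits in this section (lowercase names, the restricted character set described under "Create an organization" below, and the 20-character apigee-provision limit) can be expressed as a small, hypothetical validator:

```python
import re

# Characters permitted in the name attribute: a-z, 0-9, -, $, %
NAME_RE = re.compile(r"^[a-z0-9\-$%]+$")

def valid_provision_name(name, max_len=20):
    """Check an org or environment name against the apigee-provision
    limits: lowercase, restricted character set, length cap."""
    return len(name) <= max_len and bool(NAME_RE.match(name))
```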
NOTE: Rather than using the apigee-provision utility, you can use a set of additional scripts,
or the Edge API itself, to create and configure an organization. For example, use these scripts to
add an organization and associate it with multiple pods. The following sections describe those
scripts and APIs in more detail.
Create an organization
Use the create-org command to create an organization, as the following example
shows:
The configuration file contains the name of the organization and the email address
of the organization admin. The characters you can use in the name attribute are
restricted to a-z0-9\-$%. Do not use spaces, periods or upper-case letters in the
name:
You can use the following API calls to create an organization. The first call creates
the organization:
curl -H "Content-Type:application/x-www-form-urlencoded" \
-u sysAdminEmail:adminPasswd -X POST \
http://management_server_IP:8080/v1/organizations/org_name/pods \
-d "region=default&pod=gateway"
The final call adds an existing user as the org admin for the org:
If the user does not exist, you can use the following call to create the user as
described in Adding a user.
Create an environment
Use the add-env script to create an environment in an existing organization:
This config file contains the information necessary to create the environment and
virtual host:
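The listing was dropped from this copy. A hedged sketch of the environment and virtual-host keys (names follow apigee-provision conventions; values are placeholders):

```
ENV_NAME=prod
VHOST_PORT=9001
VHOST_NAME=default
VHOST_ALIAS="$IP1:9001"
```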
Alternatively, you can use the following API calls to create an environment. The first
call creates the environment:
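That call was dropped here; a sketch based on the standard management API:

```shell
curl -H "Content-Type:application/x-www-form-urlencoded" \
-u sysAdminEmail:adminPasswd -X POST \
http://management_server_IP:8080/v1/organizations/org_name/environments \
-d "name=env_name"
```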
The next call associates the environment with a Message Processor. Make this call
for each Message Processor that you want to associate with the environment:
curl -H "Content-Type:application/x-www-form-urlencoded" \
-u sysAdminEmail:adminPasswd -X POST \
http://management_server_IP:8080/v1/organizations/org_name/environments/
env_name/servers \
-d "action=add&uuid=uuid"
Where uuid is the UUID of the Message Processor. You can obtain the UUID by using
the following command:
curl http://Message_Processor_IP:8082/v1/servers/self
The next API call enables Analytics for a given environment. It validates the
existence of Qpid and Postgres Servers in the PODs of all datacenters. Then it starts
the Analytics onboarding for the given organization and environment.
NOTE: Rather than use the following API call, use the following command to enable Analytics:
/opt/apigee/apigee-service/bin/apigee-service apigee-provision enable-ax -f configFile
{
"properties" : {
"samplingAlgo" : "reservoir_sampler",
"samplingTables" : "10=ten;1=one;",
"aggregationinterval" : "300000",
"samplingInterval" : "300000",
"useSampling" : "100",
"samplingThreshold" : "100000"
},
"servers" : {
"postgres-server" : [ "1acff3a5-8a6a-4097-8d26-d0886853239c", "f93367f7-edc8-
4d55-92c1-2fba61ccc4ab" ],
"qpid-server" : [ "d3c5acf0-f88a-478e-948d-6f3094f12e3b", "74f67bf2-86b2-44b7-
a3d9-41ff117475dd"]
}
}
The output of this command is a JSON object. For each Qpid server, you will see
output in the form:
"type" : [ "qpid-server" ],
"uUID" : "d3c5acf0-f88a-478e-948d-6f3094f12e3b"
For each Postgres server, you will see output in the form:
"type" : [ "postgres-server" ],
"uUID" : "d3c5acf0-f88a-478e-948d-6f3094f12e3b"
Create a virtual host
You can create a virtual host in an existing environment in an organization. Often an
environment supports multiple virtual hosts. For example, one virtual host might
support the HTTP protocol, while another virtual host in the same environment
supports the encrypted HTTPS protocol.
Use the following API call to create additional virtual hosts, or to create a virtual host
for an environment with no virtual host:
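The call was dropped from this copy. As a sketch, the management API accepts a JSON definition of the virtual host (the name, host alias, and port below are placeholders):

```shell
curl -H "Content-Type:application/json" \
-u sysAdminEmail:adminPasswd -X POST \
http://management_server_IP:8080/v1/organizations/org_name/environments/env_name/virtualhosts \
-d '{
  "name": "newvhost",
  "hostAliases": ["api.example.com"],
  "port": "9005"
}'
```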
Deleting a virtual host, environment, or organization
This section shows how to remove organizations, environments, and virtual hosts.
The order of API calls is very important; for example, the step to remove an
organization can only be executed after you remove all associated environments in
the organization.
Delete an environment
You can only delete an environment after you have:
NOTE: To retrieve the UUID of the Message Processor, execute the following command:
curl "http://mp_IP:8082/v1/servers/self"
Clean up analytics
If you are unsure of the name of the analytics group, use the following command to
display it:
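The command was dropped here; a sketch using the analytics groups endpoint used throughout this section:

```shell
curl -u sysAdminEmail:adminPasswd "http://ms_IP:8080/v1/analytics/groups/ax"
```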
This command returns the analytics group name in the name field.
Drop fact and aggregate tables for specific Organization and Environment
Warning: This step is irreversible. Create a backup before proceeding.
To delete an environment:
curl -u ADMIN_EMAIL:ADMIN_PASSWORD \
"http://ms_IP:8080/v1/organizations/org_name/environments/env_name" \
-X DELETE
Delete an organization
You can only delete an organization after you have:
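The prerequisite list and the deletion call were dropped from this copy. The call itself, per the standard management API, is a plain DELETE:

```shell
curl -u ADMIN_EMAIL:ADMIN_PASSWORD \
"http://ms_IP:8080/v1/organizations/org_name" -X DELETE
```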
b. curl -u sysAdminEmail:passWord -X POST 'https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew'
c. Add a consumer group to the new analytics group, named consumer-
group-new. Consumer group names are unique within the context of
each analytics group:
d. curl -u sysAdminEmail:passWord -X POST "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/consumer-groups?name=consumer-group-new"
e. Set the consumer type of the analytics group to "ax":
f. curl -u sysAdminEmail:passWord -X POST "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/properties?propName=consumer-type&propValue=ax"
g. Add the data center name. By default, you install Edge with a data
center named "dc-1". However, if you have multiple data centers, they
each have a unique name. This call is optional if you only have a single
data center, and recommended if you have multiple data centers:
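The call itself was dropped here; following the properties pattern used in the previous step (and consistent with the "region" : "dc-1" property in the output shown later), it would be:

```shell
curl -u sysAdminEmail:passWord -X POST \
"https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/properties?propName=region&propValue=dc-1"
```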
d. curl -u sysAdminEmail:passWord -X POST 'https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/servers?uuid=UUID&type=postgres-server&force=true'
e. If you have multiple Postgres servers configured as a master/standby
pair, then add them by specifying a comma-separated list of UUIDs:
f. curl -u sysAdminEmail:passWord -X POST 'https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/servers?uuid=UUID_Master,UUID_standby&type=postgres-server&force=true'
g. This command returns the information about the analytics group,
including the UUID of the Postgres server in the postgres-server
property under uuids:
h. {
i. "name" : "axgroupNew",
j. "properties" : {
"region" : "dc-1",
"consumer-type" : "ax"
},
"scopes" : [ ],
"uuids" : {
"qpid-server" : [ ],
"postgres-server" : [ "2cb7211f-eca3-4eaf-9146-66363684e220" ]
},
"consumer-groups" : [ {
"name" : "consumer-group-new",
"consumers" : [ ],
"datastores" : [ ],
"properties" : {
}
} ],
"data-processors" : {
}
}
k. Add the Postgres server to the data store of the consumer group. This
call is required to route analytics messages from the Qpid servers to
the Postgres servers:
l. curl -u sysAdminEmail:passWord -X POST "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/consumer-groups/consumer-group-new/datastores?uuid=UUID"
m. If multiple Postgres servers are configured as a master/standby pair,
then add them by specifying a comma-separated list of UUIDs:
n. curl -u sysAdminEmail:passWord -X POST "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/consumer-groups/consumer-group-new/datastores?uuid=UUID_Master,UUID_standby"
o. The UUID appears in the datastores property of the consumer-
groups in the output.
3. Add the UUIDs of all Qpid servers to the new analytics group. You must
perform this step for all Qpid servers.
a. To get the UUIDs of the Qpid servers, run the following cURL command
on every Qpid server node:
b. curl -u sysAdminEmail:passWord https://QP_IP:8083/v1/servers/self
c. Add the Qpid server to the analytics group:
d. curl -u sysAdminEmail:passWord -X POST 'https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/servers?uuid=UUID&type=qpid-server'
e. Add the Qpid server to the consumer group:
f. curl -u sysAdminEmail:passWord -X POST "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/consumer-groups/consumer-group-new/consumers?uuid=UUID"
g. This call returns the following where you can see the UUID of the Qpid
server added to the qpid-server property under uuids, and to the
consumers property under consumer-groups:
h. {
i. "name" : "axgroupNew",
j. "properties" : {
"region" : "dc-1",
"consumer-type" : "ax"
},
"scopes" : [ ],
"uuids" : {
"qpid-server" : [ "fb6455c3-f5ce-433a-b98a-bdd016acd5af" ],
"postgres-server" : [ "2cb7211f-eca3-4eaf-9146-66363684e220" ]
},
"consumer-groups" : [ {
"name" : "consumer-group-new",
"consumers" : [ "fb6455c3-f5ce-433a-b98a-bdd016acd5af" ],
"datastores" : [ "2cb7211f-eca3-4eaf-9146-66363684e220" ],
"properties" : {
}
} ],
"data-processors" : {
}
}
4. Provision an organization and environment for the new AX group:
curl -u sysAdminEmail:passWord -H "Content-Type: application/json" \
-X POST "https://MS_IP:8080/v1/analytics/groups/ax/axgroupNew/scopes?org=org_name&env=env_name"
Many of the operations that you perform to manage users require system
administrator privileges. In a Cloud-based installation of Edge, Apigee functions in
the role of system administrator. In an Edge for Private Cloud installation, your
system administrator must perform these tasks as described below.
Adding a user
You can create a user by using the Edge API, the Edge UI, or Edge commands.
This section describes how to use Edge API and Edge commands. For information
on creating users in the Edge UI, see Creating global users.
After you create the user in an organization, you must assign a role to the user. Roles
determine the access rights of the user on Edge.
Note: The user cannot log in to the Edge UI, and does not appear in the list of users in the Edge
UI, until you assign it to a role in an organization.
Use the following command to create a user with the Edge API:
curl -H "Content-Type:application/xml" \
-u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD -X POST \
http://ms_IP:8080/v1/users \
-d '<User>
<FirstName>New</FirstName>
<LastName>User</LastName>
<Password>NEW_USER_PASSWORD</Password>
<EmailId>foo@bar.com</EmailId>
</User>'
Where the configFile creates the user, as the following example shows:
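The command that consumes this configFile was dropped from this copy. It is the apigee-provision create-user action; the config keys sketched in the comments follow its conventions, and all values are placeholders:

```shell
/opt/apigee/apigee-service/bin/apigee-service apigee-provision create-user -f configFile

# where configFile contains entries such as:
# USER_NAME=foo@bar.com
# FIRST_NAME=New
# LAST_NAME=User
# USER_PWD="NEW_USER_PASSWORD"
# USER_EMAIL=foo@bar.com
```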
You can then use this call to view information about the user:
curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD
http://ms_IP:8080/v1/users/foo@bar.com
This call displays all the roles assigned to the user. If you want to add the user, but
display only the new role, use the following call:
You can view the user's roles by using the following command:
curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD
http://ms_IP:8080/v1/users/foo@bar.com/userroles
To remove a user from an organization, remove all roles in that organization from the
user. Use the following command to remove a role from a user:
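The removal call was dropped here; it mirrors the sysadmin-role removal shown later in this section (role name and user email are placeholders):

```shell
curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD -X DELETE \
http://ms_IP:8080/v1/organizations/org_name/userroles/role_name/users/foo@bar.com
```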
● Create orgs
● Add Routers, Message Processors, and other components to an Edge
installation
● Configure TLS/SSL
● Create additional system administrators
● Perform all Edge administrative tasks
While only a single user is the default user for administrative tasks, there can be
more than one system administrator. Any user who is a member of the "sysadmin"
role has full permissions to all resources.
Note: The "sysadmin" role is unique in that a user assigned to that role does not have to be part
of an organization. However, you typically assign it to an organization, otherwise that user cannot
log in to the Edge UI.
You can create the user for the system administrator in either the Edge UI or API.
However, you must use the Edge API to assign the user to the role of "sysadmin".
Assigning a user to the "sysadmin" role cannot be done in the Edge UI.
3. curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD -X POST http://ms_IP:8080/v1/userroles/sysadmin/users -d 'id=foo@bar.com'
4. Ensure that the new user is in the "sysadmin" role:
5. curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD
http://ms_IP:8080/v1/userroles/sysadmin/users
6. Returns the user's email address:
7. [ "foo@bar.com" ]
8. Check permissions of new user:
9. curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD
http://ms_IP:8080/v1/users/foo@bar.com/permissions
10. Returns:
{
"resourcePermission" : [ {
"path" : "/",
"permissions" : [ "get", "put", "delete" ]
} ]
}
14. After you add the new system administrator, you can add the user to any orgs.
Note: The new system administrator user cannot log in to the Edge UI until you add the
user to at least one org.
15. If you later want to remove the user from the system administrator role, you
can use the following API:
curl -X DELETE -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD \
http://ms_IP:8080/v1/userroles/sysadmin/users/foo@bar.com
17. Note that this call only removes the user from the role; it does not delete the
user.
curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD
http://ms_IP:8080/v1/userroles/sysadmin/users
ADMIN_EMAIL=foo@bar.com
Note: As part of changing the default system administrator, you have to update the Edge UI.
4. SMTPSSL=y
5. Note that you must include the SMTP properties because all properties on the
UI are reset.
6. Reconfigure the Edge UI:
/opt/apigee/apigee-service/bin/apigee-service edge-ui stop
/opt/apigee/apigee-service/bin/apigee-service edge-ui setup -f configFile
If you just want to change the email address of the user account for the current
default system administrator, you first update the user account to set the new email
address, then change the default system administrator email address:
1. Update the user account of the current default system administrator user with
a new email address:
curl -H content-type:application/json -X PUT -u
CURRENT_SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD \
http://ms_IP:8080/v1/users/CURRENT_SYS_ADMIN_EMAIL \
By default, the required domain is empty, meaning you can add any email address to
the "sysadmin" role.
Deleting a user
You can create a user either by using the Edge API or the Edge UI. However, you can
only delete a user by using the API.
To see the list of current users, including email address, use the following curl
command:
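The commands were dropped from this copy. As a sketch, listing is a plain GET against the users collection, and deletion (per the pattern used elsewhere in this document) is a DELETE against a single user:

```shell
# List all users:
curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD http://ms_IP:8080/v1/users

# Then delete a user by email address:
curl -u SYS_ADMIN_EMAIL:SYS_ADMIN_PASSWORD -X DELETE \
http://ms_IP:8080/v1/users/foo@bar.com
```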
This section shows multiple methods you can use to get UUIDs of Private Cloud
components.
Note that the port numbers are different, depending on which component you call.
If you call the API from the machine itself, then you do not need to specify a
username and password. If you call the API remotely, you must specify the Edge
administrator's username and password, as the following example shows:
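The examples were dropped here. As a sketch, each component exposes /v1/servers/self on its own management port; the ports shown are the Edge defaults:

```shell
# Router
curl -u adminEmail:adminPasswd http://router_IP:8081/v1/servers/self
# Message Processor
curl -u adminEmail:adminPasswd http://mp_IP:8082/v1/servers/self
# Qpid Server
curl -u adminEmail:adminPasswd http://qpid_IP:8083/v1/servers/self
# Postgres Server
curl -u adminEmail:adminPasswd http://pg_IP:8084/v1/servers/self
```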
Each of these calls returns a JSON object that contains details about the service.
The uUID property specifies the service's UUID, as the following example shows:
{
"buildInfo" : {
...
},
...
"tags" : {
...
},
"type" : [ "router" ],
"uUID" : "71ad42fb-abd1-4242-b795-3ef29342fc42"
}
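If you script against this endpoint, note the unusual capitalization of the uUID property. A minimal sketch (ours, not part of the Edge tooling) that extracts it from a saved response:

```python
import json

# A trimmed copy of the /v1/servers/self response shown above.
response_text = """
{
  "type": ["router"],
  "uUID": "71ad42fb-abd1-4242-b795-3ef29342fc42"
}
"""

service = json.loads(response_text)
# The property is spelled "uUID", not "uuid" or "UUID".
print(service["uUID"])  # 71ad42fb-abd1-4242-b795-3ef29342fc42
```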
Use apigee-adminapi.sh
You can get the UUIDs of some components by using the servers list option of
the apigee-adminapi.sh utility. To get UUIDs with apigee-adminapi.sh, use
the following syntax:
Where:
For example:
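The example was dropped here. A hedged sketch of the invocation (flag names follow apigee-adminapi.sh conventions; region, pod, host, and credentials are placeholders):

```shell
/opt/apigee/apigee-adminapi/bin/apigee-adminapi.sh servers list \
  -r dc-1 -p gateway --admin adminEmail --pwd adminPword --host localhost
```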
This command returns a complex JSON object that specifies the same properties for
each service as the management API calls.
As with the management API calls, you can optionally set the Accept header to
application/xml to instruct apigee-adminapi.sh to return XML rather than
JSON. For example:
You must submit the statistics for your production API deployments, but not for APIs
in development or testing deployments. In most Edge installations, you will define
specific organizations or environments for your production APIs. The statistics that
you submit are only for those production organizations and environments.
You must repeat this process for every production organization and environment in
your Edge installation.
Use the following curl command to gather traffic data for a specific organization
and environment for a specified time interval:
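The command was dropped from this copy. A sketch consistent with the sample output below (hourly sum(message_count) for one day; the org, env, and time range are placeholders, and the time range is URL-encoded):

```shell
curl -u adminEmail:adminPasswd \
"http://ms_IP:8080/v1/organizations/myorg/environments/prod/stats/apiproxy?select=sum(message_count)&timeRange=01/01/2018%2000:00~01/02/2018%2000:00&timeUnit=hour"
```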
This command uses the Edge Get API message count API. In this command:
{
"environments" : [ {
"dimensions" : [ {
"metrics" : [ {
"name" : "sum(message_count)",
"values": [
{
"timestamp": 1514847600000,
"value": "35.0"
},
{
"timestamp": 1514844000000,
"value": "19.0"
},
{
"timestamp": 1514840400000,
"value": "58.0"
},
{
"timestamp": 1514836800000,
"value": "28.0"
},
{
"timestamp": 1514833200000,
"value": "29.0"
},
{
"timestamp": 1514829600000,
"value": "33.0"
},
{
"timestamp": 1514826000000,
"value": "26.0"
},
{
"timestamp": 1514822400000,
"value": "57.0"
},
{
"timestamp": 1514818800000,
"value": "41.0"
},
{
"timestamp": 1514815200000,
"value": "27.0"
},
{
"timestamp": 1514811600000,
"value": "47.0"
},
{
"timestamp": 1514808000000,
"value": "66.0"
},
{
"timestamp": 1514804400000,
"value": "50.0"
},
{
"timestamp": 1514800800000,
"value": "41.0"
},
{
"timestamp": 1514797200000,
"value": "49.0"
},
{
"timestamp": 1514793600000,
"value": "35.0"
},
{
"timestamp": 1514790000000,
"value": "89.0"
},
{
"timestamp": 1514786400000,
"value": "42.0"
},
{
"timestamp": 1514782800000,
"value": "47.0"
},
{
"timestamp": 1514779200000,
"value": "21.0"
},
{
"timestamp": 1514775600000,
"value": "27.0"
},
{
"timestamp": 1514772000000,
"value": "20.0"
},
{
"timestamp": 1514768400000,
"value": "12.0"
},
{
"timestamp": 1514764800000,
"value": "7.0"
}
]
}
],
"name" : "proxy1"
} ],
"name" : "prod"
} ],
"metaData" : {
"errors" : [ ],
"notices" : [ "query served by:53dab80c-e811-4ba6-a3e7-b96f53433baa", "source
pg:6b7bab33-e732-405c-a5dd-4782647ce096", "Table used: myorg.prod.agg_api" ]
}
}
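To turn such a response into a single traffic number, sum the hourly values across every proxy dimension. A short sketch (ours, not Apigee tooling) against a trimmed copy of the response above:

```python
# Trimmed copy of the stats response: one proxy with three hourly values.
stats = {
    "environments": [{
        "name": "prod",
        "dimensions": [{
            "name": "proxy1",
            "metrics": [{
                "name": "sum(message_count)",
                "values": [
                    {"timestamp": 1514847600000, "value": "35.0"},
                    {"timestamp": 1514844000000, "value": "19.0"},
                    {"timestamp": 1514840400000, "value": "58.0"},
                ],
            }],
        }],
    }]
}

total = 0.0
for env in stats["environments"]:
    for dim in env["dimensions"]:
        for metric in dim["metrics"]:
            if metric["name"] == "sum(message_count)":
                total += sum(float(v["value"]) for v in metric["values"])

print(total)  # 112.0 for these three sample hours
```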
Disabling RPMs for patch releases
When you perform an OS patch update with sudo yum update, the update pulls
down new Apigee RPMs if they have changed since their last update. The next time
the components are restarted, change deltas will be reported due to the RPM
changes, which you might not want.
To prevent Apigee components from getting the new RPMs when an OS patch update
is done through the sudo yum update command, you can disable the Apigee
repos from getting automated upgrades. Later, when you need to apply Apigee
upgrades, you can re-enable those repos.
When upgrades to Apigee are needed, reenable the repos with this command:
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
Note: In a Postgres master/standby configuration, you only back up the master. You do not have
to back up the standby (or slave).
An RPO (Recovery Point Objective) is the maximum tolerable period in which data
might be lost from an IT service due to a major incident. Both this objective and your
Recovery Time Objective (RTO) must be taken into consideration before you
implement a backup plan for your recovery strategy.
How to back up
Note: When backing up a complete Edge installation, you must back up all components. See
"What to Backup" at Backup and restore for more.
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
For example:
To perform a backup:
1. Stop the component (except for PostgreSQL and Cassandra, which must be
running to be backed up):
2. /opt/apigee/apigee-service/bin/apigee-service component_name stop
3. Run the backup command:
4. /opt/apigee/apigee-service/bin/apigee-service component_name backup
5. The backup command then:
● Creates a tar file of the following directories and files, where
component_name is the name of the component:
a. Directories
● /opt/apigee/data/component_name
● /opt/apigee/etc/component_name.d
b. Files if they exist
● /opt/apigee/token/application/
component_name.properties
● /opt/apigee/customer/application/
component_name.properties
● /opt/apigee/customer/defaults.sh
● /opt/apigee/customer/conf/license.txt
● Creates a .tar.gz file in the /opt/apigee/backup/component_name
directory. The file name has the following form:
backup-year.month.day,hour.min.seconds.tar.gz
For example:
backup-2018.05.29,11.13.42.tar.gz
For PostgreSQL, the file name has the following form:
year.month.day,hour.min.seconds.dump
6. Start the component (except for PostgreSQL and Cassandra, which you did not
stop):
7. /opt/apigee/apigee-service/bin/apigee-service component_name start
If you have multiple Edge components installed on the same node, you can back
them all up with a single command:
/opt/apigee/apigee-service/bin/apigee-all backup
NOTE: Ensure that you stop the components, except for PostgreSQL and Cassandra, before
doing the backup. Then start the components after the backup completes.
This command creates a backup file for each component on the node.
● Uses the specified backup file or gets the latest backup file, if a filename was
not specified.
● Ensures that the component's data directories are empty.
● Stops the component. You must explicitly restart the component after a
restore.
2. /opt/apigee/etc/component_name.d
3. If they are not empty, delete their contents using commands like the following:
rm -r /opt/apigee/data/component_name
4. rm -r /opt/apigee/etc/component_name.d
5. Restore the previous configuration and data by using the following command:
6. /opt/apigee/apigee-service/bin/apigee-service component_name restore
backup_file
7. Where:
● component_name is the name of the component. Possible values
include:
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
● backup_file is the name of the file you created when you backed up that
component; this value does not include the path, but does include the
"backup-" prefix and file extensions. For example, backup-
2019.03.17,14.40.41.tar.gz.
8. Note: backup_file should not include the path or backup- prefix, or the file extension. For
example, 2019.03.17,14.40.41.
Note: In the case of apigee-postgresql components, backup_file should not include
the path or database name (for example, apigee-) as a prefix, nor should it have the
.dump file extension. For example, if the file name is
apigee-2019.03.17,14.40.41.dump, you should only specify the date/time part:
2019.03.17,14.40.41.
For example:
9. /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restore
backup-2019.03.17,14.40.41.tar.gz
10. Specifying backup_file is optional. If omitted, Apigee uses the most recent file
in /opt/apigee/backup/component_name.
The restore command re-applies the backed up configuration and restores
the data from when the backup took place.
11. Restart the component, as the following example shows:
12. /opt/apigee/apigee-service/bin/apigee-service component_name start
NOTE: There is no "restore all" command. You must restore each component individually.
If you have to re-install the component see How to Reinstall and Restore
Components.
Apache ZooKeeper
Restore one standalone node
1. Remove old ZooKeeper directories:
/opt/apigee/data/apigee-zookeeper
2. /opt/apigee/etc/apigee-zookeeper.d
3. Restore ZooKeeper data from the backup file:
4. /opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper restore
backup-2016.03.17,14.40.41.tar.gz
5. Restart all components to establish synchronization with the new restored
ZooKeeper.
1. If a single ZooKeeper node that is part of an ensemble fails, you can create a
new node with the same hostname/IP address (follow the re-install steps
mentioned in How to Reinstall and Restore Components). When it joins the
ZooKeeper ensemble, it will get the latest snapshots from the Leader and start
to serve clients. You do not need to restore data in this instance.
Apache Cassandra
Restore one standalone node
1. If a single Cassandra node that is part of an ensemble fails, you can create a
new node with the same hostname/IP address (follow the re-install steps
mentioned in How to Reinstall and Restore Components). You only need to re-
install Cassandra; you do not need to restore the data.
When performing a restore on a non-seed node, ensure that at least one
Cassandra seed node is up.
After installing Cassandra, once the node is up (given that RF >= 2 for all
keyspaces), execute the following nodetool command to initialize the node:
2. /opt/apigee/apigee-cassandra/bin/nodetool [-u username -pw password] -h
localhost repair -pr
3. You only need to pass your username and password if you enabled JMX
authentication for Cassandra.
PostgreSQL database
PostgreSQL running standalone or as Master
1. Stop the Management Server, Qpid Server, and Postgres Server on all nodes.
Note: Your system can still handle requests to API proxies while these components are stopped.
1. Reconfigure PostgreSQL database using the same config file you used to
install it:
2. /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup -f
configFile
3. Start PostgreSQL:
4. /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql start
Postgres Server
2. /opt/apigee/etc/edge-postgres-server.d
3. Restore Postgres Server from the backup file:
4. /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restore
backup-2016.03.17,14.40.41.tar.gz
5. Start Postgres Server:
6. /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server start
Qpidd database
2. /opt/apigee/etc/apigee-qpidd.d
3. Restore Qpidd:
4. /opt/apigee/apigee-service/bin/apigee-service apigee-qpidd restore backup-
2016.03.17,14.40.41.tar.gz
5. Start Qpidd:
6. /opt/apigee/apigee-service/bin/apigee-service apigee-qpidd start
Qpid Server
2. /opt/apigee/etc/edge-qpid-server.d
3. Restore Qpid Server from the backup file:
4. /opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restore
backup-2016.03.17,14.40.41.tar.gz
5. Start Qpid Server:
6. /opt/apigee/apigee-service/bin/apigee-service edge-qpid-server start
OpenLDAP
1. Remove old OpenLDAP directories:
/opt/apigee/data/apigee-openldap
2. /opt/apigee/etc/apigee-openldap.d
3. Restore OpenLDAP from the backup file:
4. /opt/apigee/apigee-service/bin/apigee-service apigee-openldap restore
2016.03.17,14.40.41
5. Restart OpenLDAP:
6. /opt/apigee/apigee-service/bin/apigee-service apigee-openldap start
Management Server
2. /opt/apigee/etc/edge-management-server.d
3. Restore Management Server from the backup file:
4. /opt/apigee/apigee-service/bin/apigee-service edge-management-server
restore backup-2016.03.17,14.40.41.tar.gz
5. Restart the Management Server:
6. /opt/apigee/apigee-service/bin/apigee-service edge-management-server start
Message Processor
2. /opt/apigee/etc/edge-message-processor.d
3. Restore Message Processor from the backup file:
4. /opt/apigee/apigee-service/bin/apigee-service edge-message-processor
restore backup-2016.03.17,14.40.41.tar.gz
5. Restart Message Processor:
6. /opt/apigee/apigee-service/bin/apigee-service edge-message-processor start
Router
2. /opt/apigee/etc/edge-router.d
3. Restore Router from the backup file:
4. /opt/apigee/apigee-service/bin/apigee-service edge-router restore backup-
2016.03.17,14.40.41.tar.gz
5. Restart Router:
6. /opt/apigee/apigee-service/bin/apigee-service edge-router start
Edge UI
2. /opt/apigee/etc/edge-ui.d
3. Restore UI from the backup file:
4. /opt/apigee/apigee-service/bin/apigee-service edge-ui restore backup-
2016.03.17,14.40.41.tar.gz
5. Restart UI:
6. /opt/apigee/apigee-service/bin/apigee-service edge-ui start
NOTE: These procedures all assume that you are re-installing the component onto a node with
the same IP address or DNS name as the node that failed. For Cassandra, you can only use IP
addresses.
Apache ZooKeeper
Restore one standalone node
1. Stop ZooKeeper:
2. /opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper stop
3. Remove old ZooKeeper directories:
/opt/apigee/data/apigee-zookeeper
4. /opt/apigee/etc/apigee-zookeeper.d
5. Re-install ZooKeeper:
6. /opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper install
7. Restore ZooKeeper:
8. /opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper restore
2019.03.17,14.40.41
9. Note that when restoring a component, you do not specify the directory path
to the backup file, nor do you specify the "backup-" prefix or the ".tar.gz" suffix.
You only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
10. Restart all components:
11. /opt/apigee/apigee-service/bin/apigee-all restart
If a single ZooKeeper node that is part of an ensemble fails, you can create a new
node with the same hostname/IP address and re-install ZooKeeper. When the new
ZooKeeper node joins the ensemble, it will get the latest snapshots from the Leader
and start to serve clients. You do not need to restore data in this instance.
1. Re-install ZooKeeper:
2. /opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper install
3. Run setup on the ZooKeeper node using the same config file used when
installing the original node:
4. /opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper setup -f
configFile
5. Start ZooKeeper:
6. /opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper start
Apache Cassandra
Restore one standalone node
1. Stop Cassandra:
2. /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra stop
3. Remove old Cassandra directory:
4. /opt/apigee/data/apigee-cassandra
5. Re-install Cassandra:
6. /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra install
7. Restore Cassandra:
8. /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restore
2019.03.17,14.40.41
9. Note that when restoring a component, you do not specify the directory path
to the backup file, nor do you specify the "backup-" prefix or the ".tar.gz" suffix.
You only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
10. Restart all components:
If a single Cassandra node that is part of an ensemble fails, you can create a new
node with the same hostname/IP address. You only need to re-install Cassandra; you
do not need to restore the data.
NOTE: When performing a re-install on a non-seed node, ensure that at least one Cassandra seed
node is up.
1. Re-install Cassandra:
2. /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra install
3. Run setup on the Cassandra node using the same config file used when
installing the original node:
4. /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra setup -f
configFile
5. Start Cassandra:
6. /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra start
7. After installing Cassandra, once the node is up (given that RF >= 2 for all
keyspaces), execute the following nodetool command to initialize the node:
PostgreSQL database
PostgreSQL running standalone or as Master
1. Stop the Management Server, Qpid Server, and Postgres Server on all nodes.
NOTE: Your system can still handle requests to API proxies while these components are stopped.
Postgres Server
1. Stop Postgres Server on all master and standby nodes:
2. /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server stop
3. Remove old Postgres Server directories:
4. /opt/apigee/data/edge-postgres-server /opt/apigee/etc/edge-postgres-
server.d
5. Re-install Postgres Server:
6. /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server install
7. Restore Postgres Server from the backup file:
8. /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restore
2019.03.17,14.40.41
9. Note that when restoring a component, you do not specify the directory path
to the backup file, nor do you specify the "backup-" prefix or the ".tar.gz" suffix.
You only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
10. Start Postgres Server on all master and standby nodes:
11. /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server start
Qpid Server and Qpidd
1. Stop Qpidd, Qpid Server, and Postgres Server on all nodes:
4. /opt/apigee/etc/apigee-qpidd.d
5. Re-install Qpidd:
6. /opt/apigee/apigee-service/bin/apigee-service apigee-qpidd install
7. Restore Qpidd:
8. /opt/apigee/apigee-service/bin/apigee-service apigee-qpidd restore
2019.03.17,14.40.41
9. Note that when restoring a component, you do not specify the directory path
to the backup file, nor do you specify the "backup-" prefix or the ".tar.gz" suffix.
You only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
10. Start Qpidd:
11. /opt/apigee/apigee-service/bin/apigee-service apigee-qpidd start
12. Re-install Qpid Server:
13. /opt/apigee/apigee-service/bin/apigee-service edge-qpid-server install
14. Restore Qpid Server:
15. /opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restore
2019.03.17,14.40.41
16. Note that when restoring a component, you do not specify the directory path
to the backup file, nor do you specify the "backup-" prefix or the ".tar.gz" suffix.
You only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
17. Restart Qpid Server, Qpidd, and Postgres Server on all nodes:
/opt/apigee/apigee-service/bin/apigee-service apigee-qpidd restart
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart
OpenLDAP
1. Stop OpenLDAP:
2. /opt/apigee/apigee-service/bin/apigee-service apigee-openldap stop
3. Re-install OpenLDAP:
4. /opt/apigee/apigee-service/bin/apigee-service apigee-openldap install
5. Remove old OpenLDAP directories:
6. /opt/apigee/data/apigee-openldap /opt/apigee/etc/apigee-openldap.d
7. Restore OpenLDAP:
8. /opt/apigee/apigee-service/bin/apigee-service apigee-openldap restore
2019.03.17,14.40.41
9. Note that when restoring a component, you do not specify the directory path
to the backup file, nor do you specify the "backup-" prefix or the ".tar.gz" suffix.
You only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
10. Restart OpenLDAP:
11. /opt/apigee/apigee-service/bin/apigee-service apigee-openldap start
12. Restart all Management Servers:
13. /opt/apigee/apigee-service/bin/apigee-service edge-management-server
restart
Management Server
1. Stop Management Server:
2. /opt/apigee/apigee-service/bin/apigee-service edge-management-server stop
3. Remove old Management Server directories:
4. /opt/apigee/data/edge-management-server /opt/apigee/etc/edge-
management-server.d
5. Re-install Management Server:
6. /opt/apigee/apigee-service/bin/apigee-service edge-management-server
install
7. Restore Management Server from the backup file:
8. /opt/apigee/apigee-service/bin/apigee-service edge-management-server
restore 2019.03.17,14.40.41
9. Note that when restoring a component, you do not specify the directory path
to the backup file, nor do you specify the "backup-" prefix or the ".tar.gz" suffix.
You only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
10. Restart Management Server:
11. /opt/apigee/apigee-service/bin/apigee-service edge-management-server start
Message Processor
1. Stop Message Processor:
2. /opt/apigee/apigee-service/bin/apigee-service edge-message-processor stop
3. Remove old Message Processor directories:
/opt/apigee/data/edge-message-processor
4. /opt/apigee/etc/edge-message-processor.d
5. Re-install Message Processor:
6. /opt/apigee/apigee-service/bin/apigee-service edge-message-processor
install
7. Restore Message Processor from the backup file:
8. /opt/apigee/apigee-service/bin/apigee-service edge-message-processor
restore 2019.03.17,14.40.41
9. Note that when restoring a component, you do not specify the directory path
to the backup file, nor do you specify the "backup-" prefix or the ".tar.gz" suffix.
You only specify the date/time part of the backup file's name.
You can optionally omit the backup file in the restore command and Edge
will use the most recent backup file in the component's backup directory.
10. Restart Message Processor:
11. /opt/apigee/apigee-service/bin/apigee-service edge-message-processor start
Router
1. Stop Router:
2. /opt/apigee/apigee-service/bin/apigee-service edge-router stop
3. Remove old Router directories:
/opt/apigee/data/edge-router
4. /opt/apigee/etc/edge-router.d
5. Re-install Router:
Edge UI
1. Stop UI:
2. /opt/apigee/apigee-service/bin/apigee-service edge-ui stop
3. Remove old UI directories:
/opt/apigee/data/edge-ui
4. /opt/apigee/etc/edge-ui.d
5. Re-install UI:
MONITORING
What to monitor
In a production setup, you generally need to enable monitoring mechanisms
within an Apigee Edge for Private Cloud deployment. These monitoring techniques
warn the network administrators (or operators) of an error or a failure. Every error
generated is reported as an alert in Apigee Edge. For more information on alerts, see
Monitoring Best Practices.
For convenience, Apigee components are classified mainly into two categories:
● Apigee-specific Java Server Services: These include Management Server,
Message Processor, Qpid Server, and Postgres Server.
● Third-party Services: These include NGINX Router, Apache Cassandra,
Apache ZooKeeper, OpenLDAP, PostgreSQL database, and Qpid.
In general, after Apigee Edge has been installed, you can perform the following
monitoring tasks to track the performance of an Apigee Edge for Private Cloud
installation.
Processes/application checks
At the process level, you can view important information about all running
processes, such as the memory and CPU usage of each process or application. For
processes such as qpidd, postgres postmaster, and java, you can monitor the
following:
API-level checks
At the API level, you can monitor whether a server is up and running for frequently
used API calls proxied by Apigee. For example, you can perform API check on the
Management Server, Router, and Message Processor by invoking the following curl
command:
curl http://host:port/v1/servers/self/up
Where host is the IP address of the Apigee Edge component. The port number is
specific to each Edge component. For example:
● Router: 8081
● Message Processor: 8082
● etc.
See the individual sections below for information on running this command for each
component.
This call returns either "true" or "false". For best results, you can also issue API calls
directly on the backend (with which Apigee software interacts) in order to quickly
determine whether an error exists within the Apigee software environment or on the
backend.
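The up check described above can be wrapped in a small script. The following is a minimal sketch; the function name and the UP/DOWN labels are illustrative, not part of the product:

```shell
# Query an Edge component's /v1/servers/self/up endpoint and report a
# one-word status. The Management API port differs per component
# (for example, 8081 for the Router, 8082 for the Message Processor).
check_up() {
  local host="$1" port="$2" body
  body="$(curl -s --max-time 5 "http://${host}:${port}/v1/servers/self/up")" || return 1
  if [ "$body" = "true" ]; then
    echo "UP"
  else
    echo "DOWN"
  fi
}
```

For example, `check_up 10.0.0.5 8081` prints UP when the Router on that host answers "true".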
You can collect data from Routers and Message Processors about message flow
pattern/statistics. This allows you to monitor the following:
● Number of active clients
● Number of responses (10X, 20X, 30X, 40X and 50X)
● Connect failures
This helps you to provide dashboards for the API message flow. For more, see How
to Monitor.
You can monitor the Router log file for these messages. For example, if the Router
takes a Message Processor out of rotation, it writes a message to the log in the form:
Where MP_IP:PORT is the IP address and port number of the Message Processor.
If the Router later performs a health check and determines that the Message
Processor is functioning properly, the Router automatically puts the Message
Processor back into rotation and writes a "Mark Up" message to the log in the form:
Overview
Edge supports several ways of getting details about services and checking their
statuses. For each eligible service (Management Server, Message Processor,
Router, Qpid, and Postgres), you can perform checks via JMX*, the Management API,
and apigee-monit**.
* Before you can use JMX, you must enable it, as described in Enable JMX.
** The apigee-monit service checks if a component is up and will attempt to restart it if it isn't. For more information, see Self healing with
apigee-monit.
For each component, note its configuration file location, JMX port, and Management
API port.
Enable JMX
Authentication in JMX
Edge for Private Cloud supports password-based authentication using details stored
in files. You can store passwords as a hash for added security.
2. conf_system_jmxremote_password_file=/opt/apigee/customer/application/
management-server/jmxremote.password
3. Save the configuration file and make sure it is owned by apigee:apigee.
4. Create a SHA256 hash of the password:
5. echo -n 'your_password' | openssl dgst -sha256
6. Create a jmxremote.password file with JMX user credentials:
a. Copy the following file from your $JAVA_HOME directory to the
directory /opt/apigee/customer/application/<component>/:
b. cp ${JAVA_HOME}/lib/management/jmxremote.password.template
$APIGEE_ROOT/customer/application/management-server/jmxremote.password
c. Edit the file and add your JMX username and password using the
following syntax:
d. USERNAME <HASH-PASSWORD>
e. Make sure the file is owned by apigee and that the file mode is 400:
chown apigee:apigee
$APIGEE_ROOT/customer/application/management-server/jmxremote.password
chmod 400
$APIGEE_ROOT/customer/application/management-server/jmxremote.password
SSL in JMX
2. conf_system_javax_net_ssl_keystorepassword=<keystore-password>
3. Save the configuration file and make sure it is owned by apigee:apigee.
4. Prepare a keystore containing the server key and place it at the path provided
in the configuration conf_system_javax_net_ssl_keystore above.
Ensure the keystore file is readable by apigee:apigee.
5. Restart the appropriate Edge component:
6. apigee-service edge-management-server restart
Use JConsole (a JMX compliant tool) to manage and monitor health check and
process statistics. With JConsole, you can consume JMX statistics exposed by your
servers and display them in a graphical interface. For more information, see Using
JConsole.
You need to start JConsole with truststore and truststore password if SSL is enabled
for JMX. See Using JConsole.
JConsole uses the following service URL to monitor the JMX attributes (MBeans)
offered via JMX:
service:jmx:rmi:///jndi/rmi://IP_address:port_number/jmxrmi
Where:
For example, to monitor the Management Server, issue a command like the following
(assuming the server's IP address is 216.3.128.12):
service:jmx:rmi:///jndi/rmi://216.3.128.12:1099/jmxrmi
Note that this example specifies port 1099, which is the Management Server JMX
port. For other ports, see JMX and Management API monitoring ports.
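The service URL pattern can be expressed as a tiny helper; a sketch (the port values come from the component table, e.g. 1099 for the Management Server):

```shell
# Build the JConsole/JMX service URL for a given host and JMX port.
jmx_url() {
  printf 'service:jmx:rmi:///jndi/rmi://%s:%s/jmxrmi\n' "$1" "$2"
}
```

For example, `jmx_url 216.3.128.12 1099` reproduces the Management Server URL shown above.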
Memory Usage attributes: HeapMemoryUsage, NonHeapMemoryUsage
NOTE
Each attribute is reported as four values: committed, init, max, and used.
The following sections describe changes you might need to make to Edge
component configuration files for JMX related configurations. See Monitoring ports
and configuration files for more information.
The Management API provides several endpoints for monitoring and diagnosing
issues with your services. These endpoints include:
● /servers — Checks to see if a service is running. This API call does not
require you to authenticate.
● /self/up — If the service is running, this endpoint returns the following
response:
<ServerField>
<Up>true</Up>
</ServerField>
If the service is not running, you will get a response similar to the
following (depending on which service it is and how you checked it):
To use these endpoints, invoke a utility such as curl with commands that use the
following syntax:
Where:
● host is the IP address of the server you want to check. If you are logged into
the server, you can use "localhost"; otherwise, specify the IP address of the
server as well as the username and password.
● port_number is the Management API port for the server you want to check.
This is a different port for each type of component. For example, the
Management Server's Management API port is 8080. For a list of
Management API port numbers to use, see JMX and Management API
monitoring ports
To change the format of the response, you can specify the Accept header as
"application/json" or "application/xml".
The following example gets the status of the Router on localhost (port 8081):
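A sketch of such a request (assuming the Router's default Management API port, 8081, and the Accept header described above; verify the exact form against the original documentation):

```shell
# Get the Router's status on localhost (Management API port 8081),
# requesting a JSON-formatted response.
router_status() {
  curl -s -H "Accept: application/json" \
    "http://localhost:8081/v1/servers/self/up"
}
```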
You can use the Management API to monitor user, organization, and deployment
status of your proxies on Management Servers and Message Processors by issuing
the following commands:
This call requires you to authenticate with your system administration username and
password.
The server should return a "deployed" status for all calls. If these fail, do the
following:
1. Check the server logs for any errors. The logs are located at:
● Management Server: /opt/apigee/var/log/edge-management-
server
● Message Processor: /opt/apigee/var/log/edge-message-
processor
2. Make a call against the server to check whether it is functioning properly.
3. Remove the server from the ELB and then restart it:
4. /opt/apigee/apigee-service/bin/apigee-service service_name restart
5. Where service_name is:
● edge-management-server
● edge-message-processor
Postgres monitoring
Postgres supports several utilities that you can use to check its status. These
utilities are described in the sections that follow.
You can check for organization and environment names that are onboarded on the
Postgres Server by issuing the following curl command:
curl -v http://postgres_IP:8084/v1/servers/self/organizations
You can verify the status of the Postgres and Qpid analytics servers by issuing the
following curl command:
curl -u userEmail:password
http://host:port_number/v1/organizations/orgname/environments/envname/
provisioning/axstatus
The system should display a success status for all analytics servers, as the following
example shows:
{
"environments" : [ {
"components" : [ {
"message" : "success at Thu Feb 28 10:27:38 CET 2013",
"name" : "pg",
"status" : "SUCCESS",
"uuid" : "[c678d16c-7990-4a5a-ae19-a99f925fcb93]"
}, {
"message" : "success at Thu Feb 28 10:29:03 CET 2013",
"name" : "qs",
"status" : "SUCCESS",
"uuid" : "[ee9f0db7-a9d3-4d21-96c5-1a15b0bf0adf]"
} ],
"message" : "",
"name" : "prod"
} ],
"organization" : "acme",
"status" : "SUCCESS"
}
PostgreSQL database
This section describes techniques that you can use specifically for monitoring the
Postgres database.
To monitor the PostgreSQL database, you can use a standard monitoring script,
check_postgres.pl. For more information, see
http://bucardo.org/wiki/Check_postgres.
check_postgres.pl output
You can verify that the proper tables are created in the PostgreSQL database. Log in
to the PostgreSQL database using the following command:
Then run:
\d analytics."org.env.fact"
You can perform API checks on the Postgres machine by invoking the following
curl command:
curl -v http://postgres_IP:8084/v1/servers/self/health
NOTE
Ensure that you use port 8084.
This command returns the ACTIVE status when the Postgres process is active. If
the Postgres process is not up and running, it returns the INACTIVE status.
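A sketch that wraps this health check and distinguishes the two documented statuses (the substring matching is an assumption about the response shape; adjust if your response is wrapped in JSON):

```shell
# Check the Postgres component's health endpoint (port 8084, per the note
# above) and succeed only when it reports ACTIVE.
pg_health() {
  local host="$1" status
  status="$(curl -s "http://${host}:8084/v1/servers/self/health")" || return 1
  case "$status" in
    *INACTIVE*) echo "INACTIVE"; return 1 ;;  # test INACTIVE before ACTIVE
    *ACTIVE*)   echo "ACTIVE" ;;
    *)          echo "UNKNOWN: ${status}"; return 1 ;;
  esac
}
```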
Postgres resources
For additional information about monitoring the Postgres service, see the following:
● http://www.postgresql.org/docs/9.0/static/monitoring.html
● http://www.postgresql.org/docs/9.0/static/diskusage.html
● http://bucardo.org/check_postgres/check_postgres.pl.html
Apache Cassandra
JMX is enabled by default for Cassandra and remote JMX access to Cassandra does
not require a password.
You can enable JMX authentication for Cassandra. After doing so, you will then be
required to pass a username and password to all calls to the nodetool utility.
export CASS_JMX_USERNAME=JMX_USERNAME
d. export CASS_JMX_PASSWORD=JMX_PASSWORD
e. Save the jmx_auth.sh file.
f. Source the file:
g. source /opt/apigee/customer/application/jmx_auth.sh
4. Copy and edit the jmxremote.password file:
a. Copy the following file from your $JAVA_HOME directory to
/opt/apigee/customer/application/apigee-cassandra/:
b. cp ${JAVA_HOME}/lib/management/jmxremote.password.template
$APIGEE_ROOT/customer/application/apigee-cassandra/jmxremote.p
assword
c. Edit the jmxremote.password file and add your JMX username and
password using the following syntax:
d. JMX_USERNAME JMX_PASSWORD
e. Where JMX_USERNAME and JMX_PASSWORD are the JMX username
and password you set previously.
f. Make sure the file is owned by "apigee" and that the file mode is 400:
chown apigee:apigee
/opt/apigee/customer/application/apigee-cassandra/jmxremote.password
g. chmod 400
/opt/apigee/customer/application/apigee-cassandra/jmxremote.password
5. Copy and edit the jmxremote.access file:
a. Copy the following file from your $JAVA_HOME directory to
/opt/apigee/customer/application/apigee-cassandra/:
b. cp ${JAVA_HOME}/lib/management/jmxremote.access
$APIGEE_ROOT/customer/application/apigee-cassandra/jmxremote.access
c. Edit the jmxremote.access file and add the following role:
d. JMX_USERNAME readwrite
e. Make sure the file is owned by "apigee" and that the file mode is 400:
chown apigee:apigee
/opt/apigee/customer/application/apigee-cassandra/jmxremote.access
f. chmod 400
/opt/apigee/customer/application/apigee-cassandra/jmxremote.access
6. Run configure on Cassandra:
7. /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra configure
8. Restart Cassandra:
9. /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra restart
10. Repeat this process on all other Cassandra nodes.
Enabling JMX with SSL provides additional security and encryption for JMX-based
communication with Cassandra. To enable JMX with SSL, you need to provide a key
and a certificate to Cassandra to accept SSL-based JMX connections. You also need
to configure nodetool (and any other tools that communicate with Cassandra over
JMX) for SSL.
To enable JMX with SSL for Cassandra, use the following procedure:
chown apigee:apigee
/opt/apigee/customer/application/apigee-cassandra/keystore.node1
● chmod 400 /opt/apigee/customer/application/apigee-cassandra/keystore.node1
6. Configure Cassandra for JMX with SSL by doing the following steps:
● Stop the Cassandra node by entering
● apigee-service apigee-cassandra stop
● Enable SSL in Cassandra by opening the file
/opt/apigee/customer/application/cassandra.properties and adding
the following lines:
conf_cassandra_env_com.sun.management.jmxremote.ssl=true
conf_cassandra_env_javax.net.ssl.keyStore=/opt/apigee/customer/application/
apigee-cassandra/keystore.node1
● conf_cassandra_env_javax.net.ssl.keyStorePassword=keystore-password
● Change the owner of the file to apigee:apigee, as the following example
shows:
● chown apigee:apigee
/opt/apigee/customer/application/cassandra.properties
● Run configure on Cassandra:
● /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra
configure
● Restart Cassandra:
● /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra
restart
● Repeat this process on all other Cassandra nodes.
● Start the Cassandra node by entering
● apigee-service apigee-cassandra start
7. Configure the apigee-service Cassandra commands. You need to set
certain environment variables while running apigee-service commands,
including the ones below:
apigee-service apigee-cassandra stop
apigee-service apigee-cassandra wait_for_ready
apigee-service apigee-cassandra ring
15. -Dcom.sun.management.jmxremote.registry.ssl=true
16. Make sure the truststore file is readable by the Apigee user.
Run the following apigee-service command. If it runs without error, your
configurations are correct.
17. apigee-service apigee-cassandra ring
18. Option 2 (SSL arguments stored in environment variables)
Set the following environment variables:
export CASS_JMX_USERNAME=ADMIN
# Provide encrypted password here if you have setup JMX password encryption
export CASS_JMX_PASSWORD=PASSWORD
export CASS_JMX_SSL=Y
# Ensure the truststore file is accessible by Apigee user.
export CASS_JMX_TRUSTSTORE=<path-to-truststore.node1>
28. -Dcom.sun.management.jmxremote.registry.ssl=true
29. The truststore path specified above should be accessible by any user running
nodetool.
Run nodetool with the --ssl option.
30. /opt/apigee/apigee-cassandra/bin/nodetool --ssl -u <jmx-user-name> -pw
<jmx-user-password> -h localhost ring
31. Configuration option 2
Run nodetool as a single command with the extra parameters listed below.
32. /opt/apigee/apigee-cassandra/bin/nodetool -Djavax.net.ssl.trustStore=<path-
to-truststore.node1> -Djavax.net.ssl.trustStorePassword=<truststore-
password> -Dcom.sun.management.jmxremote.registry.ssl=true -
Dssl.enable=true -u <jmx-user-name> -pw <jmx-user-password> -h localhost
ring
If you need to revert the SSL configurations described in the procedure above, do the
following steps:
5. # JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.registry.ssl=true"
6. Start apigee-cassandra by entering
7. apigee-service apigee-cassandra start
8. Remove the environment variable CASS_JMX_SSL if it was set.
9. unset CASS_JMX_SSL
10. Check that apigee-service based commands like ring, stop, backup,
and so on, are working.
11. Stop using the --ssl switch with nodetool.
Use JConsole and the following service URL to monitor the JMX attributes (MBeans)
offered via JMX:
service:jmx:rmi:///jndi/rmi://IP_address:7199/jmxrmi
MBeans:
● ColumnFamilies/apprepo/environments
● ColumnFamilies/apprepo/organizations
● ColumnFamilies/apprepo/apiproxy_revisions
● ColumnFamilies/apprepo/apiproxies
● ColumnFamilies/audit/audits
● ColumnFamilies/audit/audits_ref
Attributes:
● PendingTasks
● MemtableColumnsCount
● MemtableDataSize
● ReadCount
● RecentReadLatencyMicros
● TotalReadLatencyMicros
● WriteCount
● RecentWriteLatencyMicros
● TotalWriteLatencyMicros
● TotalDiskSpaceUsed
● LiveDiskSpaceUsed
● LiveSSTableCount
● BloomFilterFalsePositives
● RecentBloomFilterFalseRatio
● BloomFilterFalseRatio
The nodetool utility is a command line interface for Cassandra that manages cluster
nodes. The utility can be found at /opt/apigee/apigee-cassandra/bin.
1. General ring info (also possible for single Cassandra node): Look for the "Up"
and "Normal" for all nodes.
2. nodetool [-u username -pw password] -h localhost ring
3. You only need to pass your username and password if you enabled JMX
authentication for Cassandra.
The output of the above command looks as shown below:
4. Datacenter: dc-1
5. ==========
6. Address Rack Status State Load Owns Token
192.168.124.201 ra1 Up Normal 1.67 MB 33.33% 0
192.168.124.202 ra1 Up Normal 1.68 MB 33.33% 5671...5242
192.168.124.203 ra1 Up Normal 1.67 MB 33.33% 1134...0484
7. General info about nodes (call per node)
8. nodetool [-u username -pw password] -h localhost info
9. The output of the above command looks like the following:
10. ID : e2e42793-4242-4e82-bcf0-oicu812
11. Gossip active : true
12. Thrift active : true
Native Transport active: true
Load : 273.71 KB
Generation No : 1234567890
Uptime (seconds) : 687194
Heap Memory (MB) : 314.62 / 3680.00
Off Heap Memory (MB) : 0.14
Data Center : dc-1
Rack : ra-1
Exceptions :0
Key Cache : entries 150, size 13.52 KB, capacity 100 MB, 1520781 hits,
1520923 requests, 1.000 recent hit rate, 14400 save period in seconds
Row Cache : entries 0, size 0 bytes, capacity 0 bytes, 0 hits, 0 requests,
NaN recent hit rate, 0 save period in seconds
Counter Cache : entries 0, size 0 bytes, capacity 50 MB, 0 hits, 0 requests,
NaN recent hit rate, 7200 save period in seconds
Token :0
13. Status of the thrift server (serving client API)
14. nodetool [-u username -pw password] -h localhost statusthrift
15. The output of the above command looks like the following:
16. running
17. Status of data streaming operations: Observe traffic for cassandra nodes:
18. nodetool [-u username -pw password] -h localhost netstats
19. The output of the above command looks like the following:
Cassandra resource
Apache ZooKeeper
Check ZooKeeper status
1. Ensure the ZooKeeper process is running. ZooKeeper writes a PID file to
/opt/apigee/var/run/apigee-zookeeper/apigee-zookeeper.pid.
2. Test ZooKeeper ports to ensure that you can establish a TCP connection to
ports 2181 and 3888 on every ZooKeeper server.
3. Ensure that you can read values from the ZooKeeper database. Connect using
a ZooKeeper client library (or
/opt/apigee/apigee-zookeeper/bin/zkCli.sh) and read a value
from the database.
4. Check the status:
5. /opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper status
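Step 2's port test can be scripted with netcat. A sketch (the function name and open/closed labels are illustrative):

```shell
# Verify TCP connectivity to ZooKeeper's client port (2181) and
# leader-election port (3888) using netcat's zero-I/O mode (-z).
zk_port_check() {
  local host="$1" port rc=0
  for port in 2181 3888; do
    if nc -z -w 2 "$host" "$port"; then
      echo "${port} open"
    else
      echo "${port} closed"
      rc=1
    fi
  done
  return $rc
}
```

Run it once per ZooKeeper server, e.g. `zk_port_check zk1.example.com`.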
ZooKeeper can be monitored via a small set of commands (four-letter words) that
are sent to port 2181 using netcat (nc) or telnet.
For more info on ZooKeeper commands, see: Apache ZooKeeper command
reference.
For example:
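One such check can be sketched with nc; ruok/imok is ZooKeeper's standard liveness probe, and a healthy server answers "imok":

```shell
# Send a ZooKeeper four-letter-word command (e.g. ruok, stat, mntr) to
# port 2181 and print the response.
zk_cmd() {
  local word="$1" host="${2:-localhost}"
  echo "$word" | nc -w 2 "$host" 2181
}
```

For example, `zk_cmd ruok` prints "imok" when the local server is serving requests.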
You can also monitor the OpenLDAP caches, which help in reducing the number of
disk accesses and hence improve the performance of the system. Monitoring and
then tuning the cache size in the OpenLDAP server can heavily impact the
performance of the directory server. You can view the log files
(/opt/apigee/var/log) to obtain information about the cache.
NOTE: apigee-monit is not supported on all platforms. Before continuing, check the list of
compatible platforms.
To use apigee-monit, you must install it manually. It is not part of the standard
installation.
WARNING
apigee-monit attempts to restart a component every time the component fails a check. A
component fails a check when it is disabled, but also when you purposefully stop it, such as
during an upgrade, installation, or any task that requires you to stop a component.
As a result, you must stop monitoring the component before executing any procedure that
requires you to install, upgrade, stop, or restart a component.
Quick start
This section shows you how to quickly get up and running with apigee-monit.
If you are using Amazon Linux, you may first need to install the Fedora EPEL
repository. Otherwise, skip this step.
Install apigee-monit
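The installation commands were elided from this copy. Assuming apigee-monit follows the same apigee-service verbs as the other components (verify against the installation guide for your version), the sequence can be sketched as:

```shell
# Install and start apigee-monit via the standard apigee-service wrapper.
# APIGEE_SERVICE is parameterized only to make the sketch easy to dry-run.
APIGEE_SERVICE="${APIGEE_SERVICE:-/opt/apigee/apigee-service/bin/apigee-service}"

install_monit() {
  "$APIGEE_SERVICE" apigee-monit install &&
  "$APIGEE_SERVICE" apigee-monit start
}
```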
cat /opt/apigee/var/log/apigee-monit/apigee-monit.log
Each of these topics and others are described in detail in the sections that follow.
About apigee-monit
apigee-monit helps ensure that all components on a node stay up and running. It
does this by providing a variety of services, including:
apigee-monit architecture
During your Apigee Edge for Private Cloud installation and configuration, you
optionally install a separate instance of apigee-monit on each node in your
cluster. These separate apigee-monit instances operate independently of one
another: they do not communicate the status of their components to the other
nodes, nor do they communicate failures of the monitoring utility itself to any central
service.
Supported platforms
apigee-monit supports the following platforms for your Private Cloud cluster. (The
supported OS for apigee-monit depends on the version of Private Cloud.)
● RedHat Enterprise Linux (RHEL), by Private Cloud version: 6.9*, 7.4, 7.5,
7.6, 7.7 / 7.5, 7.6, 7.7 / 7.5, 7.6, 7.7, 7.8, 7.9
● Oracle Linux, by Private Cloud version: 6.9*, 7.4, 7.5, 7.6, 7.7 / 7.5, 7.6,
7.7 / 7.5, 7.6, 7.7, 7.8
* While not technically supported, you can install and use apigee-monit on CentOS/RHEL/Oracle version 6.9 for Apigee Edge for
Private Cloud.
Component configurations
TERMINOLOGY
A component configuration specifies the service to check and the action to take in the event of a
failure. Component configurations can also define how often to check a service and the number
of times to retry a service before triggering a failure response.
By default, apigee-monit monitors all Edge components on a node using their pre-
defined component configurations. To view the default settings, you can look at the
apigee-monit component configuration files. You cannot change the default
component configurations.
For example, the Router's component configuration is located at:
/opt/apigee/edge-router/monit/default.conf
The following example shows the default component configuration for the edge-
router component:
The following example shows the default configuration for the Classic UI (edge-ui)
component:
This applies to the Classic UI, not the new Edge UI whose component name is edge-
management-ui.
You cannot change the default component configurations for any Apigee Edge for
Private Cloud component. You can, however, add your own component
configurations for external services, such as your target endpoint or the httpd
service. For more information, see Non-Apigee component configurations.
Install apigee-monit
apigee-monit is not installed by default; you can install it manually after upgrading
or installing version 4.19.01 or later of Apigee Edge for Private Cloud.
This can cause a problem if you want to purposefully stop a component. For
example, you might want to stop a component when you need to back it up or
upgrade it. If apigee-monit restarts the service during the backup or upgrade, your
maintenance procedure can be disrupted, possibly causing it to fail.
TIP
To prevent apigee-monit from causing failures during a planned outage, stop monitoring the
component with the stop-component option when you perform the following operations:
● Stop a component
● Install or upgrade a component
● Restart a component
The following sections show the options for stopping the monitoring of components.
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
Note that "all" is not a valid option for stop-component. You can stop and
unmonitor only one component at a time with stop-component.
To re-start the component and resume monitoring, execute the following command:
For instructions on how to stop and unmonitor all components, see Stop all
components and unmonitor them.
To unmonitor a component (but don't stop it), execute the following command:
● apigee-cassandra (Cassandra)
● apigee-openldap (OpenLDAP)
● apigee-postgresql (PostgreSQL database)
● apigee-qpidd (Qpidd)
● apigee-sso (Edge SSO)
● apigee-zookeeper (ZooKeeper)
● edge-management-server (Management Server)
● edge-management-ui (new Edge UI)
● edge-message-processor (Message Processor)
● edge-postgres-server (Postgres Server)
● edge-qpid-server (Qpid Server)
● edge-router (Edge Router)
● edge-ui (Classic UI)
To unmonitor all components (but don't stop them), execute the following command:
To stop all components and unmonitor them, execute the following commands:
/opt/apigee/apigee-service/bin/apigee-service apigee-monit unmonitor -c all
/opt/apigee/apigee-service/bin/apigee-all stop
To start all components again and resume monitoring, execute the following
commands:
/opt/apigee/apigee-service/bin/apigee-all start
/opt/apigee/apigee-service/bin/apigee-service apigee-monit monitor -c all
Stop apigee-monit
NOTE: If you use cron to watch apigee-monit, you must disable it before stopping
apigee-monit, as described in Monitor apigee-monit.
Start apigee-monit
Disable apigee-monit
You can suspend monitoring all components on the node by using the following
command:
To uninstall apigee-monit:
1. If you set up a cron job to monitor apigee-monit, remove the cron job
before uninstalling apigee-monit:
2. sudo rm /etc/cron.d/apigee-monit.cron
3. Stop apigee-monit with the following command:
4. /opt/apigee/apigee-service/bin/apigee-service apigee-monit stop
5. Uninstall apigee-monit with the following command:
6. /opt/apigee/apigee-service/bin/apigee-service apigee-monit uninstall
7. Repeat this procedure on each node in your cluster.
Customize apigee-monit
You can customize various apigee-monit settings, including:
You can customize the default apigee-monit control settings such as the
frequency of status checks and the locations of the apigee-monit files. You do
this by editing a properties file using the code-with-config technique. Properties
files persist even after you upgrade Apigee Edge for Private Cloud.
The following are the default apigee-monit control settings that you can
customize:
● conf_monit_httpd_port: The httpd daemon's port. apigee-monit uses
httpd for its dashboard app and to enable reports/summaries. The default
value is 2812.
● conf_monit_monit_delay_time: The amount of time that apigee-monit
waits after it is first loaded into memory before it runs. This affects only
apigee-monit's first process check.
● conf_monit_monit_rundir: The location of the PID and state files, which
apigee-monit uses for checking processes.
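Under code-with-config, such overrides live in a properties file that survives upgrades. A hypothetical sketch — the file path and the override values shown are assumptions, not defaults to copy blindly:

```
# e.g. /opt/apigee/customer/application/monit.properties (assumed path)
conf_monit_httpd_port=2812
conf_monit_monit_delay_time=120
conf_monit_monit_rundir=/opt/apigee/var/run/apigee-monit
```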
You can define global configuration settings for apigee-monit; for example, you
can add email notifications for alerts. You do this by creating a configuration file in
the /opt/apigee/data/apigee-monit directory and then restarting apigee-
monit.
SET MAIL-FORMAT {
from: edge-alerts@example.com
subject: Monit Alert -- Service: $SERVICE $EVENT on $HOST
}
SET ALERT fred@example.com
SET ALERT nancy@example.com
8. For a complete list of global configuration options, see the monit documentation.
9. Save your changes to the component configuration file.
10. Reload apigee-monit with the following command:
/opt/apigee/apigee-service/bin/apigee-service apigee-monit reload
11. If apigee-monit does not restart, check the log file for errors as described in Access apigee-monit log files.
12. Repeat this procedure for each node in your cluster.
You can add your own configurations to apigee-monit so that it will check
services that are not part of Apigee Edge for Private Cloud. For example, you can use
apigee-monit to check that your APIs are running by sending requests to your
target endpoint.
Note that this is for non-Edge components only. You cannot customize the
component configurations for Edge components.
Access apigee-monit log files
apigee-monit logs all activity, including events, restarts, configuration changes,
and alerts in a log file.
/opt/apigee/var/log/apigee-monit/apigee-monit.log
You can change the default location by customizing the apigee-monit control
settings.
You cannot customize the format of the apigee-monit log file entries.
Command Usage
Each of these commands is explained in more detail in the sections that follow.
report
The report command gives you a rolled-up summary of how many components are up, down, currently being initialized, or currently unmonitored on a node. The following example invokes the report command:
You may get a Connection refused error when you first execute the report
command. In this case, wait for the duration of the
conf_monit_monit_delay_time property, and then try again.
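The invocation follows the apigee-service command pattern used throughout this document; a sketch:

```
/opt/apigee/apigee-service/bin/apigee-service apigee-monit report
```

The command prints counts for up, down, initializing, and unmonitored components on the local node.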
summary
The summary command lists each component and provides its status. The following
example invokes the summary command:
If you get a Connection refused error when you first execute the summary
command, try waiting the duration of the conf_monit_monit_delay_time
property, and then try again.
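As with report, the summary action uses the same apigee-service pattern; a sketch:

```
/opt/apigee/apigee-service/bin/apigee-service apigee-monit summary
```

The output lists each monitored component with its current status.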
Monitor apigee-monit
It is best practice to regularly check that apigee-monit is running on each node.
To check that apigee-monit is running, use the following command:
Apigee recommends that you issue this command periodically on each node that is
running apigee-monit. One way to do this is with a utility such as cron that
executes scheduled tasks at pre-defined intervals.
If you want to stop or temporarily disable apigee-monit, you must disable this
cron job, too, otherwise cron will restart apigee-monit.
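A cron entry of the kind described above can be sketched as follows. The recovery command is an assumption, not the documented content of the file; adapt it to your environment:

```
# /etc/cron.d/apigee-monit.cron (sketch)
# Every 5 minutes: if apigee-monit is not responding, start it.
*/5 * * * * root /opt/apigee/apigee-service/bin/apigee-service apigee-monit status >/dev/null 2>&1 || /opt/apigee/apigee-service/bin/apigee-service apigee-monit start
```

Remember to remove or disable this entry before intentionally stopping apigee-monit.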
Set a threshold after which an alert is generated. The value you choose depends on your hardware configuration; set the threshold in relation to your capacity. For example, a threshold sized for a large node might be too low if you only have 6 GB of capacity. You can assign the threshold with an equal-to (=) or greater-than (>) criterion. You can also specify a time interval between two consecutive alert generations, using hours, minutes, or seconds.
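In monit's native configuration syntax, a threshold rule with an alert interval might be sketched as follows. The values are illustrative, not recommendations:

```
check system localhost
  # Alert when memory usage stays above 80% for two consecutive cycles.
  if memory usage > 80% for 2 cycles then alert
# Remind at most once every 10 cycles rather than on every failed check.
set alert ops@example.com with reminder on 10 cycles
```

Tune the percentage and cycle counts to your node's actual capacity.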
● Ports 4526, 4527, and 4528 on the Management Server, Router, and Message Processor
● Ports 1099, 1100, and 1101 on the Management Server, Router, and Message Processor
● Ports 8081 and 15999 on Routers
● Ports 8082 and 8998 on Message Processors
● Port 8080 on the Management Server
To determine the port on which each Apigee component listens for API calls, issue the following API call to the Management Server (which generally listens on port 8080):
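A sketch of such a call (the credentials are placeholders, and the /v1/servers/self endpoint is assumed from the Edge management API; adjust the host and port for your deployment):

```
curl -u adminEmail:adminPassword "http://ms_IP:8080/v1/servers/self"
```

The response resembles the JSON shown below; the tags array lists the RPC, HTTP management, and JMX ports for the component.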
"externalHostName" : "localhost",
"externalIP" : "111.222.333.444",
"internalHostName" : "localhost",
"internalIP" : "111.222.333.444",
"isUp" : true,
"pod" : "gateway",
"reachable" : true,
"region" : "default",
"tags" : {
"property" : [ {
"name" : "Profile",
"value" : "Router"
}, {
"name" : "rpc.port",
"value" : "4527"
}, {
"name" : "http.management.port",
"value" : "8081"
}, {
"name" : "jmx.rmi.port",
"value" : "1100"
}]
},
"type" : [ "router" ],
"uUID" : "2d4ec885-e20a-4173-ae87-10be38b35750"
Viewing Logs
Log files record messages about the events and operations of the system. Messages appear in the log when processes begin and complete, or when an error condition occurs. By viewing log files, you can obtain information about system components (for example, CPU, memory, disk, load, and processes) before and after a failed state, which helps you identify and diagnose the source of current system problems or predict potential ones.
For example, a typical system log for a component contains entries like the following:
Memory
Threading
PeakThreadCount = 53 ; ThreadCount = 53 ;
OperatingSystem
SystemLoadAverage = 0.25 ;
By default, the logging mechanism checks for changes every minute. If you omit the time units from the scanPeriod attribute, it defaults to milliseconds.
Note: Setting the scanPeriod attribute to a short interval might affect system performance.
The following table lists the log file locations of Apigee Edge Private Cloud components:

Component     Location
Router        /opt/apigee/var/log/edge-router
Edge UI       /opt/apigee/var/log/edge-ui
ZooKeeper     /opt/apigee/var/log/apigee-zookeeper
OpenLDAP      /opt/apigee/var/log/apigee-openldap
Cassandra     /opt/apigee/var/log/apigee-cassandra
Qpidd         /opt/apigee/var/log/apigee-qpidd
● Stop monitoring a component before you perform any operation that starts or
stops it such as a backup or an upgrade.
● Monitor apigee-monit by using a tool such as cron. For more information,
see Monitor apigee-monit.
Monitoring Tools
Monitoring tools such as Nagios, Collectd, Graphite, Splunk, Sumo Logic, and Monit can help you monitor your entire enterprise environment and business processes.
PORTAL
Portal overview
Support for Apigee modules for Drupal 7-based developer portals will end on January 2, 2023.
While Drupal 7-based portals will continue to work, Apigee will not issue security patches for its
Drupal 7 modules or send PHP or Drupal 7 updates to OPDK customers after January 2, 2023.
Because Apigee Drupal 7 modules depend on PHP 7.4, you may also be
affected by the impending end of security patches for PHP 7.4 at the end of
November, 2022. Drupal 7 will reach its end-of-life date on November 23, 2023.
We encourage you to upgrade to Drupal 9. Please check with your team and
evaluate your upgrade options to avoid any potential security risks. For more
information on moving to Drupal 9, see Build your portal using Drupal 9.
Figure 1: Portal topology (1-node and 2-node configurations)
Portal requirements
Note: Do not install the portal on the same servers as Apigee Edge.
Apigee Developer Services portal (or simply, the portal) requires the
following minimal hardware and software:
Hardware Requirement
CPU 2-core
RAM 4 GB
The Red Hat requirements are checked during the install and the
portal installer prompts you if RHEL is not already registered. If you
already have Red Hat login credentials, you can use the following
command to register RHEL before beginning the installation process:
subscription-manager register --username=username --
password=password --auto-attach
If you have a trial version of RHEL, you can obtain a 30-day trial
license. See https://access.redhat.com/solutions/32790 for more
information.
SMTP requirements
Additional requirements
The portal has a single interface with the Apigee Management Server
via a REST API in order to store and retrieve information about a
user's applications. The portal must be able to connect to the
Management Server via HTTP or HTTPS, depending on your
installation.
Information required before you start the
install
Before starting the install, you must have the following information
available:
Before you install Apigee Developer Services portal (or simply, the
portal), ensure that:
1. You install Postgres before installing the portal. You can either
install Postgres as part of installing Edge, or install Postgres
standalone for use by the portal.
● If you install Postgres standalone, it can be on the
same node as the portal.
● If you are connecting to Postgres installed as part of
Edge, and Postgres is configured in master/standby
mode, specify the IP address of the master Postgres
server.
2. You are performing the install on the 64-bit version of a
supported version of Red Hat Enterprise Linux, CentOS, or
Oracle. See the list of supported versions at Supported
software and supported versions.
3. Yum is installed.
Installation overview
NOTE
Do not install the portal on the same servers as Apigee Edge.
To install the portal, you will perform the following steps. Each of
these steps is described in more detail in the sections that follow.
NOTE
or:
curl -u EMAIL:PASSWORD https://ms_IP_or_DNS:TLSPort/v1/organizations/
Where EMAIL and PASSWORD are the email address and password
of the administrator for ORGNAME.
{
"createdAt" : 1348689232699,
"createdBy" : "USERNAME",
"displayName" : "cg",
"environments" : [ "test", "prod" ],
"lastModifiedAt" : 1348689232699,
"lastModifiedBy" : "foo@bar.com",
"name" : "cg",
"properties" : {
"property" : [ ]
},
"type" : "trial"
}
The following packages present on your system conflict with software we are
about to install. You will need to manually remove each one, then re-run this install script.
php
php-cli
php-common
php-gd
php-mbstring
php-mysql
php-pdo
php-pear
php-pecl-apc
php-process
php-xml
Note: To remove PHP on CentOS 6.6, you might also have to remove Ganglia, if it is installed.
If you are not sure if PHP is installed on your server, use the following
command:
Note that the portal uses PHP version 4.18.01-0.0.49. This is not
intended to match the version number of Apigee Edge for Private
Cloud.
3. Install Postgres
NOTE
The portal requires Postgres to be installed before you can install the
portal. You can either install Postgres as part of installing Edge, or
install Postgres standalone for use by the portal.
Once enabled, you can see the available updates by using the
Reports > Available Updates menu item. You can also use the
following Drush command:
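The update-status check can be sketched with Drush 7's pm-updatestatus command (alias ups); treat the exact command name as an assumption for your Drush version:

```
cd /opt/apigee/apigee-drupal/wwwroot
drush pm-updatestatus
```

The command lists each installed module alongside the latest available release.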
You have to run this command from the root directory of the site. By
default, the portal is installed at
/opt/apigee/apigee-drupal/wwwroot. Therefore, you should
first change directory to /opt/apigee/apigee-drupal/wwwroot
before running the command. If you did not install the portal in the
default directory, change to your installation directory.
Use the Reports > Available Updates > Settings menu item to
configure the module to email you when updates are available and to
set the frequency for checking for updates.
If you decide to use Solr as your search engine, you must install Solr
locally on your server and then enable and configure the Drupal Solr
modules on the portal.
You must also enable SmartDocs on the portal. For more information
on SmartDocs, see Using SmartDocs to document APIs.
9. Next steps
The following table lists some of the most common tasks that you
perform after installation, and includes links to the Apigee
documentation where you can find more information:
Task Description
Add blog and The portal has built-in support for blogs and
forum posts threaded forums. Define the permissions required
to view, add, edit, and delete blog and forum posts.
● https://codeburst.io/security-for-drupal-
and-related-server-software-3c979b041595
● https://www.drupal.org/node/1992030
● apigee-drupal
● apigee-drupal-contrib
● apigee-drupal-devportal
● apigee-drush
● apigee-php
1. Ensure that the desired port number is open on the Edge node.
2. Open /opt/apigee/customer/application/drupal-
devportal.properties in an editor. If the file and directory
does not exist, create it.
3. Set the following property in drupal-devportal.properties:
4. conf_devportal_nginx_listen_port=PORT
5. Where PORT is the new port number.
6. Save the file.
7. Restart the portal:
8. /opt/apigee/apigee-service/bin/apigee-service apigee-drupal-
devportal restart
Common Drush commands
Drush is the Drupal command line interface. You can use it to
perform many tasks as a Drupal administrator. Drush is installed for
you when you install Apigee Developer Services portal (or simply, the
portal).
You must run Drush commands from the root directory of the portal
site. By default, the portal is installed at:
● /opt/apigee/apigee-drupal/wwwroot (NGINX)
Command Description
However, if necessary, you can configure TLS on the web server that
hosts the portal.
See Using TLS on the portal for an overview of using TLS on the
portal.
You can also change the port number as described in Set the HTTP
port used by the portal.
To configure TLS:
1. Obtain your TLS key and certificate. For this example, the cert
is in a file named server.crt and the key is in server.key.
2. Upload your cert and key to the portal server to
/opt/apigee/customer/nginx/ssl.
If the directory does not exist, create it and change the owner
to the "apigee" user:
mkdir /opt/apigee/customer/nginx/ssl
chown apigee:apigee /opt/apigee/customer/nginx/ssl
4. Change the owner of the cert and key to the "apigee" user:
Before you can back up the portal, you must know the name of the
portal's database.
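One way to list the databases is with the psql client. The connection parameters below are assumptions; adjust the host, port, and user for your installation:

```
psql -h localhost -p 5432 -U apigee -l
```

The output resembles the listing below; the portal's database appears in the Name column.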
   Name   | Owner  | Encoding |   Collate   |    Ctype    |  Access privileges
----------+--------+----------+-------------+-------------+----------------------
 apigee   | apigee | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/apigee           +
          |        |          |             |             | apigee=CTc/apigee    +
          |        |          |             |             | postgres=CTc/apigee  +
          |        |          |             |             | apigee=CTc/apigee
After you have backed up the portal, you can restore from your
backup using the pg_restore command.
To restore from the backup and create a new database, use the
following command:
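A sketch of such a restore (the dump filename and connection options are assumptions; match them to how you created the backup):

```
pg_restore -C -d postgres -h localhost -U apigee /tmp/portal_backup.dmp
```

The -C flag creates the database named in the dump before restoring into it.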
You can also restore the backup files to the Drupal web root directory
and the private files.
Upgrade the portal
This procedure describes how to upgrade an existing Apigee
Developer Services portal (or simply, the portal) on-premise
installation.
NOTE: All new installations of the portal from version 4.17.01 onward use Postgres as the database and NGINX as the web server. However, previous releases of the portal use MySQL/MariaDB as the database and Apache as the web server. That means you might be using one of the following configurations:
Before you start the update process, you must determine your current configuration and then perform the correct update procedure.
WARNING: You cannot update a portal using MySQL/MariaDB and Apache. You must instead convert the installation to use Postgres as the database and NGINX as the web server. As part of this conversion, you also update your portal to 4.51.00. See Convert a tar-based portal to an RPM-based portal for more.
NOTE: This procedure performs a complete upgrade of Drupal and of the Drupal modules that ship with the portal. You might want to upgrade just Drupal itself, possibly because a new version of Drupal is available. A new version can mean a Drupal feature release, patch, security update, or other type of Drupal update. If you only want to upgrade your version of Drupal, see Upgrade Drupal.
If you are unsure about your current installation type, use the
following command to determine it:
● ls /opt
● If you are using NGINX/Postgres, you will see the following
directories: /opt/apigee and /opt/nginx.
If you are using Apache/MySQL or Apache/MariaDB, then
these directories should not be present.
● /opt/apigee/apigee-service/bin/apigee-all status
● If you are using NGINX/Postgres, you will see the following
output:
+ apigee-service
apigee-drupal-devportal status
apigee-service: apigee-lb: OK
*:80
● 192.168.56.102 (/etc/httpd/conf/vhosts/devportal.conf:1)
If you did not install the portal in the default directory, modify the
paths in the procedure below to use your installation directory.
http://yourportal.com/buildInfo
NOTE: In 4.17.05 this link was removed. To determine the version, open the Reports > Status report; the version appears in the table in the row named Install profile.
NOTE: If nothing is displayed, or a version other than the expected one is displayed, then you cannot use this upgrade process. For information on upgrading from any other portal version, contact Apigee Support.
NOTE: If for some reason you removed the buildInfo file, then you can determine the version from the profiles/apigee/apigee.info file. The default install location is /var/www/html/profiles/apigee/apigee.info, though this might have changed at install time if you installed the portal in a directory other than the default. The file contains a line in the form below containing the version:
version = "OPDK-4.17.05"
ls -l /opt/apigee
/usr/bin/curl http://uName:pWord@remoteRepo:3939/bootstrap_4.51.00.sh -o /tmp/bootstrap_4.51.00.sh
Where uName and pWord are the username and password you set above for the repo, and remoteRepo is the IP address or DNS name of the repo node.
v. On the remote node, install the Edge apigee-
service utility and dependencies:
/opt/apigee/apigee-drupal/wwwroot
For example:
NOTE: This applies to Private Cloud installations of Drupal only. For customers using a cloud deployment of the
automatically applies all Drupal security updates.
Upgrade Drupal
NOTE: This upgrade is for your Private Cloud installation of Drupal only. For customers using a cloud deploymen
will automatically apply all Drupal security updates.
Before you start the Drupal update, you can determine your current
Drupal version by running the following command from the Drupal
installation folder. By default, Drupal is installed in
/opt/apigee/apigee-drupal/wwwroot:
cd /opt/apigee/apigee-drupal/wwwroot
drush status | grep 'Drupal version'
During the upgrade of Drupal core, the instructions tell you to remove all core
files and directories. However, do not delete the /profiles/apigee
directory because this is where the Apigee Drupal Devportal distribution is
located.
TIP: To be notified about Drupal security releases, subscribe to Drupal security announcements
If you install a new Private Cloud version that includes a more recent
version of the module previously stored in
/sites/all/modules/contrib, remove the module from
/sites/all/modules/contrib. For more information, see
Moving modules and themes (Drupal.org).
The high-level steps that you use to migrate from a tar-based portal
to an RPM-based portal are:
/opt/apigee/apigee-drupal
to:
/opt/apigee/apigee-drupal/wwwroot
cd /var/www/html
$databases = array (
'default' =>
array (
'default' =>
array (
),
),
'custom' =>
array (
'default' =>
array (
);
8. Where host and port specify the IP address and port of the Postgres server. Postgres uses port 5432 for connections.
9. On the tar-based portal, install the Postgres driver:
a. Use Yum to install the driver:
yum install php-pdo_pgsql
b. Edit /etc/php.ini to add the following line anywhere in the file:
extension=pgsql.so
c. Restart Apache:
service httpd restart
10. On the tar-based portal, migrate the portal database to the
RPM-based portal:
a. Log in to the portal as an admin.
b. Select Structure->Migrator in the Drupal menu.
c. Choose your origin database on the tar-based portal (default) and the destination database (custom), based on the settings.php file shown above.
d. Click Migrate. The tar-based database is migrated to
the RPM-based database.
11. Copy the sites directory from the tar-based server to the
RPM-based server. The paths shown in the following steps are
based on default paths. Modify them as necessary for your
installation.
a. On the tar-based portal, bundle the /var/www/html/sites directory:
cd /var/www/html/sites
i. Backup the existing settings.php file:
sudo cp /opt/apigee/apigee-drupal/wwwroot/sites/default/settings.php /opt/apigee/apigee-drupal/wwwroot/sites/default/settings.bak.php
ii. Backup the existing files directory:
sudo mv /opt/apigee/apigee-drupal/wwwroot/sites/default/files /opt/apigee/apigee-drupal/wwwroot/sites/default/files_old
iii. Backup the existing sites directory:
tar -cvzf /tmp/sites_old.tar.gz /opt/apigee/apigee-drupal/wwwroot/sites
iv. Unzip and untar the sites directory from the tar-based server:
gunzip /opt/apigee/apigee-drupal/wwwroot/sites/sites.tar.gz
tar -xvf /opt/apigee/apigee-drupal/wwwroot/sites/sites.tar
v. Make sure that the copied files have proper ownership:
chown -R apigee:apigee /opt/apigee/apigee-drupal/wwwroot/sites/
vi. Restore the settings.php file:
sudo cp /opt/apigee/apigee-drupal/wwwroot/sites/default/settings.bak.php /opt/apigee/apigee-drupal/wwwroot/sites/default/settings.php
vii. Move private files to the new location:
cp -r /opt/apigee/apigee-drupal/wwwroot/sites/default/files/private/* /opt/apigee/data/apigee-drupal-devportal/private
rm -rf /opt/apigee/apigee-drupal/wwwroot/sites/default/files/private
cd /var/www/html
$databases = array (
'default' =>
array (
'default' =>
array (
);
19. Where database specifies the new database you created, username and password are as defined for the custom database on the tar-based portal, and prefix is empty.
20. The RPM-based version of the portal contains fewer Drupal modules than the tar-based version. After you migrate to the RPM-based portal, you must check for any missing modules and install them as necessary.
a. Install the Drupal missing_module module, used to detect missing modules:
cd /opt/apigee/apigee-drupal/wwwroot
drush dl missing_module
drush en missing_module
b. Log in to the RPM-based portal as an admin.
c. Select Reports > Status reports in the Drupal menu and check for any missing modules.
d. Use that report to install any missing modules, or use the following commands:
cd /opt/apigee/apigee-drupal/wwwroot
Note that the RPM-based version of the portal uses port 8079
by default, while the tar-based version uses port 80. Make
sure you use the correct port number in your DNS entry. See
Set the HTTP port used by the portal for information on using
a different port.
If the contents of the settings.php file for a new RPM-based portal differ from those for a tar-based portal, update the file /opt/apigee/apigee-drupal-devportal-opdk_version/source/conf/settings.php with the new contents to prevent it from being restored to the old settings.
APPENDICES
How-to guides
The how-to guides provide detailed instructions to help Apigee Edge
Public Cloud and Private Cloud users accomplish the following
tasks:
HTTP properties
JVM parameters
Log rotation
SNI
TLS certificate
Virtual host
Operations
Note: This document is applicable for Edge Private Cloud users only.
Apigee Edge expects the backend server to send 405 Method Not
Allowed responses with the list of allowed methods in the Allow
header, as per specification RFC 7231, section 6.5.5: 405 Method Not
Allowed.
Allow: HTTP_METHODS
For example, if your backend server allows GET, POST and HEAD
methods, you need to ensure that the Allow header contains them
as follows:
If the backend server does not send the Allow header with the HTTP
status code 405 Method Not Allowed, then Apigee returns
HTTP status code 502 Bad Gateway with error code
protocol.http.Response405WithoutAllowHeader to the
client application. The recommended solution to address this error is
to fix the backend server to adhere to the specification RFC 7231,
section 6.5.5: 405 Method Not Allowed or use the fault handling to
respond back with HTTP status code 405 Method Not Allowed
including the Allow header as explained in the troubleshooting
playbook 502 Bad Gateway - Response 405 without Allow header.
In such cases, you can temporarily set the HTTP.ignore.allow_header.for.405 property at the Message Processor level. Setting this property to true prevents
Apigee from returning the 502 Bad Gateway response to client
applications even if the backend server sends HTTP status code 405
Method Not Allowed without the Allow header.
Once you are in a position to fix your backend server to send HTTP
status code 405 Method Not Allowed with the Allow header,
you can revert the property
HTTP.ignore.allow_header.for.405 to its default value
false.
10. Restart the Message Processor as shown below:
/opt/apigee/apigee-service/bin/apigee-service edge-message-
processor restart
12. If you have more than one Message Processor, repeat the
above steps on all the Message Processors.
3. If the property is successfully updated on the Message
Processor, then the above command should show the value of
the property HTTP.ignore.allow_header.for.405 as
true in the http.properties file as shown below:
4. /opt/apigee/edge-message-processor/conf/
http.properties:HTTP.ignore.allow_header.for.405=true
5. If you still see the value for the property
HTTP.ignore.allow_header.for.405 as false then
verify that you have followed all the steps outlined in
Configuring ignore allow header for 405 property to true in
Message Processors correctly. If you have missed any step,
repeat all the steps again correctly.
6. If you are still not able to modify the property
HTTP.ignore.allow_header.for.405, then contact
Apigee Edge Support.
3. If the property is set to true on the Message Processor, then
the above command should show the value of the property
HTTP.ignore.allow_header.for.405 as true in the
http.properties file as shown below:
4. /opt/apigee/edge-message-processor/conf/
http.properties:HTTP.ignore.allow_header.for.405=true
5. If the above command shows that the property
HTTP.ignore.allow_header.for.405 is set to false
(default value), then you don’t have to do anything else. That
is, skip the following steps.
6. If the property HTTP.ignore.allow_header.for.405 is
set to true, then perform the following steps to revert to its
default value false.
7. On the Message Processor machine, open the following file in
an editor:
8. /opt/apigee/customer/application/message-
processor.properties
9. For example, to open the file using vi, enter the following:
vi /opt/apigee/customer/application/message-processor.properties
11. Remove the following line from the properties file:
12. conf_http_HTTP.ignore.allow_header.for.405=true
13. Save your changes.
14. Ensure the properties file is owned by the apigee user as
shown below:
15. chown apigee:apigee /opt/apigee/customer/application/message-processor.properties
16. Restart the Message Processor as shown below:
/opt/apigee/apigee-service/bin/apigee-service edge-message-
processor restart
18. If you have more than one Message Processor, repeat the
above steps on all the Message Processors.
3. If the property is successfully updated on the Message
Processor, then the above command should show the value of
the property HTTP.ignore.allow_header.for.405 as
false in the http.properties file as shown below:
4. /opt/apigee/edge-message-processor/conf/
http.properties:HTTP.ignore.allow_header.for.405=false
5. If you still see the value for the property
HTTP.ignore.allow_header.for.405 as true, then
verify that you have followed all the steps outlined in
Configuring ignore allow header for 405 property to false on
Message Processors correctly. If you have missed any step,
repeat all the steps again correctly.
6. If you are still not able to modify the property
HTTP.ignore.allow_header.for.405, then contact
Apigee Edge Support.
Configuring Message
Processors to allow duplicate
headers
Note: This document is applicable for Edge Private Cloud users only.
As per the HTTP Specification RFC 7230, section 3.2.2: Field Order,
Apigee Edge expects that the HTTP request from the client or HTTP
response from the backend server do not contain the same header
being passed more than once with the same or different values,
unless the specific header has an exception and is allowed to have
duplicates.
Configuration overwritten
● HTTPHeader.HEADER_NAME=multiValued, allowDuplicates
This configuration does not change the default behavior. That is, the specific header is allowed to have duplicates and multiple values.
● HTTPHeader.HEADER_NAME=
This configuration changes the default behavior. That is, the specific header is not allowed to have duplicates and multiple values.
Note: There are some headers which are already overwritten using one of the
above methods. Such headers are referred to as headers with pre-existing
config hereafter in this document.
The following table describes the behavior of Apigee Edge when headers are sent as duplicates and with multiple values, depending on how the HTTPHeader properties are configured on the Message Processors, using an example HTTPHeader of test-header.
vi /opt/apigee/customer/application/message-processor.properties
14. Add a line in the following format:
15. conf_http_HTTPHeader.Expires=allowDuplicates, multiValued
16. Save your changes.
17. Ensure that the properties file is owned by the apigee user. If it is not, execute the following command:
18. chown apigee:apigee /opt/apigee/customer/application/message-processor.properties
19. Restart the Message Processor:
/opt/apigee/apigee-service/bin/apigee-service edge-message-
processor restart
21. To restart without traffic impact, see Rolling restart of
Message Processors without traffic impact.
22. If you have more than one Message Processor, repeat the
above steps on all the Message Processors.
vi /opt/apigee/customer/application/message-processor.properties
17. Add a line in the following format to the properties file:
20. Scenario #1: Header with pre-existing config:
21. conf_http_HTTPHeader.Expires=
22. Save your changes.
23. Ensure that the properties file is owned by the apigee user. If it is not, execute the following command:
24. chown apigee:apigee /opt/apigee/customer/application/message-processor.properties
25. Restart the Message Processor:
/opt/apigee/apigee-service/bin/apigee-service edge-message-
processor restart
27. To restart without traffic impact, see Rolling restart of
Message Processors without traffic impact.
28. If you have more than one Message Processor, repeat the
above steps on all the Message Processors.
Required permissions: To perform this task, you must have the following
permissions:
1. Open the
/opt/apigee/customer/application/message-
processor.properties file on the Message Processor
machine in an editor. If the file does not already exist, then
create it. For example:
2. vi /opt/apigee/customer/application/message-
processor.properties
3. Add the following lines to this file:
4. bin_setenv_min_mem=minimum_heap_in_megabytes
5. bin_setenv_max_mem=maximum_heap_in_megabytes
6. For example, if you want to change minimum and maximum
heap on the Message Processor to 1 GB and 2 GB
respectively, then add the following lines to this file:
7. bin_setenv_min_mem=1024m
8. bin_setenv_max_mem=2048m
9. Save your changes.
10. Ensure this properties file is owned by the apigee user. For
example:
11. chown apigee:apigee
/opt/apigee/customer/application/message-
processor.properties
12. Restart the Message Processor using the following command:
13. /opt/apigee/apigee-service/bin/apigee-service edge-
message-processor restart
14. If you have more than one Message Processor, repeat these
steps on all the Message Processors.
What’s next?
● Configuring heap memory size on the Qpid servers
● Enabling G1GC on the Message Processors
● Enabling String Deduplication on the Message Processors
Required permissions: To perform this task, you must have the following
permissions:
● Access to the specific Qpid servers where you would like to configure
the heap memory.
● Access to create and edit the Qpid server component properties file in
/opt/apigee/customer/application folder.
1. Open the
/opt/apigee/customer/application/qpid-server.p
roperties file on the Qpid server machine in an editor. If the
file does not already exist, then create it. For example:
2. vi /opt/apigee/customer/application/qpid-server.properties
3. Add the following lines to this file:
4. bin_setenv_min_mem=minimum_heap_in_megabytes
5. bin_setenv_max_mem=maximum_heap_in_megabytes
6. For example, if you want to change minimum and maximum
heap on the Qpid server to 1 GB and 2 GB respectively, then
add the following lines to this file:
7. bin_setenv_min_mem=1024m
8. bin_setenv_max_mem=2048m
9. Save your changes.
10. Ensure this properties file is owned by the apigee user. For
example:
11. chown apigee:apigee
/opt/apigee/customer/application/qpid-server.properties
12. Restart the Qpid server using the following command:
13. /opt/apigee/apigee-service/bin/apigee-service edge-qpid-
server restart
14. If you have more than one Qpid server, repeat these steps on
all the Qpid servers.
What’s next?
Configuring heap memory size on Message Processors
Required permissions: To perform this task, you must have the following
permissions:
The following steps describe how to set the useG1GC property:
1. Open the /opt/apigee/customer/application/message-processor.properties file on the Message Processor machine in an editor. If the file does not already exist, then create it. For example:
2. vi /opt/apigee/customer/application/message-processor.properties
3. Add the following line to this file:
4. conf_system_useG1GC=true
5. Save your changes.
6. Ensure this properties file is owned by the apigee user. For example:
7. chown apigee:apigee /opt/apigee/customer/application/message-processor.properties
8. Restart the Message Processor using the following command:
9. /opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart
10. If you have more than one Message Processor, repeat these steps on all the Message Processors.
This section explains how to verify that the G1GC configuration has
been successfully modified on the Message Processors.
What’s next?
Required permissions: To perform this task, you must have the following
permissions:
1. Open the /opt/apigee/customer/application/message-processor.properties file on the Message Processor machine in an editor. If the file does not already exist, then create it. For example:
2. vi /opt/apigee/customer/application/message-processor.properties
3. Add the following line to this file:
4. conf_system_useStringDeduplication=true
5. Note: If you would like to have both G1GC and String Deduplication enabled together, then add the following lines to /opt/apigee/customer/application/message-processor.properties:
6. conf_system_useG1GC=true
7. conf_system_useStringDeduplication=true
8. Save your changes.
9. Ensure this properties file is owned by the apigee user. For example:
10. chown apigee:apigee /opt/apigee/customer/application/message-processor.properties
11. Restart the Message Processor using the following command:
12. /opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart
13. If you have more than one Message Processor, repeat these steps on all the Message Processors.
What’s next?
Enable G1GC on the Message Processors
In Edge for Private Cloud, some of the main log files on each apigee
component are configured with a default rotation mechanism. For
example, on the Message Processor component, the following files
are configured with a default rotation mechanism using logback:
● /opt/apigee/var/log/edge-message-processor/logs/system.log
● /opt/apigee/var/log/edge-message-processor/logs/events.log
● /opt/apigee/var/log/edge-message-processor/logs/startupruntimeerrors.log
● /opt/apigee/var/log/edge-message-processor/logs/configurations.log
● /opt/apigee/var/log/edge-message-processor/logs/transactions.log
However, certain log files in Apigee components are not configured with default rotation. On the Message Processor, the /opt/apigee/var/log/edge-message-processor/edge-message-processor.log file is one of those files; other edge-* components generate a similar file. These files' rotation is not handled by the logback library; instead, you can rotate them using logrotate, as described below.
Required permissions: To perform this task, you must have the following
permissions:
The following steps describe how to enable log rotation for the edge-message-processor.log file:
1. Open the /opt/apigee/edge-message-processor/logrotate/logrotate.conf file on the Message Processor machine in an editor. If the file does not exist, create it. For example:
vi /opt/apigee/edge-message-processor/logrotate/logrotate.conf
2. Add a snippet to the file similar to the one shown below:
/opt/apigee/var/log/edge-message-processor/edge-message-processor.log {
  missingok
  copytruncate
  rotate 5
  size 10M
  compress
  delaycompress
  notifempty
  nocreate
  sharedscripts
}
3. Note: The sample configuration shown above rotates the file after 10 MB, and log files are rotated five times before being removed. You can modify the above configuration depending on your log rotation requirements.
4. Save your changes.
5. Open the apigee user's crontab using the following command:
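The crontab entry itself is not shown above. As a sketch (the logrotate binary path and the state-file location are assumptions; adjust them for your distribution), an hourly entry in the apigee user's crontab might look like:

```
0 * * * * /usr/sbin/logrotate -s /opt/apigee/var/log/edge-message-processor/logrotate.status /opt/apigee/edge-message-processor/logrotate/logrotate.conf
```

Running logrotate hourly is safe here because the size 10M condition means the log is only rotated once it exceeds 10 MB.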
In Edge for Private Cloud, some of the main log files on each of the
Apigee components are configured with a default rotation
mechanism.
● /opt/apigee/var/log/edge-router/logs/system.log
● /opt/apigee/var/log/edge-router/logs/events.log
● /opt/apigee/var/log/edge-router/logs/startupruntimeerrors.log
● /opt/apigee/var/log/edge-router/logs/configurations.log
● /opt/apigee/var/log/edge-router/logs/transactions.log
However, there are certain log files in Apigee components that are not configured with default rotation. On the Router component, the edge-router.log file is one of those files for which log rotation is not configured by default.
Required permissions:
● Access to the specific Routers, where you would like to rotate logs.
● Access to create the logrotate conf files in the Edge Router
component /opt/apigee/edge-router/logrotate/ folder.
● Access to create a cron job using crontab.
The following steps describe how to enable log rotation for the edge-router.log file:
1. Open the /opt/apigee/edge-router/logrotate/logrotate.conf file on the Router machine in an editor. If the file does not exist, create it. For example:
vi /opt/apigee/edge-router/logrotate/logrotate.conf
2. Add a snippet to the file similar to the one shown below:
/opt/apigee/var/log/edge-router/edge-router.log {
  missingok
  copytruncate
  rotate 5
  size 10M
  compress
  delaycompress
  notifempty
  nocreate
  sharedscripts
}
3. Note: The sample configuration shown above rotates the file after 10 MB, and log files are rotated five times before being removed. You can modify the above configuration depending on your log rotation requirements.
4. Save your changes.
5. Open the apigee user's crontab using the following command:
4. CONNECTED(00000003)
9362:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:/BuildRoot/Library/Caches/com.apple.xbs/Sources/OpenSSL098/OpenSSL098-64.50.6/src/ssl/s23_clnt.c:593
5. Execute the openssl command and try to connect to the relevant server hostname (Edge Router or backend server) by passing the server name as shown below:
6. openssl s_client -connect hostname:port -servername hostname
7. If you get a handshake failure in step 1 or get different
certificates in step 1 and step 2, then it indicates that the
specified Server is SNI enabled.
8. If you want to verify this for more than one backend server,
then you need to repeat the above steps for each backend
server.
If you find that you have one or more backend servers that are SNI
enabled, then you need to enable SNI on the Message Processor
component as explained below. Otherwise, the API requests going
through Apigee Edge will fail with TLS Handshake Failures.
Note: When SNI is enabled on a client (Message Processor), the client passes
the hostname of the backend server as part of the initial TLS handshake for all
the API proxies.
The following steps describe how to locate the token for the
jsse.enableSNIExtension property:
Note: When SNI is disabled on a client (Message Processor), the client does
not pass the hostname of the backend server as part of the initial TLS
handshake for all the API Proxies.
Required permissions: To perform this task, you must have the following
permissions:
Example scenarios
The scenarios in this section can help you understand how to
correctly set the I/O timeout values.
This document explains how to configure the I/O timeout for the Apigee Edge
Message Processors.
The I/O timeout on the Message Processor represents the time for which the
Message Processor waits either to receive a response from the backend server or
for the socket to be ready to write a request to the backend server, before it times
out.
The Message Processor I/O timeout default value is 55 seconds. This timeout
period is applicable to the backend servers configured in the target endpoint
configuration and in the ServiceCallout policy of your API proxy.
The I/O timeout for Message Processors can be increased or decreased from the
default value of 55 seconds based on your needs. It can be configured in the
following places:
● In the API proxy
● Target endpoint
● ServiceCallout policy
● On the Message Processor
The following properties control the I/O timeout on the Message Processors:

Property name: io.timeout.millis
Location: API proxy (Target endpoint, ServiceCallout policy)
Description: This is the maximum time for which the Message Processor does the following:
● Waits to receive a response from the backend server, after establishing the connection and sending the request to the backend server, OR
● Waits for the socket to be ready for the Message Processor to send the request to the backend server.

Property name: HTTPTransport.io.timeout.millis
Location: Message Processor
Description: This is the maximum time for which the Message Processor does the following:
● Waits to receive a response from the backend server, after establishing the connection and sending the request to the backend server, OR
● Waits for the socket to be ready for the Message Processor to send the request to the backend server.
● If the timeout happens on the Message Processor while reading the HTTP response, then
504 Gateway Timeout is returned.
● If the timeout happens on the Message Processor while writing the HTTP request, then
408 Request Timeout is returned.
● If you aren’t familiar with I/O timeout, see the io.timeout.millis property
description in TargetEndpoint Transport Property Specification.
● If you aren’t familiar with configuring properties for Edge for Private Cloud,
read How to configure Edge.
● Ensure that you follow the recommendations in Best practices for configuring
I/O timeout.
Note:
● If you want to increase the I/O timeout value above the default value for a specific API
Proxy, then you need to increase it in the API proxy (as explained in Configuring I/O
timeout in API proxy). In addition, you need to increase the I/O timeout values on the
Router, client and backend server appropriately as documented in Best practices for
configuring I/O timeout.
● Increasing the I/O timeout value on the Message Processor only, or configuring
inappropriate I/O timeout values on Edge components can lead to 504 Gateway Timeout
errors.
If you want to decrease the I/O timeout value for a specific API Proxy, then you need to decrease it in the API Proxy (as explained in Configuring I/O timeout in API proxy). If the timeouts on the other components (Routers, client, and backend) are configured appropriately as per the recommendations in Best practices for configuring I/O timeout, then you don't have to make any changes in the other components. Otherwise, make the changes appropriately.
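As a worked illustration of these recommendations (the values are illustrative, not prescriptive): if a proxy's I/O timeout is raised to 120 seconds, each component closer to the client should time out slightly later than the component behind it:

```
io.timeout.millis = 120000    # Message Processor (API proxy target endpoint): 120s
proxy_read_timeout = 123      # Router / virtual host: slightly above 120s
client timeout = 126s         # the caller's own HTTP client: slightly above the Router's
```

With this ordering, a slow backend causes the Message Processor to time out first and return a clean 504 Gateway Timeout, instead of the Router or the client dropping the connection while the Message Processor is still waiting.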
● Target endpoint
● ServiceCallout policy
This section explains how to configure I/O timeout in the target endpoint of your API
proxy. The I/O timeout can be configured through the property
io.timeout.millis, which represents the I/O timeout value in milliseconds.
Note: Configuring the I/O timeout in the target endpoint of your API proxy overrides the default
value (55 seconds) with the new value and affects only the specific API proxy where it is
configured. The rest of the API Proxies continue to use the default I/O timeout value of 55
seconds.
Required permissions: To perform this task, you must have edit access permissions to the specific API proxy in which you would like to configure the I/O timeout.
1. In the Edge UI, select the specific API proxy in which you would like to
configure the new I/O timeout value.
2. Select the specific target endpoint that you want to modify.
3. Add the property io.timeout.millis with an appropriate value under the
<HTTPTargetConnection> element in the TargetEndpoint
configuration.
4. For example, to change the I/O Timeout to 120 seconds, add the following
block of code:
<Properties>
  <Property name="io.timeout.millis">120000</Property>
</Properties>
This section explains how to configure the I/O timeout in the ServiceCallout policy of
your API proxy. The I/O timeout can be configured through either the <Timeout>
element or the io.timeout.millis property. Both the <Timeout> element and
the io.timeout.millis property represent the I/O timeout values in milliseconds.
Note: Configuring the I/O timeout in the ServiceCallout policy of the API proxy overrides the
default value (55 seconds) with the new value and affects only the specific ServiceCallout
policy where it is configured. The rest of the ServiceCallout policies and API Proxies continue to
use the default I/O timeout value of 55 seconds.
Required permissions: To perform this task,
you must have edit access permissions to the specific API proxy in which you would like to
configure the I/O timeout for the ServiceCallout policy.
You can configure the I/O timeout in the ServiceCallout policy using one of the
following methods:
● <Timeout> element.
● io.timeout.millis property.
TIMEOUT ELEMENT
IO.TIMEOUT.MILLIS PROPERTY
To configure the I/O timeout in the ServiceCallout policy using the <Timeout>
element, do the following:
1. In the Edge UI, select the specific API proxy in which you would like to
configure the new I/O timeout value for the ServiceCallout policy.
2. Select the specific ServiceCallout policy that you want to modify.
3. Add the element <Timeout> with an appropriate value under the
<ServiceCallout> configuration.
For example, to change the I/O timeout to 120 seconds, add the following line
of code:
4. <Timeout>120000</Timeout>
5. Since the <Timeout> element is in milliseconds, the value for 120 seconds is
120000.
The following example shows how to configure the I/O timeout in the
ServiceCallout policy using the <Timeout> element:
Example ServiceCallout policy configuration using URL for backend server
<ServiceCallout name="Service-Callout-1">
  <DisplayName>ServiceCallout-1</DisplayName>
  <Timeout>120000</Timeout>
  <HTTPTargetConnection>
    <Properties/>
    <URL>https://mocktarget.apigee.net/json</URL>
  </HTTPTargetConnection>
</ServiceCallout>
This section explains how to configure the I/O timeout on the Message Processors.
The I/O timeout can be configured through the property
HTTPTransport.io.timeout.millis, which represents the I/O timeout value in
milliseconds on the Message Processor component, using the token as per the
syntax described in How to configure Edge.
Note: If a change is made at the Message Processor level, it is important to note that it affects
the I/O timeout of all the API proxies associated with the organizations and environments served
by the specific Message Processors.
Required permissions: To perform this task, you must have
the following permissions:
1. Access to the specific Message Processor, where you would like to configure the I/O
timeout.
2. Access to create or edit the Message Processor component properties file in the
/opt/apigee/customer/application folder.
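The steps mirror the other Message Processor property changes in this document: create or edit the properties file, add the token, fix ownership, and restart. A sketch of the properties entry for a 120-second timeout (the conf_http_ token prefix is our assumption, by analogy with the conf_http_HTTPClient tokens shown later for the connection timeout; verify the token name on your installation):

```
# /opt/apigee/customer/application/message-processor.properties
conf_http_HTTPTransport.io.timeout.millis=120000
```

After saving, ensure the file is owned by apigee:apigee and restart each Message Processor with apigee-service, as in the other procedures.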
This section explains how to verify that the I/O timeout has been successfully
modified on the Message Processors.
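Mirroring the grep-based verification shown later for the connection timeout, a sketch of the check (the generated file path assumes a default installation layout):

```
grep "io.timeout.millis" /opt/apigee/edge-message-processor/conf/http.properties
```

If the change took effect, the output should show HTTPTransport.io.timeout.millis set to your new value.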
What’s next?
Learn about Configuring I/O timeout on Routers
This document explains how to configure the I/O timeout on Apigee Edge’s Routers.
The I/O timeout on the Router represents the time for which the Router waits to
receive a response from the Message Processor, after establishing the connection
and sending the request to the Message Processor. The default value of the I/O
timeout on the Router is 57 seconds.
The I/O timeout for Routers can be increased or decreased from the default value of
57 seconds based on your needs. It can be configured in the following ways:
● In a virtual host
● On the Router
Property name: proxy_read_timeout
Location: Virtual host
Description: Specifies the maximum time for which the Router waits to receive a response from the Message Processor, after establishing the connection and sending the request to the Message Processor.

Property name: conf_load_balancing_load.balancing.driver.proxy.read.timeout
Location: Router
Description: Specifies the maximum time for which the Router waits to receive a response from the Message Processor, after establishing the connection and sending the request to the Message Processor. If there is no response from the Message Processor within this timeout period, then the Router times out. This property is used for all the virtual hosts on this Router. The default value of this property is 57 seconds. You can either modify this property as explained in Configuring I/O timeout on Routers below, or you can overwrite this value by setting the proxy_read_timeout property at the virtual host level. You can set the time interval for this property using units other than seconds with the following notation:
ms: milliseconds
s: seconds (default)
m: minutes
h: hours
d: days
w: weeks
M: months (length of 30 days)
y: years (length of 365 days)

Property name: conf_load_balancing_load.balancing.driver.nginx.upstream_next_timeout
Location: Router
Description: Specifies the total time the Router waits to receive a response from all the Message Processors, after establishing the connection and sending the request to each Message Processor. This is applicable when your Edge installation has multiple Message Processors and retry is enabled upon occurrence of errors. It has the value of one of the following:
● The current value of conf_load_balancing_load.balancing.driver.proxy.read.timeout
● The default value of 57 seconds
As with the conf_load_balancing_load.balancing.driver.proxy.read.timeout property, you can specify time intervals other than the default (seconds).
● If you aren’t familiar with virtual host Properties, read Virtual host property
reference.
● If you aren’t familiar with configuring properties for Edge on Private Cloud,
read How to configure Edge.
● Ensure that you follow the Best practices for configuring I/O timeout
recommendations.
Note: Configuring inappropriate I/O timeout values on Edge components can lead to 504
Gateway Timeout Errors.
Note: Configuring the I/O timeout in a virtual host overrides the default value (57 seconds) with
the new value and affects only the specific API Proxies where the virtual host is used. The rest of
the API Proxies continue to use the default I/O timeout value of 57 seconds.
Required permissions: To perform this task, you must have the following permissions:
You can configure the virtual host using one of the following methods:
● Edge UI
● Edge API
EDGE UI
EDGE API
Note: The virtual host cannot be edited using classic UI. If you don't have access to the new UI,
use the Edge API. For more information on the transition to the new UI, see Transition from
Classic Edge UI.
To configure the virtual host using the Edge UI, do the following:
Note: Use this procedure to verify that the I/O timeout has been successfully modified on the
virtual host whether you used the Edge UI or the Edge API to modify the I/O timeout value.
1. Execute the Get virtual host API to get the virtualhost configuration as
shown below:
Public Cloud user:
curl -v -X GET https://api.enterprise.apigee.com/v1/organizations/{organization-name}/environments/{environment-name}/virtualhosts/{virtualhost-name} -u <username>
Private Cloud user:
curl -v -X GET http://<management-server-host>:<port>/v1/organizations/{organization-name}/environments/{environment-name}/virtualhosts/{virtualhost-name} -u <username>
Where:
● {organization-name} is the name of the organization
● {environment-name} is the name of the environment
● {virtualhost-name} is the name of the virtual host
2. Verify that the property proxy_read_timeout has been set to the new value.
Sample updated virtual host configuration:
{
  "hostAliases": [
    "api.myCompany.com"
  ],
  "interfaces": [],
  "listenOptions": [],
  "name": "secure",
  "port": "443",
  "retryOptions": [],
  "properties": {
    "property": [
      {
        "name": "proxy_read_timeout",
        "value": "120"
      }
    ]
  },
  "sSLInfo": {
    "ciphers": [],
    "clientAuthEnabled": "false",
    "enabled": "true",
    "ignoreValidationErrors": false,
    "keyAlias": "myCompanyKeyAlias",
    "keyStore": "ref://myCompanyKeystoreref",
    "protocols": []
  },
  "useBuiltInFreeTrialCert": false
}
3. In the example above, note that proxy_read_timeout has been set with the new value of 120 seconds.
4. If you still see the old value for proxy_read_timeout, then verify that you have followed all the steps outlined in Configuring I/O timeout in virtual host correctly. If you have missed any step, repeat all the steps again correctly.
5. If you are still not able to modify the I/O timeout, then contact Apigee Edge Support.
This section explains how to configure I/O timeout on the Routers. The I/O timeout
can be configured through the Router property
conf_load_balancing_load.balancing.driver.proxy.read.timeout,
which represents the I/O timeout value in seconds.
Note: Since this change is made at the Router level, it is important to note that it affects the I/O
timeout of all the API proxies associated with the organizations served by the specific
Routers.
Required permissions: To perform this task, you must have the following permissions:
● Access to the specific Router machine, where you would like to configure the I/O timeout.
● Access to create/edit the Router component properties file in the
/opt/apigee/customer/application folder.
1. On the Router machine, open the following file in an editor. If it does not
already exist, then create it.
2. /opt/apigee/customer/application/router.properties
3. For example, to open the file with vi, enter the following command:
4. vi /opt/apigee/customer/application/router.properties
5. Add a line in the following format to the properties file, substituting a value
for time_in_seconds:
6. conf_load_balancing_load.balancing.driver.proxy.read.timeout=time_in_seconds
7. For example, to change the I/O timeout on the Router to 120 seconds, add the
following line:
8. conf_load_balancing_load.balancing.driver.proxy.read.timeout=120
9. You can also modify the I/O timeout in minutes. For example, to change the
timeout to two minutes, add the following line:
10. conf_load_balancing_load.balancing.driver.proxy.read.timeout=2m
11. Note:
12. The property conf_load_balancing_load.balancing.driver.nginx.upstream_next_timeout automatically takes the new value set for the property conf_load_balancing_load.balancing.driver.proxy.read.timeout.
13. However, if you want to set a different value for the property conf_load_balancing_load.balancing.driver.nginx.upstream_next_timeout, then you need to add the following line to the /opt/apigee/customer/application/router.properties file:
14. conf_load_balancing_load.balancing.driver.nginx.upstream_next_timeout=122
15. And then, follow the steps outlined below.
16. Save your changes.
17. Ensure this properties file is owned by the apigee user as shown below:
18. chown apigee:apigee /opt/apigee/customer/application/router.properties
19. Restart the Router as shown below:
20. /opt/apigee/apigee-service/bin/apigee-service edge-router restart
21. If you have more than one Router, repeat the above steps on all the Routers.
This section explains how to verify that the I/O timeout has been successfully
modified on the Routers.
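Since the Router is nginx-based, the configured timeout should surface as an nginx proxy_read_timeout directive in the generated configuration. A sketch of a grep-based check (the path to the generated configuration is an assumption and may differ on your installation):

```
grep -r "proxy_read_timeout" /opt/nginx/conf.d/
```

If the change took effect, the matching lines should show your new value (for example, proxy_read_timeout 120;).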
What’s next?
Learn about Configuring I/O Timeout in Message Processor
This document explains how to configure the connection timeout for the Apigee
Edge Message Processors.
The connection timeout represents the time for which the Message Processor waits to
establish connection with the target server. The default value of the connection
timeout property on the Message Processor is 3 seconds. This timeout period is
applicable to the backend servers configured in the target endpoint configuration
and in the ServiceCallout policy of your API proxy.
1. The default value of connection timeout property (3 seconds) has been found to be
largely adequate for most backend servers. You should increase the value of this
property only if you have deemed it necessary after performing all the possible
checks/optimizations on the network and your backend server.
2. Ensure that you choose an appropriate value for the connection timeout property based
on your needs. Don’t set arbitrarily high values for this property.
The following properties control the connection timeout on the Message Processors:
Property name: connect.timeout.millis
Location: API proxy (Target endpoint, ServiceCallout policy)
Description: Specifies the maximum time for which the Message Processor waits to establish a connection with the backend server.

Property name: HTTPClient.connect.timeout.millis
Location: Message Processor
Description: Specifies the maximum time for which the Message Processor waits to establish a connection with the backend server.

You can configure the connection timeout in the following places in the API proxy:
● Target endpoint
● ServiceCallout policy
This section explains how to configure connection timeout in the target endpoint of
your API proxy. The connection timeout can be configured through the property
connect.timeout.millis, which represents the connection timeout value in
milliseconds.
Note: Configuring the connection timeout in the target endpoint of your API proxy overrides the default
value (3 seconds) with the new value and affects only the specific API proxy where it is
configured. The rest of the API Proxies continue to use the default connect timeout value of 3
seconds.
Required permissions: To perform this task, you must have edit access permissions to
the specific API proxy in which you would like to configure the connection timeout.
1. In the Edge UI, select the specific API proxy in which you would like to
configure the new connection timeout value.
2. Select the specific target endpoint that you want to modify.
3. Add the property connect.timeout.millis with an appropriate value
under the <HTTPTargetConnection> element in the TargetEndpoint
configuration.
For example, to change the connection timeout to 5 seconds, add the
following block of code:
<Properties>
  <Property name="connect.timeout.millis">5000</Property>
</Properties>
Note: Configuring the connection timeout in the ServiceCallout policy of the API proxy
overrides the default value (3 seconds) with the new value and affects only the specific
ServiceCallout policy where it is configured. The rest of the ServiceCallout policies and
API Proxies continue to use the default connect timeout value of 3 seconds.
Required permissions: To perform this task, you must have edit access permissions to the specific API
proxy in which you would like to configure the connection timeout for the ServiceCallout
policy.
1. In the Edge UI, select the specific API proxy in which you would like to
configure the new connection timeout value for the ServiceCallout policy.
2. Select the specific ServiceCallout policy that you want to modify.
3. Add the property connect.timeout.millis with an appropriate value under the <HTTPTargetConnection> element in the ServiceCallout configuration.
For example, to change the connection timeout to 5 seconds, add the following block of code:
<Properties>
  <Property name="connect.timeout.millis">5000</Property>
</Properties>
Note: If a change is made at the Message Processor level, it is important to note that it affects
the connection timeout of all the API proxies associated with the organizations and
environments served by the specific Message Processors.
Required permissions: To perform
this task, you must have the following permissions:
● Access to the specific Message Processor, where you would like to configure the
connect timeout.
● Access to create or edit the Message Processor component properties file in the
/opt/apigee/customer/application folder.
5. Add a line in the following format to the properties file, substituting a value for
TIME_IN_MILLISECONDS:
6. conf_http_HTTPClient.connect.timeout.millis=TIME_IN_MILLISECONDS
7. For example, to change the connection timeout on the Message Processor to
5 seconds, add the following line:
8. conf_http_HTTPClient.connect.timeout.millis=5000
9. Save your changes.
10. Ensure the properties file is owned by the apigee user as shown below:
chown apigee:apigee /opt/apigee/customer/application/message-processor.properties
11. Restart the Message Processor as shown below:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart
12. If you have more than one Message Processor, repeat the above steps on all the Message Processors.
3. If the new connection timeout value is successfully set on the Message
Processor, then the above command shows the new value in the
http.properties file.
The sample result from the above command after you have configured
connection timeout to 5 seconds is as follows:
4. /opt/apigee/edge-message-processor/conf/http.properties:HTTPClient.connect.timeout.millis=5000
5. In the example output above, notice that the property
HTTPClient.connect.timeout.millis has been set with the new value
5000 in http.properties. This indicates that the connection timeout is
successfully configured to 5 seconds on the Message Processor.
6. If you still see the old value for the property
HTTPClient.connect.timeout.millis, then verify that you have
followed all the steps outlined in Configuring connection timeout on Message
Processors correctly. If you have missed any step, repeat all the steps again
correctly.
7. If you are still not able to modify the connection timeout, then contact Google
Cloud Apigee Edge Support.
This document explains how to configure the keep alive timeout for the Apigee Edge
Message Processors.
The keep alive timeout on the Message Processor controls how long an idle TCP connection to the backend server is kept open, which allows a single TCP connection to send and receive multiple HTTP requests/responses from/to the backend server, instead of opening a new connection for every request/response pair.
The default value of the keep alive timeout property on the Message Processor is 60
seconds. This timeout period is applicable to the backend servers configured in the
target endpoint configuration and in the ServiceCallout policy of your API proxy.
The keep alive timeout for Message Processors can be increased or decreased from
the default value of 60 seconds based on your needs. It can be configured in the
following ways:
The following properties control the keep alive timeout on the Message Processors:
Property name: keepalive.timeout.millis
Location: API proxy (Target endpoint, ServiceCallout policy)
Description: This is the maximum idle time for which the Message Processor allows a single TCP connection to send and receive multiple HTTP requests/responses, instead of opening a new connection for every request/response pair.

Property name: HTTPClient.keepalive.timeout.millis
Location: Message Processor
Description: This is the maximum idle time for which the Message Processor allows a single TCP connection to send and receive multiple HTTP requests/responses, instead of opening a new connection for every request/response pair.
● Target endpoint
● ServiceCallout policy
This section explains how to configure keep alive timeout in the target endpoint of
your API proxy. The keep alive timeout can be configured through the property
keepalive.timeout.millis, which represents the keep alive timeout value in
milliseconds.
Note: Configuring the keep alive timeout in the target endpoint of your API proxy overrides the
default value (60 seconds) with the new value and affects only the specific API proxy where it is
configured. The rest of the API Proxies continue to use the default keep alive timeout value of 60
seconds.
Required permissions: To perform this task, you must have edit access permissions to
the specific API proxy in which you would like to configure the keep alive timeout.
1. In the Edge UI, select the specific API proxy in which you would like to
configure the new keep alive timeout value.
2. Select the specific target endpoint that you want to modify.
3. Add the property keepalive.timeout.millis with an appropriate value
under the <HTTPTargetConnection> element in the TargetEndpoint
configuration.
For example, to change the keep alive Timeout to 30 seconds, add the
following block of code:
<Properties>
  <Property name="keepalive.timeout.millis">30000</Property>
</Properties>
This section explains how to configure the keep alive timeout in the
ServiceCallout policy of your API proxy. The keep alive timeout can be
configured through the keepalive.timeout.millis property, which
represents the keep alive timeout value in milliseconds.
Note: Configuring the keep alive timeout in the ServiceCallout policy of the API proxy
overrides the default value (60 seconds) with the new value and affects only the specific
ServiceCallout policy where it is configured. The rest of the ServiceCallout policies and
API Proxies continue to use the default keep alive timeout value of 60 seconds.
Required permissions: To perform this task, you must have edit access permissions to the specific API
proxy in which you would like to configure the keep alive timeout for the ServiceCallout
policy.
To configure the keep alive timeout in the ServiceCallout policy using the
keepalive.timeout.millis property:
1. In the Edge UI, select the specific API proxy in which you would like to
configure the new keep alive timeout value for the ServiceCallout policy.
2. Select the specific ServiceCallout policy that you want to modify.
3. Add the property keepalive.timeout.millis with an appropriate value
under the <HTTPTargetConnection> element in the ServiceCallout policy
configuration.
For example, to change the keep alive timeout to 30 seconds, add the
following block of code:
<Properties>
  <Property name="keepalive.timeout.millis">30000</Property>
</Properties>
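Put together, a ServiceCallout policy with the keep alive timeout set might look like the following sketch; the policy name, request variable, response variable, and URL are placeholders:

```xml
<ServiceCallout name="Call-Backend">
  <Request variable="myRequest"/>
  <Response>calloutResponse</Response>
  <HTTPTargetConnection>
    <Properties>
      <!-- Keep alive timeout of 30 seconds for this callout only -->
      <Property name="keepalive.timeout.millis">30000</Property>
    </Properties>
    <URL>https://backend.example.com/service</URL>
  </HTTPTargetConnection>
</ServiceCallout>
```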
● Access to the specific Message Processor, where you would like to configure the keep
alive timeout.
● Access to create or edit the Message Processor component properties file in the
/opt/apigee/customer/application folder.
To configure the keep alive timeout on the Message Processors, do the following:
1. On the Message Processor machine, open the following file in an editor. If it
does not already exist, then create it:
/opt/apigee/customer/application/message-processor.properties
2. Add a line in the following format to the properties file, substituting a value for
TIME_IN_MILLISECONDS:
conf/http.properties+HTTPClient.keepalive.timeout.millis=TIME_IN_MILLISECONDS
3. For example, to change the keep alive timeout on the Message Processor to
30 seconds, add the following line:
conf/http.properties+HTTPClient.keepalive.timeout.millis=30000
4. Save your changes.
5. Ensure the properties file is owned by the apigee user as shown below:
chown apigee:apigee /opt/apigee/customer/application/message-processor.properties
6. Restart the Message Processor as shown below:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart
7. If you have more than one Message Processor, repeat the above steps on all
the Message Processors.
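The edit-and-verify portion of these steps can be sketched in shell as follows. The real properties file is /opt/apigee/customer/application/message-processor.properties; the sketch defaults to a temporary file so it can be run anywhere, and the privileged commands are shown as comments:

```shell
# Sketch of the Message Processor keep alive configuration steps. The real file
# is /opt/apigee/customer/application/message-processor.properties; a temporary
# file is used by default so this sketch can run on any machine.
PROPS_FILE="${PROPS_FILE:-$(mktemp)}"

# Add the keep alive property (30 seconds in this example).
echo "conf/http.properties+HTTPClient.keepalive.timeout.millis=30000" >> "$PROPS_FILE"

# On the actual node you would then fix ownership and restart the component:
# chown apigee:apigee /opt/apigee/customer/application/message-processor.properties
# /opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

# Confirm the property line is present.
grep "HTTPClient.keepalive.timeout.millis" "$PROPS_FILE"
```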
1. Search for the new property value in the Message Processor configuration by
running the following command:
grep -ri "keepalive.timeout.millis" /opt/apigee/edge-message-processor/conf
2. If the new keep alive timeout value is successfully set on the Message
Processor, then the above command shows the new value in the
http.properties file.
The sample result from the above command after you have configured the keep
alive timeout to 30 seconds is as follows:
/opt/apigee/edge-message-processor/conf/http.properties:HTTPClient.keepalive.timeout.millis=30000
3. In the example output above, notice that the property
HTTPClient.keepalive.timeout.millis has been set with the new
value 30000 in http.properties. This indicates that the keep alive timeout
is successfully configured to 30 seconds on the Message Processor.
4. If you still see the old value for the property
HTTPClient.keepalive.timeout.millis, then verify that you have
followed all the steps outlined in Configuring keep alive timeout on Message
Processors correctly. If you have missed any step, repeat all the steps again
correctly.
5. If you are still not able to modify the keep alive timeout, then contact Google
Cloud Apigee Edge Support.
This document explains how to convert a TLS certificate and its associated private
key to the PEM or the PFX (PKCS #12) format.
Apigee Edge supports storing only PEM or PFX format certificates in keystores and
truststores. The steps used to convert certificates from any existing format to PEM
or PFX formats rely on the OpenSSL toolkit, and are applicable on any environment
where OpenSSL is available.
Before you use the steps in this document, be sure you understand the following
topics:
● If you aren’t familiar with the PEM or PFX format, read About TLS/SSL.
● If you aren’t familiar with certificate formats, read SSL Certificate formats.
● If you aren’t familiar with the OpenSSL library, read OpenSSL.
● If you want to use the command-line examples in this guide, install or update
to the latest version of the OpenSSL client.
This section describes how to convert a certificate and associated private key from
the DER format to the PEM format.
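A minimal sketch of the DER-to-PEM conversion with the OpenSSL client follows. The file names are hypothetical, and a throwaway self-signed certificate is generated first so the commands can be run anywhere:

```shell
# Generate a throwaway self-signed certificate and key in PEM format (hypothetical names).
openssl req -x509 -newkey rsa:2048 -nodes -keyout privateKey.pem -out certificate.pem \
  -days 1 -subj "/CN=sketch.example"

# Convert the certificate from PEM to DER (to simulate starting from a DER file) ...
openssl x509 -in certificate.pem -outform der -out certificate.der
# ... and then convert DER back to PEM, which is the conversion this section describes:
openssl x509 -inform der -in certificate.der -out converted.pem

# The private key can be converted the same way:
openssl rsa -in privateKey.pem -outform der -out privateKey.der
openssl rsa -inform der -in privateKey.der -out convertedKey.pem
```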
This section describes how to convert certificates from the P7B format to the PEM
format.
Note: A P7B file contains only certificates and chain certificates (Intermediate CAs), not the
private key.
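The conversion can be sketched as follows; certificate.p7b is a hypothetical file, built here from a throwaway self-signed certificate so the commands are runnable:

```shell
# Build a throwaway P7B file from a self-signed certificate (hypothetical names);
# a P7B carries certificates only, never the private key.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=p7b.example"
openssl crl2pkcs7 -nocrl -certfile cert.pem -out certificate.p7b

# Convert the P7B file to PEM:
openssl pkcs7 -print_certs -in certificate.p7b -out certificate.pem
```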
This section describes how to convert TLS certificates from the PFX format to the
PEM format.
When converting a PFX file to PEM format, OpenSSL puts all the certificates and the
private key into a single file. You will need to open the file in a text editor and copy
each certificate and private key (including the BEGIN/END statements) to individual
text files and save them as certificate.pem, intermediate.pem (if
applicable), CACert.pem, and privateKey.key respectively.
Apigee does support the PFX/PKCS #12 format; however, the PEM format is
convenient for many reasons including validation.
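A sketch of the PFX-to-PEM conversion; the file names and the changeit password are hypothetical, and a throwaway PFX file is built first so the commands are runnable:

```shell
# Build a throwaway PFX file from a self-signed certificate and key (hypothetical names).
openssl req -x509 -newkey rsa:2048 -nodes -keyout privateKey.key -out certificate.crt \
  -days 1 -subj "/CN=pfx.example"
openssl pkcs12 -export -in certificate.crt -inkey privateKey.key \
  -out certificate.pfx -passout pass:changeit

# Convert the PFX file to PEM; -nodes leaves the private key unencrypted in the output.
openssl pkcs12 -in certificate.pfx -out keystore.pem -nodes -passin pass:changeit
```

The resulting keystore.pem is the single combined file described above, which you then split into separate certificate and key files.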
This section describes how to convert TLS certificates from the P7B format to the
PFX format.
To convert to the PFX format, you need to get the private key as well.
Note: A P7B file contains all the certificates and chain certificates (Intermediate CAs) in a single
file.
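A sketch of the P7B-to-PFX conversion; file names and the changeit password are hypothetical, and a throwaway certificate, key, and P7B file are generated first so the commands are runnable:

```shell
# Start from a throwaway certificate, key, and P7B file (hypothetical names).
openssl req -x509 -newkey rsa:2048 -nodes -keyout privateKey.key -out cert.pem \
  -days 1 -subj "/CN=p7b2pfx.example"
openssl crl2pkcs7 -nocrl -certfile cert.pem -out certificate.p7b

# First extract the certificates from the P7B file to PEM ...
openssl pkcs7 -print_certs -in certificate.p7b -out certificate.cer
# ... then combine them with the private key into a PFX file:
openssl pkcs12 -export -in certificate.cer -inkey privateKey.key \
  -out certificate.pfx -passout pass:changeit
```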
This section describes how to verify that the certificate is in PEM format.
1. To view the certificate that is in PEM format, run the following command:
openssl x509 -in certificate.pem -text -noout
2. If you are able to view the contents of the certificate in a human-readable
format without any errors, then you can confirm that the certificate is in PEM
format.
3. If the certificate is in any other format, then you will see errors like the
following:
unable to load certificate
12626:error:0906D06C:PEM routines:PEM_read_bio:no start
line:pem_lib.c:647:Expecting: TRUSTED CERTIFICATE
This document explains how to validate a certificate chain before you upload the
certificate to a keystore or a truststore in Apigee Edge. The process relies on the
OpenSSL toolkit to validate the certificate chain and is applicable on any
environment where OpenSSL is available.
Before validating the certificate, you need to split the certificate chain into separate
certificates using the following steps:
This section describes how to get the subject and issuer of the certificates and verify
that you have a valid certificate chain.
1. Run the following OpenSSL command to get the Subject and Issuer for
each certificate in the chain from entity to root and verify that they form a
proper certificate chain:
openssl x509 -text -in certificate | grep -E '(Subject|Issuer):'
Where certificate is the name of the certificate.
2. Verify that the certificates in the chain adhere to the following guidelines:
● Subject of each certificate matches the Issuer of the preceding
certificate in the chain (except for the Entity certificate).
● Subject and Issuer are the same for the root certificate.
3. If the certificates in the chain adhere to these guidelines, then the certificate
chain is considered to be complete and valid.
Sample certificate chain validation
The following example is the output of the OpenSSL commands for a sample
certificate chain containing three certificates:
Entity certificate
openssl x509 -text -in entity.pem | grep -E '(Subject|Issuer):'
Issuer: C = US, O = Google Trust Services, CN = GTS CA 1O1
Subject: C = US, ST = California, L = Mountain View, O = Google LLC, CN =
*.enterprise.apigee.com
This section explains how to get the hash of the subject and issuer of the certificates
and verify that you have a valid certificate chain.
It is always a good practice to verify the hash sequence of certificates as it can help
in identifying issues such as the Common Name (CN) of the certificate having
unwanted space or special characters.
1. Run the following OpenSSL command to get the hash sequence for each
certificate in the chain from entity to root and verify that they form a
proper certificate chain:
openssl x509 -hash -issuer_hash -noout -in certificate
Where certificate is the name of the certificate.
2. Verify that the certificates in the chain adhere to the following guidelines:
● Subject hash of each certificate matches the issuer hash of the
preceding certificate in the chain (except for the Entity certificate).
● Subject hash and issuer hash are the same for the root certificate.
3. If the certificates in the chain adhere to these guidelines, then the certificate
chain is considered to be complete and valid.
Sample certificate chain validation through hash sequence
The following example is the output of the OpenSSL commands for a sample
certificate chain containing three certificates:
openssl x509 -in entity.pem -hash -issuer_hash -noout
c54c66ba # subject hash
99bdd351 # issuer hash

openssl x509 -in intermediate.pem -hash -issuer_hash -noout
99bdd351
4a6481c9

openssl x509 -in root.pem -hash -issuer_hash -noout
4a6481c9
4a6481c9

In the example shown above, notice the following:
● The subject hash of the intermediate certificate matches the
issuer hash of the entity certificate.
● The subject hash of the root certificate matches the issuer hash
of the intermediate certificate.
● The subject and issuer hash are the same in the root certificate.
From the example above, you can confirm that the sample certificate chain is
valid.
This section explains how to verify whether or not all the certificates in the chain are
expired using one of the following methods:
Run the following OpenSSL command to get the start and end date for each
certificate in the chain from entity to root and verify that all the certificates in the
chain are in force (start date is before today) and are not expired.
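The validity window can be read with the standard OpenSSL date options; the following sketch uses a throwaway certificate with a hypothetical file name:

```shell
# Generate a throwaway certificate (hypothetical name) so the check is runnable.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out certificate.pem \
  -days 1 -subj "/CN=dates.example"

# Show the start (notBefore) and end (notAfter) dates for the certificate.
openssl x509 -startdate -enddate -noout -in certificate.pem
```

Repeat the second command for each certificate in the chain, from entity to root.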
This document explains how to validate a certificate’s purpose before you upload the
certificate to a keystore or a truststore. The process relies on OpenSSL for validation
and is applicable on any environment where OpenSSL is available.
TLS certificates are generally issued with one or more purposes for which they
can be used. Typically, this is done to restrict the operations for which the
public key contained in the certificate can be used. The purpose of the certificate is
available in the following certificate extensions:
● Key usage
● Extended key usage
Key usage
The key usage extension defines the purpose (for example, encipherment, signature,
or certificate signing) of the key contained in the certificate. If the public key is used
for entity authentication, then the certificate extension should have the key usage
Digital signature.
The different key usage extensions available for a TLS certificate created using the
Certificate Authority (CA) process are as follows:
● Digital signature
● Non-repudiation
● Key encipherment
● Data encipherment
● Key agreement
● Certificate signing
● CRL signing
● Encipher only
● Decipher only
For more information on these key usage extensions, see RFC5280, Key Usage.
Extended key usage
This extension indicates one or more purposes for which the certified public key may
be used, in addition to or in place of the basic purposes indicated in the key usage
extension. In general, this extension will appear only in end entity certificates.
The criticality of the extended key usage extension determines how it must be
interpreted:
● If the extension is critical, the certificate must be used only for the indicated
purpose or purposes. If the certificate is used for another purpose, it is in
violation of the CA's policy.
● If the extension is non-critical, it indicates the intended purpose or purposes of
the key is informational and does not imply that the CA restricts use of the key
to the purpose indicated. However, applications that use certificates may
require that a particular purpose be indicated in order for the certificate to be
acceptable.
If a certificate contains both the key usage field and the extended key usage field as
critical then both fields must be processed independently, and the certificate can be
used only for a purpose that satisfies both key usage values. However, if there is no
purpose that can satisfy both key usage values, then that certificate must not be
used for any purpose.
When you procure a certificate, ensure that it has the proper key usage defined to
satisfy the requirements for client or server certificates without which the TLS
handshake would fail.
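One way to inspect the key usage and extended key usage of a certificate is shown in the following sketch. The certificate is a throwaway self-signed one with hypothetical extension values, and the -addext option requires OpenSSL 1.1.1 or later:

```shell
# Generate a throwaway certificate (hypothetical names and extension values)
# so the inspection commands below are runnable anywhere.
openssl req -x509 -newkey rsa:2048 -nodes -keyout tls.key -out tls.pem \
  -days 1 -subj "/CN=usage.example" \
  -addext "keyUsage=digitalSignature,keyEncipherment" \
  -addext "extendedKeyUsage=serverAuth,clientAuth"

# Print the key usage and extended key usage extensions of the certificate.
openssl x509 -in tls.pem -noout -text | grep -A1 -E "Key Usage"
```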
Recommended key usage and extended key usages for certificates used in
Apigee Edge
Purpose    Key usage (mandatory)    Extended key usage (optional)
This document explains how to verify that the correct client certificates have been
uploaded to Apigee Edge Routers. The process of validating certificates relies on
OpenSSL, which is the underlying mechanism used by NGINX on Apigee Edge
Routers.
Any mismatch in the certificates sent by the client applications as part of the API
request and the certificates stored on Apigee Edge Routers will lead to 400 Bad
Request - SSL Certificate errors. Validating the certificates using the process
described in this document can help you to proactively detect these issues and
prevent any certificate errors at runtime.
Note: This is applicable only if you have configured two-way mutual TLS between the client
application and Apigee Edge.
Note: This document is applicable for Edge Public and Private Cloud users.
This document explains how to configure cipher suites on virtual hosts and Routers
in Apigee Edge.
A cipher suite is a set of algorithms that help secure a network connection that
uses TLS. The client and the server must agree on the specific cipher suite that is
going to be used in exchanging messages. If the client and server do not agree on a
cipher suite, then the requests fail with TLS Handshake failures.
In Apigee, cipher suites should be mutually agreed upon between the client
applications and Routers.
You may want to modify the cipher suites on Apigee Edge for the following reasons:
● To avoid any cipher suites mismatch between client applications and Apigee
Routers
● To use more secure cipher suites either to fix any security vulnerabilities or
for enhanced security
Cipher suites can be configured either on the virtual hosts or Apigee Routers. Note
that Apigee accepts cipher suites only in the OpenSSL cipher strings format at both
the virtual host and the Router. The OpenSSL ciphers manpage provides the SSL or
TLS cipher suites from the relevant specification and their OpenSSL equivalents.
Required permissions: To perform this task, you must have one of the following permissions:
You can configure the virtual host using one of the following methods:
USING EDGE UI
USING EDGE API
Note: The virtual host cannot be edited using classic UI. If you don't have access to the new UI,
use the Edge API. For more information on the differences, see Transition from Classic Edge
UI.
To configure the virtual host using the Edge UI, do the following:
Cipher Suite                              OpenSSL cipher string
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256       DHE-RSA-AES128-GCM-SHA256
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256     ECDHE-RSA-AES128-GCM-SHA256
6. Add the OpenSSL cipher string with colon separation as shown in the
following figure:
Note: Use this procedure to verify that the cipher suites have been successfully modified on the
virtual host whether you used the Edge UI or the Edge API to configure the cipher suites.
1. Execute the Get virtual host API to get the virtual host configuration as
shown below:
Public Cloud user:
curl -v -X GET https://api.enterprise.apigee.com/v1/organizations/{organization-name}/environments/{environment-name}/virtualhosts/{virtualhost-name} -u {username}
Private Cloud user:
curl -v -X GET http://{management_server_IP}:8080/v1/organizations/{organization-name}/environments/{environment-name}/virtualhosts/{virtualhost-name} -u {username}
2. Verify that the property ssl_ciphers has been set to the new value.
Sample updated virtual host configuration:
{
  "hostAliases": [
    "api.myCompany.com"
  ],
  "interfaces": [],
  "listenOptions": [],
  "name": "secure",
  "port": "443",
  "retryOptions": [],
  "properties": {
    "property": [
      {
        "name": "ssl_ciphers",
        "value": "DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256"
      }
    ]
  },
  "sSLInfo": {
    "ciphers": [],
    "clientAuthEnabled": "false",
    "enabled": "true",
    "ignoreValidationErrors": false,
    "keyAlias": "myCompanyKeyAlias",
    "keyStore": "ref://myCompanyKeystoreref",
    "protocols": []
  },
  "useBuiltInFreeTrialCert": false
}
3. In the example above, note that ssl_ciphers has been set with the new
value.
4. If you still see the old value for ssl_ciphers, then verify that you have
followed all the steps outlined in Configuring cipher suites on virtual hosts
correctly. If you have missed any step, repeat all the steps again correctly.
5. If you are still not able to update or add cipher suites to the virtual host, then
contact Apigee Edge Support.
This section explains how to configure cipher suites on the Routers. Cipher suites
can be configured through the Router property
conf_load_balancing_load.balancing.driver.server.ssl.ciphers,
which represents the colon-separated accepted cipher suites.
Note:
1. Since this change is made at the Router level, it is important to note that it affects all
the virtual hosts associated with the organizations served by the specific Routers.
2. If you specify one or more invalid cipher suite names or cipher strings along with at
least one valid OpenSSL cipher string, then Apigee Edge will consider only the valid
OpenSSL cipher strings and ignore the rest. You will not see any errors or issues in this case.
3. Only the supported OpenSSL Cipher strings corresponding to the protocol version will
be accepted while the rest of them will be ignored. For example: If TLSv1.2 protocol is
used, then the old ciphers that don't support TLSv1.2 will be ignored.
4. Ensure you are using all the valid OpenSSL cipher strings. If all the ciphers in the list are
invalid, then this change may cause all the secure virtual hosts to go down and may
result in runtime impact.
Required permissions: To perform this task, you must have the following permissions:
● Access to the specific Router machine where you would like to configure the Cipher
suites
● Access to create and edit the Router component properties file in the
/opt/apigee/customer/application folder
1. On the Router machine, open the following file in an editor. If it does not
already exist, then create it:
/opt/apigee/customer/application/router.properties
For example, to open the file with vi, enter the following:
vi /opt/apigee/customer/application/router.properties
2. Add a line in the following format to the properties file, substituting a
value for colon_separated_cipher_suites:
conf_load_balancing_load.balancing.driver.server.ssl.ciphers=colon_separated_cipher_suites
3. For example, if you want to allow only the cipher suites
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 and
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, then determine the
corresponding OpenSSL cipher strings from the OpenSSL ciphers manpage
as shown in the following table:
Cipher Suite                              OpenSSL cipher string
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256       DHE-RSA-AES128-GCM-SHA256
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256     ECDHE-RSA-AES128-GCM-SHA256
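Before adding a cipher string to the properties file, you can sanity-check it against the local OpenSSL client; the exact output columns vary by OpenSSL version:

```shell
# Expand the colon-separated OpenSSL cipher string; the command fails
# if no supported cipher matches the string.
openssl ciphers -v 'DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256'
```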
This section explains how to verify that the cipher suites have been successfully
modified on the Routers.
Note: The instructions in this document are applicable for Edge Private Cloud users only.
This document explains how to restart Routers and Message Processors (MPs)
without impacting incoming API traffic. You may have to restart the Routers and
MPs under certain circumstances. Some examples are as follows:
● When a keystore that is directly referred to in the virtual host, target server
or target endpoint is updated without using references.
● When API proxies are partially deployed on a few MPs.
Required permissions: To perform this task, you must have permission to access the specific
Router machines where you would like to restart the Router process.
Required permissions: To perform this task, you must have permission to access the specific
Message Processor machines where you would like to restart the Message Processor process.
Where port # is the port number returned from the command performed in step 2.
● Red Hat Enterprise Linux (RHEL): RHEL 6 to RHEL 7 and RHEL 7 to RHEL 8
● Amazon Web Services Linux: AWS Linux 1 to AWS Linux 2
See Edge for Private Cloud supported versions for information on supported
versions of these operating systems.
Note: Apigee does not support an in-place operating system (OS) upgrade. An in-place OS
upgrade may cause Apigee Edge for Private Cloud to stop working due to package and library
incompatibilities.
To upgrade your operating system (OS), you need to install Edge for Private Cloud
in a new data center containing the new version of OS, and then decommission the
old data center, as follows:
1. Set up a new Apigee data center with servers containing the new version of
the OS. See Adding a data center.
2. Install Edge for Private Cloud on the new servers, using a silent configuration
file that expands on the existing installation.
Note: If you are upgrading to RHEL 8, see Prerequisites for RHEL 8.
3. Add the servers to an existing cluster.
4. Finally, decommission the old data center, as explained in Decommissioning
a data center.
● edge-gateway
● edge-management-server
● edge-message-processor
● edge-postgres-server
● edge-qpid-server
● edge-router
● nginx
The following sections describe how to check whether you need to downgrade, and
how to downgrade Apigee components, if necessary.
● Less than 20114, you don't need to take any further action.
● Equal to 20114, you need to downgrade Apigee components and downgrade
NGINX.
● Greater than 20114, find your NGINX version by entering the following:
● -- sudo yum list installed apigee-nginx
● Here is some sample output from the command:
Installed Packages
apigee-nginx.x86_64 1.18.0-1.el7
● @apigee-thirdparty
● If the NGINX version is apigee-nginx.x86_64 1.18.0-XXX, you only
need to downgrade NGINX.
Components to downgrade
If you have installed any of the RPMs on the following lists, you need to
downgrade to the previous version of these RPMs.
Components to downgrade for Edge for Private Cloud 4.50.00
edge-gateway-4.50.00-0.0.20113.noarch.rpm
edge-management-server-4.50.00-0.0.20113.noarch.rpm
edge-message-processor-4.50.00-0.0.20113.noarch.rpm
edge-postgres-server-4.50.00-0.0.20113.noarch.rpm
edge-qpid-server-4.50.00-0.0.20113.noarch.rpm
● edge-router-4.50.00-0.0.20113.noarch.rpm
● Components to downgrade for Edge for Private Cloud 4.19.06
edge-gateway-4.19.06-0.0.20114.noarch.rpm
edge-management-server-4.19.06-0.0.20114.noarch.rpm
edge-message-processor-4.19.06-0.0.20114.noarch.rpm
edge-postgres-server-4.19.06-0.0.20114.noarch.rpm
edge-qpid-server-4.19.06-0.0.20114.noarch.rpm
● edge-router-4.19.06-0.0.20114.noarch.rpm
● To check whether these RPMs are installed, on each node where any of the
components in the appropriate list above are installed, enter the following
command for each component:
● -- apigee-service component version
● Downgrade Apigee components
To downgrade Apigee components, use the following procedure.
On each node that has any of the following components installed:
1. edge-gateway
2. edge-management-server
3. edge-message-processor
4. edge-postgres-server
5. edge-qpid-server
6. edge-router
● Stop the component by entering
● --apigee-service component stop
● Then downgrade the components:
● -- sudo yum downgrade
● Here are some examples:
If edge-gateway and edge-message-processor are installed:
-- sudo yum downgrade edge-gateway edge-message-processor
If edge-gateway and edge-router are installed:
-- sudo yum downgrade edge-gateway edge-router
If it is an AIO (all-in-one) setup:
-- sudo yum downgrade edge-gateway edge-postgres-server edge-router
edge-management-server edge-message-processor edge-qpid-server
● Once you are done downgrading, run configure for each component and
restart it:
-- apigee-service component configure
-- apigee-service component restart
● edge-router-4.50.00-0.0.20110
● Edge for Private Cloud 4.19.06
edge-gateway-4.19.06-0.0.20112
edge-management-server-4.19.06-0.0.20112
edge-message-processor-4.19.06-0.0.20112
edge-postgres-server-4.19.06-0.0.20112
edge-qpid-server-4.19.06-0.0.20112
● edge-router-4.19.06-0.0.20112
● Downgrade NGINX
To downgrade apigee-nginx, do the following steps for the Edge Router, one
node at a time:
1. Stop the Router:
apigee-service edge-router stop
2. Downgrade apigee-nginx:
sudo yum downgrade apigee-nginx
Expected apigee-nginx version after downgrade:
apigee-nginx.x86_64 1.16.1-6.el7
3. Configure the Router:
apigee-service edge-router configure
4. Start the Router:
apigee-service edge-router start
Apigee Support
Apigee Support is here to help. Learn about and access the self-service resources
that will help you diagnose and resolve common problems and optimally use the
Apigee platform.
Tools
Tools to help you monitor, troubleshoot and resolve runtime errors, latency and
performance issues with Apigee Edge
Troubleshooting
Documentation and videos to help you troubleshoot and resolve problems with
Apigee Edge
Apigee Edge playbooks, Apigee Adaptor for Envoy playbooks, and Apigee Edge
Microgateway playbooks
Troubleshoot and resolve errors and problems encountered with Apigee Edge,
Apigee Adaptor for Envoy, and Apigee Edge Microgateway.
Explore documentation, installation guides, proxy samples, tools, and tutorials
about Apigee. View the changelog of our latest Apigee release. View the list of
current known issues.
The following documents explain how to access and use the Apigee Support Portal,
perform common Support procedures and activities such as creating and managing
cases, escalating cases, and best practices to be followed while creating cases.
Note: This topic applies to existing Apigee Edge Subscription customers with a paid Apigee
support plan.
If you are a new Google Cloud Apigee X customer, see Getting started with Apigee X Support.
If you have a free trial Apigee eval account, you can post questions and find answers in the
Apigee community, where Apigee engineers are able to respond.
Document Description
● Developer
● Starter
● Basic
● Enterprise
● Mission Critical
To be able to create and manage support cases, you need to have access to a
support portal. The documents in this section describe how to access the Apigee
Edge Support Portal.
If you do not have access to the Apigee Edge Support Portal, you are probably an
Apigee X customer and should use the Apigee X Support Portal.
Apigee provides a separate login account for the Apigee Support Portal. Generally, when
you are onboarded to Apigee, one of your users is granted the Support Portal admin
role. The Support Portal admin can add additional users to the Apigee Support Portal
as needed.
Use one of the following Roles to define the structure of your company. Roles grant
no additional features in the support portal.
● Customer Executive
● Customer Manager
● Customer User
The Profile greatly influences how you interact with and work inside the support
portal. Descriptions for each of the available Profiles are as follows:
If you are an Apigee Support Portal admin of your company, then you can add more
users according to your company's needs using the following steps:
4. The user you want to grant Apigee Support Portal access to must exist as a
contact.
5. If the user does not exist, click New Contact. You will be redirected to enter
the user's information.
6. From the New Contact page, enter the required information: First Name, Last
Name, Title, Phone number, and Email address. All other information is
optional.
7. In addition to the mandatory fields, ensure that you choose a value for
Preferred region timezone for support.
8. After filling in all the details, click Save. You have now created the new user's
Contact details.
9. If the user already exists, or after you have created the new user, click their
name to display the Edit Contact window.
10. Click Enable Customer User. This grants the ability for the user to log in to
the Support Portal.
Note: Admin privileges enable the user to create contacts and enable other contacts as
support portal users.
13. Click Save. The contact will receive an email from Apigee with their login
details.
Note: Users cannot be removed from the Support Portal account. They can only be
disabled. Disabling the user completely revokes their ability to log in and see any
information in your company's support portal.
● Developer
● Starter
● Basic
● Enterprise
● Mission Critical
To be able to create or manage Google Cloud Apigee Support Cases, you will need
to have access to Apigee Support Portal.
Users must have one of the following roles to create or modify support cases:
Creating Cases
8. Click Continue.
9. Enter or select the following details:
● I need help with: Select the value that best suits the issue that you are
observing.
Note:
For information about the different types of issues and guidance on choosing
the right value that best explains the issue for which you need Support team's
assistance, see I need help with.
This field contains information about the different types of issues that may be
encountered with Apigee products or other requirements that you may have,
such as Service requests and Query. Choose the value that best matches why
you are reaching out to Apigee Support.
● Description: Include the following information:
● Precise information about the problem being observed
● Error message
● Details about your product setup
● Diagnostic logs and information
● Steps to reproduce
10. Note: Refer to Support Case Best Practices for guidelines about the key information and
artifacts that need to be provided along with examples.
Sample Case Details
11. Click Submit.
After you submit, you will be redirected to the Case page where you can comment
on the case, upload file attachments, or modify case attributes. The Support team
will respond to the case based on its priority and your Support plan entitlements.
Viewing Cases
4. To filter the list of cases, use Search or the options from the View menu.
5. Click the specific Case Number or Subject from the list on the Cases page to
select it.
Note: Depending on your Support Portal Account profile, you will be able to either view and
manage all the cases created only by you or your company's cases in the Cases page. You can
find out the current status and updates from the Apigee Support team (if any) on the active
cases.
Updating Cases
1. Click the specific Case Number or Subject from the list on the Cases page in
Support Portal to select it.
2. If the case is open, you can add comments, upload file attachments, or edit
case attributes.
Note: If the case was closed less than 30 days ago, you can reopen it.
Otherwise, to reactivate a closed case, you need to create a new support case.
3. Once the specific case page is open, click Add Comment to add the
comments and respond back to any questions or information requested by
the Apigee Support team.
4. Click Save after adding the comments.
Attach a file
Sometimes the Apigee Support team may ask you to share artifacts such as
screenshots, tcpdumps, and component diagnostic logs which contain more
information about the issue. You can do this by attaching a file or a Google doc.
To attach a file:
2. Click Choose File and select the file you would like to attach.
3. Click Attach File for each file added.
4. Repeat the above steps to attach multiple files.
5. Once you have added all the files, click Done.
Note:
● Skipping these steps will result in your files not being correctly attached.
● The maximum size of each file that you can attach is 25 MB. If you want to
attach files larger than 25 MB, then the Support team can provide you with a
Google Drive link where you can add more files. If this is needed, you will need
to request this in your communications with the team in the Support case.
Reference
Case Priority
When creating a support case, it's important to assign it the correct priority. As per
the Apigee Support Specification, the case priority determines the initial response
time for the case.
Note: Response times vary by issue priority and which Support plan you have. For response
times, refer to the Apigee Support Specification.
P4 | No | Enhancement request
Impact
When creating a support case, select a value in I need help with that best describes
the issue you are observing, or the assistance required from the Apigee Support
team. The following table describes the different types of issues and their meanings:
Issue | Description
Data Transfer Request | You need data associated with your org from the Apigee datastore (Analytics/Cassandra).
Deployment Error | You are having issues while deploying your API proxy and/or received an error message about partial deployment.
Case status
Once a Support Case is created, you can view the status of the case. The following
table describes the various statuses and their meaning:
Status Description
Escalating cases
Escalate an existing case
If you have a support case that has the appropriate priority but is not meeting your
expectations, you can escalate the case. Escalation is a process to get higher
attention for an ongoing Google Cloud Apigee Support case.
When a support case is escalated, the Apigee Support team is immediately notified
and will update the case accordingly. You can find out more in the document titled
Support Escalation Process on the Apigee Support Portal.
Note: A support case can be escalated only if you have not received a response within the
initial response time for its priority.
Best practices for Google Cloud Apigee
support cases
Providing detailed and required information in the support case makes it easier for
the Google Cloud Apigee Support team to respond to you quickly and efficiently.
When your support case is missing critical details, we need to ask for more
information, which may involve going back and forth multiple times. This takes
more time and can lead to delays in resolution of issues. This Best Practices Guide
lets you know the information we need to resolve your technical support case
faster.
An issue should contain information explaining the details about what happened
versus what was expected to happen, as well as when and how it happened. A good
Apigee support case should contain the following key information for each of the
Apigee products:
Note: For more information about the different Apigee products, see Compare Apigee products.
Product
There are different Apigee products, Apigee Edge on Public Cloud and Apigee Edge
on Private Cloud, so we need specific information about which particular product is
having the issue.
The following table provides some examples showing complete information in the
DOs column, and incomplete information in the DON'Ts column:
DOs DON'Ts
Problem Details
Provide the precise information about the issue being observed including the error
message (if any) and expected and actual behaviour observed.
The following table provides some examples showing complete information in the
DOs column, and incomplete information in the DON'Ts column:
DO:
{"error":"missing_authorization","error_description":"Missing Authorization header"}
DON'T:
(The proxy name is unknown. It is not clear whether the proxy is returning an error
or any unexpected response.)

DO:
Our clients are getting 500 errors with the following error message while making
requests to the API proxy:
{"fault":{"faultstring":"Execution of JSReadResponse failed with error:
Javascript runtime error: \"TypeError: Cannot read property \"content\" from
undefined. (JSReadResponse.js:23)\"","detail":
{"errorcode":"steps.javascript.ScriptExecutionFailed"}}}
DON'T:
Our clients are getting 500 Errors while making requests to the API proxy.
(Just conveying 500 Errors doesn't provide adequate information for us to
investigate the issue. We need to know the actual error message and error code
that is being observed.)
Time
Time is a very critical piece of information. It is important for the Support Engineer
to know when you first noticed this issue, how long it lasted, and if the issue is still
going on.
The Support Engineer resolving the issue may not be in your timezone, so relative
statements about time make the problem harder to diagnose. Hence, it is
recommended to use the ISO 8601 format for the date and time stamp to provide
the exact time information on when the issue was observed.
The following table provides some examples showing accurate time and duration
for which the problem occurred in the DOs column, and ambiguous or unclear
information on when the problem occurred in the DON'Ts column:
DOs DON'Ts
Setup
We need to know the details about where exactly you are seeing the issue.
Depending on the product you are using, we need the following information:
● If you are using Apigee Cloud, you may have more than one organization, so
we need to know the specific org and other details where you are observing
the issue:
● Organization and Environment names
● API proxy name and revision numbers (for API request failures)
● If you are using Private Cloud, you may be using one of the many supported
installation topologies. So we need to know what topology you are using,
including details such as the number of data centers and nodes.
The following table provides some examples showing complete information in the
DOs column, and incomplete information in the DON'Ts column:
DO:
401 Errors have increased on Edge Public Cloud since 2020-11-06 09:30 CST.
Error:
{"fault":{"faultstring":"Failed to resolve API Key variable
request.header.X-APP-API_KEY","detail":
{"errorcode":"steps.oauth.v2.FailedToResolveAPIKey"}}}
DON'T:
401 Errors have increased.
Useful artifacts
Providing us with artifacts related to the issue will speed up the resolution, as it
helps us understand the exact behavior you are observing and get more insights
into it.
Caution: Do not share any confidential or sensitive information such as username, password,
API Keys, OAuth tokens, or private key associated with your TLS certificates in the case
description, updates or file attachments.
The following artifacts are useful for all Apigee products, Apigee Edge on Public
Cloud and Apigee Edge on Private Cloud:
Artifact Description
For Apigee Edge for Private Cloud, we may need some additional artifacts that will
facilitate faster diagnosis of issues.
Artifact Description
This section provides case templates and sample cases for different products
based on the best practices described in this document:
SAMPLE CASE
This section provides a sample template for Apigee Edge on Public Cloud.
Problem:
Error message:
Steps to reproduce:
Diagnostic information:
SAMPLE CASE
This section provides a sample template for Apigee Edge for Private Cloud.
Problem:
Error message:
<Attach the network topology describing the setup of your Private Cloud including
data centers and nodes>
Steps to reproduce:
Diagnostic information
You can find instructions on how to access the Apigee Support Portal in the Apigee
Support Portal documentation. To reset your password, on the Apigee Support
Portal login page, click Forgot your password? and follow the instructions to
reset your password.
Roles define the structure of your company and can be one of the following:
● Customer Executive
● Customer Manager
● Customer User
Read more about the Support portal roles in Support Portal user roles.
What are the different types of profiles that are available in Apigee Support Portal?
What permissions does each of these profiles have?
Read more about the Support portal roles in Support Portal user roles.
Why am I seeing only the Cases created by me and not the cases created by my
team?
You likely have the role Overage Customer Portal Manager - User which allows you
to see only your cases. Your portal administrator can grant you the Overage
Customer Portal Manager - SU role which allows you to see cases opened by other
people in your company.
How can I see what role/permissions I have?
The Support portal administrator of your company will be able to tell you what role
you have been assigned. You can also infer this from the permissions available to
you based on the role that you have. Available permissions and their respective
abilities are described in Support Portal user roles.
Yes. You can find instructions on how to add new Support Portal users in Adding
more users to the support portal.
One of the employees has left the company, can we remove that user from Apigee
Support Portal?
Yes. You can find instructions to remove the user in Remove users from Support
Portal.
I am a Partner and would be helping in creating and managing cases for an Apigee
customer. Can I get access to Apigee Support Portal?
Yes. The Support Portal admin from the customer side can add the relevant Partner
team members to Apigee Support Portal.
Is there any limit on the number of Support Portal users that can be added for a
particular customer account? If the customer requests an increase in the number,
do you charge them?
Once the customer nominates a partner to raise cases on their behalf, is there a
limit to the number of partner people who could raise cases? Again, if they want to
increase the number of partner people to have access to the support portal, is there
a cost?
We do not impose a limit on the number of support users. We recommend that you
only add users that are knowledgeable of and work on the Apigee product in your
company. Additionally, it is best to add users with knowledge of your specific
implementation. There is no additional charge for additional users.
Getting support
I have a free trial account, can I get access to Apigee Support Portal and raise
Support Cases?
Unfortunately no. However, if you have questions about or issues with Apigee
products, you can find answers and post questions in the Apigee community, where
Apigee engineers are able to respond.
Introduction to antipatterns
This section is about common antipatterns that are observed as part of the API
proxies deployed on the Apigee Edge platform.
The good news is that each of these antipatterns can be clearly identified and
rectified with appropriate good practices. Consequently, the APIs deployed on Edge
would serve their intended purpose and be more performant.
Summary of antipatterns
The following table lists the antipatterns in this section:
Category Antipatterns
Policy antipatterns ● Use waitForComplete() in JavaScript
code
● Set a long expiration time for OAuth
tokens
● Use greedy quantifiers in the
RegularExpressionProtection policy
● Cache error responses
● Store data greater than 512 KB in size in
cache
● Log data to third party servers using the
JavaScript policy
● Invoke MessageLogging multiple times
in an API proxy
● Configure non-distributed quota
● Reuse a Quota policy
● Use the RaiseFault policy under
inappropriate conditions
● Access multi-value HTTP headers
incorrectly in an API Proxy
● Use Service Callout to invoke backend
service in no target proxy
● Use of any OAuth V1 policies, which are
deprecated. See Deprecations.
In addition to the links above, you can also download the antipatterns in eBook
format:
What is an antipattern?
Wikipedia defines a software antipattern as:
In software engineering, an anti-pattern is a pattern that may be commonly used but
is ineffective and/or counterproductive in practice.
Simply put, an antipattern is something that the software allows its users to do,
but that may have an adverse functional, serviceability, or performance impact.
In object-oriented parlance, a god class is a class that controls too many other
classes for a given application.
As the image illustrates, the god class uses and references too many classes.
The framework on which the application was developed does not prevent the
creation of such a class, but it has many disadvantages, the primary ones being:
● Hard to maintain
● Single point of failure when the application runs
Target audience
This section best serves Apigee Edge developers as they progress through the
lifecycle of designing and developing API proxies for their services. It should
ideally be used as a reference guide during the API development lifecycle and
during troubleshooting.
HTTP Client
A powerful feature of the JavaScript policy is the HTTP client. The HTTP client (or
the httpClient object) can be used to make one or multiple calls to backend or
external services. The HTTP client is particularly useful when there's a need to
make calls to multiple external services and mash up the responses in a single API.
The httpClient object exposes two methods, get and send, to make HTTP
requests. Both methods are asynchronous and return an exchange object before
the actual HTTP request is completed.
The HTTP requests may take a few seconds to a few minutes. After an HTTP
request is made, it's important to know when it is completed, so that the response
from the request can be processed. One of the most common ways to determine
when the HTTP request is complete is by invoking the exchange object's
waitForComplete() method.
waitForComplete()
The waitForComplete() method pauses the thread until the HTTP request
completes and a response (success/failure) is returned. Then, the response from a
backend or external service can be processed.
Antipattern
Using waitForComplete() after sending an HTTP request in JavaScript code will
have performance implications, as in the following example:

var exchangeObj = httpClient.get("http://example.com");
exchangeObj.waitForComplete();  // Blocks the thread until the request completes
if (exchangeObj.isSuccess()) {
  var response = exchangeObj.getResponse();
  context.setVariable('example.status', response.status);
} else {
  var error = exchangeObj.getError();
  context.setVariable('example.error', 'Woops: ' + error);
}
In this example, the thread executing the JavaScript code is blocked at
waitForComplete() until the HTTP request completes and a response is returned.
There's an upper limit on the number of threads (30%) that can concurrently
execute JavaScript code on a Message Processor at any time. After that limit is
reached, there will not be any threads available to execute the JavaScript code. So,
if there are too many concurrent requests executing the waitForComplete() API
in the JavaScript code, then subsequent requests will fail with a 500 Internal Server
Error and 'Timed out' error message even before the JavaScript policy times out.
In general, this scenario may occur if the backend takes a long time to process
requests or there is high traffic.
Impact
1. The API requests will fail with 500 Internal Server Error and with error
message 'Timed out' when the number of concurrent requests executing
waitForComplete() in the JavaScript code exceeds the predefined limit.
2. Diagnosing the cause of the issue can be tricky as the JavaScript fails with
'Timed out' error even though the time limit for the specific JavaScript policy
has not elapsed.
Best Practice
Avoid using waitForComplete() in JavaScript code; instead, use callbacks in the
HTTP client to streamline the callout code and improve performance. This
approach ensures that the thread executing the JavaScript is not blocked until the
HTTP request is completed.
When a callback is used, the thread sends the HTTP requests in the JavaScript
code and returns back to the pool. Because the thread is no longer blocked, it is
available to handle other requests. After the HTTP request is completed and the
callback is ready to be executed, a task will be created and added to the task
queue. One of the threads from the pool will execute the callback based on the
priority of the task.
function onComplete(response,error) {
// Check if the HTTP request was successful
if (response) {
context.setVariable('example.status', response.status);
} else {
context.setVariable('example.error', 'Woops: ' + error);
}
}
// Specify the callback Function as an argument
httpClient.get("http://example.com", onComplete);
Further reading
● JavaScript policy
● JavaScript callback support in httpClient for improved callouts
● Making JavaScript callouts with httpClient
Refresh tokens are optionally issued along with access tokens with some of the
grant types. Refresh tokens are used to obtain new, valid access tokens after the
original access token has expired or been revoked. The expiry time for refresh
tokens can also be set in the OAuthv2 policy.
Antipattern
Setting a long expiration time for an access token and/or refresh token in the
OAuthv2 policy leads to accumulation of OAuth tokens and increased disk space
use on Cassandra nodes.
The following example OAuthV2 policy shows a long expiration time of 200 days
for refresh tokens:
<OAuthV2 name="GenerateAccessToken">
<Operation>GenerateAccessToken</Operation>
<ExpiresIn>1800000</ExpiresIn> <!-- 30 minutes -->
<RefreshTokenExpiresIn>17280000000</RefreshTokenExpiresIn> <!-- 200 days
-->
<SupportedGrantTypes>
<GrantType>password</GrantType>
</SupportedGrantTypes>
<GenerateResponse enabled="true"/>
</OAuthV2>
In the above example:
● The access token is set with a reasonably low expiration time of 30 minutes.
● The refresh token is set with a very long expiration time of 200 days.
● If the traffic to this API is 10 requests/second, then it can generate as many
as 864,000 tokens in a day.
● Since the refresh tokens expire only after 200 days, they persist in the data
store (Cassandra) for a long time leading to continuous accumulation.
Impact
● Leads to significant growth of disk space usage on the data store
(Cassandra).
● For Private Cloud users, this could increase storage costs, or in the worst
case, the disk could become full and lead to runtime errors or an outage.
Best Practice
Use appropriately low expiration times for OAuth access and refresh tokens,
depending on your specific security requirements, so that they get purged quickly
and don't accumulate.
Set the expiration time for refresh tokens so that they remain valid a little longer
than the access tokens. For example, if you set 30 minutes for the access token,
set 60 minutes for the refresh token. This ensures that:
● There's ample time to use a refresh token to generate new access and
refresh tokens after the access token is expired.
● The refresh tokens will expire a little while later and can get purged in a
timely manner to avoid accumulation.
Further reading
● OAuth 2.0 framework
● Secure APIs with OAuth
● Edge Product Limits
Refresh tokens are optionally issued along with access tokens with some of the
grant types. Refresh tokens are used to obtain new, valid access tokens after the
original access token has expired or been revoked. The expiration time for refresh
tokens can also be set in the OAuthv2 policy.
This antipattern is related to the antipattern of setting a long expiration time for
OAuth tokens.
Antipattern
Setting no expiration time for a refresh token in the OAuthv2 policy leads to
accumulation of OAuth tokens and increased disk space use on Cassandra nodes.
<OAuthV2 name="GenerateAccessToken">
<Operation>GenerateAccessToken</Operation>
<ExpiresIn>1800000</ExpiresIn> <!-- 30 minutes -->
<!--<RefreshTokenExpiresIn> is missing -->
<SupportedGrantTypes>
<GrantType>password</GrantType>
</SupportedGrantTypes>
<GenerateResponse enabled="true"/>
</OAuthV2>
● The access token is set with a reasonably low expiration time of 30 minutes.
● The refresh token's expiration is not set.
● Refresh token persists in data store (Cassandra) forever, causing data
accumulation.
● A refresh token minted without an expiration can be used indefinitely to
generate access tokens.
● If the traffic to this API is 10 requests per second, it can generate as many as
864,000 tokens in a day.
Impact
● If the refresh token is created without an expiration, there are two major
consequences:
● Refresh token can be used at any time in the future, possibly for
years, to obtain an access token. This can have security implications.
● The row in Cassandra containing the refresh token will never be
deleted. This will cause data to accumulate in Cassandra.
● If you don't use the refresh token to obtain a fresh access token, but instead
create a fresh refresh token and access token, the older refresh token will
remain in Cassandra. As a result, refresh tokens will continue accumulating
in Cassandra, further adding to bloat, increased disk usage, and heavier
compactions, and will eventually cause read/write latencies in Cassandra.
Best Practice
Use an appropriately low expiration time for both refresh and access tokens. See
best practice for setting expiration times of refresh and access tokens. Make sure
to specify an expiration configuration for both access and refresh token in policy.
Refer to the OAuthV2 policy documentation for further details about policy
configuration.
● kms.oauth_20_access_tokens.oauth_20_access_tokens_organization_name_idx and
● kms.oauth_20_access_tokens.oauth_20_access_tokens_status_idx
Further reading
● OAuth 2.0 framework
● Secure APIs with OAuth
● Edge Product Limits
Regular expressions can be defined for request paths, query parameters, form
parameters, headers, XML elements (in an XML payload, defined using XPath), and
JSON object attributes (in a JSON payload, defined using JSONPath).
Antipattern
The default quantifiers (*, +, and ?) are greedy in nature: they start to match with
the longest possible sequence. When no match is found, they backtrack gradually
to try to match the pattern. If the resultant string matching the pattern is very short,
then using greedy quantifiers can take more time than necessary. This is especially
true if the payload is large (in the tens or hundreds of KBs).
The following example expression uses multiple instances of .*, which are greedy
operators:
<Pattern>.*Exception in thread.*</Pattern>
Reluctant quantifiers (like X*?, X+?, X??) start by trying to match a single
character from the beginning of the payload and gradually add characters.
Possessive quantifiers (like X?+, X*+, X++) try to match the entire payload only
once.
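The difference in anchoring behavior can be seen in plain JavaScript, outside Apigee. This is an illustrative sketch: the input string and the captured group are made up, and the greedy form mirrors the `.*Exception in thread.*` pattern above.

```javascript
// A log line containing two occurrences of the target phrase.
const log = 'aaa Exception in thread main aaa Exception in thread worker';

// Greedy: .* first consumes the whole string, then backtracks character by
// character until the rest of the pattern matches, so it anchors on the
// LAST occurrence. This backtracking is what makes greedy quantifiers slow
// on large payloads.
const greedy = /.*(Exception in thread \w+)/.exec(log)[1];

// Reluctant: .*? consumes as little as possible and grows only on demand,
// so it anchors on the FIRST occurrence with far less backtracking.
const reluctant = /.*?(Exception in thread \w+)/.exec(log)[1];

console.log(greedy);    // Exception in thread worker
console.log(reluctant); // Exception in thread main
```

On a multi-hundred-KB payload, the greedy form's backtracking is repeated for every failed position, which is why the policy's execution time grows so sharply.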
Impact
Using greedy quantifiers like wildcards (*) with the RegularExpressionProtection
policy can lead to:
● An increase in overall latency for API requests for a moderate payload size
(up to 1MB)
● Longer time to complete the execution of the RegularExpressionProtection
policy
● API requests with large payloads (>1MB) failing with 504 Gateway Timeout
Errors if the predefined timeout period elapses on the Edge Router
● High CPU utilization on Message Processors due to large amount of
processing which can further impact other API requests
Best practice
● Avoid using greedy quantifiers like .* in regular expressions with the
RegularExpressionProtection policy. Instead, use reluctant quantifiers like
.*? or possessive quantifiers like .*+ (less commonly) wherever possible.
Further reading
● Regular Expression Quantifiers
● RegularExpressionProtection policy
Antipattern: Cache error responses
Caching is a process of storing data temporarily in a storage area called a cache
for future reference. Caching data can bring significant performance benefits.
Whenever you have to frequently access data that doesn't change too often, we
highly recommend using a cache to store it.
Apigee Edge provides the ability to store data in a cache at runtime for persistence
and faster retrieval. The caching feature is made available through the
PopulateCache policy, LookupCache policy, InvalidateCache policy, and
ResponseCache policy.
In this section, let’s look at Response Cache policy. The Response Cache policy in
Apigee Edge platform allows you to cache the responses from backend servers. If
the client applications make requests to the same backend resource repeatedly and
the resource gets updated periodically, then we can cache these responses using
this policy. The Response Cache policy helps in returning the cached responses
and consequently avoids forwarding the requests to the backend servers
unnecessarily.
Antipattern
The ResponseCache policy lets you cache HTTP responses with any possible
Status code, by default. This means that both success and error responses can be
cached.
The Response Cache policy caches error responses in its default configuration.
However, it is not advisable to cache error responses without carefully
considering the adverse implications. In all these cases, the failures could occur
for an unknown time period, after which we may start getting successful
responses. If we cache the error responses, then we may continue to send error
responses to users even though the problem with the backend server has been
fixed.
With this information, we can set the cache expiration time appropriately in the
Response Cache policy so that we don’t cache the error responses for a longer
time. However, once the backend server/resource is available again, we will have to
modify the policy to avoid caching error responses. This is because if there is a
temporary/one off failure from the backend server, we will cache the response and
we will end up with the problem explained in scenario 1 above.
Impact
● Caching error responses can cause error responses to be sent even after the
problem has been resolved in the backend server
● Users may spend a lot of effort troubleshooting the cause of an issue
without knowing that it is caused by caching the error responses from the
backend server
Best practice
● Don’t store the error responses in the response cache. Ensure that the
<ExcludeErrorResponse> element is set to true in the ResponseCache
policy to prevent error responses from being cached as shown in the below
code snippet. With this configuration only the responses for the default
success codes 200 to 205 (unless the success codes are modified) will be
cached.
<!-- /antipatterns/examples/1-2.xml -->
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ResponseCache async="false" continueOnError="false" enabled="true"
    name="TargetServerResponseCache">
  <DisplayName>TargetServerResponseCache</DisplayName>
  <CacheKey>
    <KeyFragment ref="request.uri" />
  </CacheKey>
  <Scope>Exclusive</Scope>
  <ExpirySettings>
    <TimeoutInSec ref="flow.variable.here">600</TimeoutInSec>
  </ExpirySettings>
  <CacheResource>targetCache</CacheResource>
  <ExcludeErrorResponse>true</ExcludeErrorResponse>
</ResponseCache>
● If you have the requirement to cache the error responses for some specific
reason, then you can determine the maximum/exact duration of time for
which the failure will be observed (if possible):
● Set the Expiration time appropriately to ensure that you don’t cache
the error responses longer than the time for which the failure can be
seen.
● Use the ResponseCache policy to cache the error responses without
the <ExcludeErrorResponse> element.
● Do this only if you are absolutely sure that the backend server failure is not
for a brief/temporary period.
● Apigee does not recommend caching 5xx responses from backend servers.
NOTE
After the failure has been fixed, remove/disable the ResponseCache policy. Otherwise, the error
responses for temporary backend server failures may get cached.
Apigee Edge provides the ability to store data in a cache at runtime for persistence
and faster retrieval.
Antipattern
This particular antipattern talks about the implications of exceeding the current
cache size restrictions within the Apigee Edge platform.
● API requests executed for the first time on each of the Message Processors
need to get the data independently from the original source (policy or a
target server), as entries > 512 KB are not available in L2 cache.
● Storing larger data (> 512 KB) in L1 cache tends to put more stress on the
platform resources. It results in the L1 cache memory being filled up faster
and hence less space being available for other data. As a consequence,
one will not be able to cache data as aggressively as one would like to.
● Cached entries from the Message Processors will be removed when the limit
on the number of entries is reached. This causes the data to be fetched from
the original source again on the respective Message Processors.
Impact
● Data of size > 512 KB will not be stored in L2/persistent cache.
● More frequent calls to the original source (either a policy or a target server)
leads to increased latencies for the API requests.
Best practice
● It is preferred to store data of size < 512 KB in cache to get optimum
performance.
● If there’s a need to store data > 512 KB, then consider:
● Using any appropriate database for storing large data
OR
● Compressing the data
Further reading
● Cache internals
Antipattern: Log data to third party
servers using the JavaScript policy
Logging is one of the most efficient ways to debug problems. The information about
the API request such as headers, form params, query params, dynamic variables,
etc. can all be logged for later reference. The information can be logged locally on
the Message Processors (Apigee Edge for Private Cloud only) or to third party
servers such as Sumo Logic, Splunk, or Loggly.
Antipattern
In the code below, the httpClient object in a JavaScript policy is used to log the
data to a Sumo Logic server. This means that the process of logging actually takes
place during request/response processing. This approach is counterproductive
because it adds to the processing time, thereby increasing overall latencies.
LogData_JS:
LogData.js:
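The policy XML and script themselves are not reproduced here. The following is a hypothetical sketch of what such synchronous logging code looks like: the collector URL is made up, and the httpClient, Request, and context objects are stubbed so the sketch runs standalone (in Apigee Edge, the runtime provides them).

```javascript
// Stubs standing in for the Apigee JavaScript runtime — illustration only.
var written = {};
var context = {
  getVariable: function (name) { return 'GET'; },
  setVariable: function (name, value) { written[name] = value; }
};
function Request(url, verb, headers, body) {
  this.url = url; this.verb = verb; this.headers = headers; this.body = body;
}
var httpClient = {
  send: function (request) {
    // Stub: pretend the log server responded immediately.
    return {
      waitForComplete: function () {},        // in Apigee, this blocks the thread
      isSuccess: function () { return true; }
    };
  }
};

// The antipattern: the logging call-out runs inline with request/response
// processing, and waitForComplete() blocks until the log server answers,
// adding its round-trip time to the API request latency.
var body = JSON.stringify({ verb: context.getVariable('request.verb') });
var logRequest = new Request('https://collectors.example.com/receiver', 'POST',
                             { 'Content-Type': 'application/json' }, body);
var exchange = httpClient.send(logRequest);
exchange.waitForComplete();
context.setVariable('logdata.success', exchange.isSuccess());
```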
The culprit here is the call to waitForComplete(), which suspends the caller's
operations until the logging process is complete. A valid pattern is to make this
asynchronous by eliminating this call.
Impact
● Executing the logging code via the JavaScript policy adds to the latency of
the API request.
● Concurrent requests can stress the resources on the Message Processor
and consequently adversely affect the processing of other requests.
Best Practice
● Use the MessageLogging policy to transfer/log data to Log servers or third
party log management services such as Sumo Logic, Splunk, Loggly, etc.
● The biggest advantage of the Message Logging policy is that it can be
defined in the Post Client Flow, which gets executed after the response is
sent back to the requesting client app.
● Having this policy defined in Post Client flow helps separate the logging
activity and the request/response processing thereby avoiding latencies.
● If there is a specific requirement or a reason to do logging via JavaScript
(and not the Post Client flow option mentioned above), then it needs to be
done asynchronously. In other words, if you are using an httpClient object,
you need to ensure that the JavaScript does not invoke waitForComplete().
Further reading
● JavaScript policy
● MessageLogging policy
● Community article: Using httpClient from within JavaScript callout, is
waitForComplete() necessary?
Antipattern
The MessageLogging policy provides an efficient way to get more information
about an API request and to debug any issues that are encountered with it.
However, using the same MessageLogging policy more than once or
having multiple MessageLogging policies log data in chunks in the same API proxy
in flows other than the PostClientFlow may have adverse implications. This is
because Apigee Edge opens a connection to an external syslog server for a
MessageLogging policy. If the policy uses TLS over TCP, there is additional
overhead of establishing a TLS connection.
In the following example policy configurations, the data is being logged to third-
party log servers using TLS over TCP. If more than one of these policies is used in
the same API proxy, the overhead of establishing and managing TLS connections
would occupy additional system memory and CPU cycles, leading to performance
issues at scale.
LogRequestInfo policy
LogResponseInfo policy
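The two policies themselves are not reproduced here; a sketch of what such a TLS-over-TCP syslog configuration typically looks like follows. The host, port, and message format are illustrative, not taken from the original:

```xml
<MessageLogging name="LogRequestInfo">
  <Syslog>
    <Message>[{organization.name}] {request.verb} {request.uri}</Message>
    <Host>logs.example.com</Host>
    <Port>6514</Port>
    <Protocol>TCP</Protocol>
    <SSLInfo>
      <Enabled>true</Enabled>
    </SSLInfo>
  </Syslog>
</MessageLogging>
```

Each such policy in the proxy flow opens and manages its own TLS connection, which is the overhead this antipattern warns about.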
Impact
● Increased networking overhead due to establishing connections to the log
servers multiple times during API Proxy flow.
● If the syslog server is slow or cannot handle the high volume caused by
multiple syslog calls, then it will cause back pressure on the message
processor, resulting in slow request processing and potentially high latency
or 504 Gateway Timeout errors.
● Increased number of concurrent file descriptors opened by the message
processor on Private Cloud setups where file logging is used.
● If the MessageLogging policy is placed in flows other than PostClient flow,
there's a possibility that the information may not be logged, as the
MessageLogging policy will not be executed if any failure occurs before the
execution of this policy.
In the previous ProxyEndpoint example, the information will not be logged
under the following circumstances:
● If any of the policies placed ahead of the LogRequestInfo policy in the
request flow fails.
or
● If the target server fails with any error (HTTP 4XX, 5XX). In this
situation, when a successful response isn't returned, the
LogResponseInfo policy won't get executed.
● In both cases, the LogErrorInfo policy will get executed and log only the
error-related information.
Best practice
● Use an ExtractVariables policy or JavaScript policy to set all the flow
variables that are to be logged, making them available for the
MessageLogging policy.
● Use a single MessageLogging policy to log all the required data in the
PostClientFlow, which gets executed unconditionally.
● Use the UDP protocol, where guaranteed delivery of messages to the syslog
server is not required and TLS/SSL is not mandatory.
Here's an example of a MessageLogging policy, LogInfo, that logs all the data:
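The policy configuration itself is missing from this copy. A sketch consistent with the UDP recommendation above (the syslog host and the set of logged flow variables are placeholders) is:

```xml
<MessageLogging name="LogInfo">
  <Syslog>
    <Message>{request.uri}|{response.status.code}|{error.message}</Message>
    <Host>logs.example.com</Host>
    <Port>514</Port>
    <Protocol>UDP</Protocol>
  </Syslog>
</MessageLogging>
```

The policy would be attached as the single step of the PostClientFlow so that it executes unconditionally, including on faults.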
Further reading
● JavaScript policy
● ExtractVariables policy
● Having code execute after proxy processing, including when faults occur,
with the PostClientFlow
Antipattern
An API proxy request can be served by one or more distributed Edge components
called Message Processors. If there are multiple Message Processors configured
for serving API requests, then the quota will likely be exceeded because each Message Processor keeps its own count of the requests it processes.
Let's explain this with the help of an example. Consider the following Quota policy
for an API proxy:
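The snippet is missing from this copy; given the 100-requests-per-hour figure described below, it presumably resembled the following (the policy name is illustrative):

```xml
<Quota name="CheckTrafficQuota">
  <Interval>1</Interval>
  <TimeUnit>hour</TimeUnit>
  <Allow count="100"/>
</Quota>
```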
The above configuration should allow a total of 100 requests per hour.
In practice, however, when multiple message processors are serving the API
requests, the following happens:
[Illustration: each Message Processor maintains its own independent quota counter of 100, so together they allow more than the configured 100 requests per hour.]
Impact
This situation defeats the purpose of the quota configuration and can have
detrimental effects on the backend servers that are serving the requests.
Best Practice
Consider setting the <Distributed> element to true in the Quota policy to ensure that a common counter is used to track the API requests across all Message Processors. The <Distributed> element can be set as shown in the code snippet below:
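The snippet is missing from this copy; the relevant element in context (the policy name is illustrative) is:

```xml
<Quota name="CheckTrafficQuota">
  <Interval>1</Interval>
  <TimeUnit>hour</TimeUnit>
  <Allow count="100"/>
  <Distributed>true</Distributed>
</Quota>
```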
Antipattern
If a Quota policy is reused, then the quota counter will be decremented each time
the Quota policy gets executed irrespective of where it is used. That is, if a Quota
policy is reused:
Then the quota counter gets decremented each time it is executed, and we will end up getting Quota violation errors much earlier than expected for the specified interval of time.
API Proxy
Quota policy
Since the traffic quota is same for both the target servers, we define single Quota
policy named “Quota-Minute-Target-Server” as shown below:
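The configuration is missing from this copy; given the 10-requests-per-minute limit described below, it presumably resembled:

```xml
<Quota name="Quota-Minute-Target-Server">
  <Interval>1</Interval>
  <TimeUnit>minute</TimeUnit>
  <Distributed>true</Distributed>
  <Allow count="10"/>
</Quota>
```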
[Figure: # Requests per target endpoint -- 4 to target-us, 6 to target-eu, 10 in total.]
A little later, we get the 11th API request with the resource path as /target-us,
let’s say after 32 seconds.
We expect the request to go through successfully assuming that we still have 6 API
requests for the target endpoint target-us as per the quota allowed.
However, the request fails with a Quota violation error.
Reason: Since we are using the same Quota policy in both the target endpoints, a
single quota counter is used to track the API requests hitting both the target
endpoints. Thus we exhaust the quota of 10 requests per minute collectively rather
than for the individual target endpoint.
Impact
This antipattern can result in a fundamental mismatch of expectations, leading to a
perception that the quota limits got exhausted ahead of time.
Best practice
● Use the <Class> or <Identifier> elements to ensure that multiple, unique counters are maintained by defining a single Quota policy. Let's redefine the Quota policy “Quota-Minute-Target-Server” from the previous section by using the header target_id as the <Identifier>, as shown below:

<!-- /antipatterns/examples/1-11.xml -->
<Quota name="Quota-Minute-Target-Server">
  <Interval>1</Interval>
  <TimeUnit>minute</TimeUnit>
  <Allow count="10"/>
  <Identifier ref="request.header.target_id"/>
  <Distributed>true</Distributed>
</Quota>

We will continue to use this Quota policy in both the target endpoints “Target-US” and “Target-EU” as before. If the header target_id has the value “US”, the requests are routed to the target endpoint “Target-US”; similarly, if the header target_id has the value “EU”, the requests are routed to the target endpoint “Target-EU”.
So even if we use the same Quota policy in both the target endpoints, separate quota counters are maintained based on the <Identifier> value. Therefore, by using the <Identifier> element, we can ensure that each of the target endpoints gets the allowed quota of 10 requests.
● Use a separate Quota policy in each of the flows/target endpoints/API Proxies to ensure that you always get the allowed count of API requests. Let's now look at the same example used in the section above to see how we can achieve the allowed quota of 10 requests for each of the target endpoints.
Define a separate Quota policy, one each for the target endpoints “Target-US” and “Target-EU”.
Quota policy for Target Endpoint “Target-US”:

<!-- /antipatterns/examples/1-12.xml -->
<Quota name="Quota-Minute-Target-Server-US">
  <Interval>1</Interval>
  <TimeUnit>minute</TimeUnit>
  <Distributed>true</Distributed>
  <Allow count="10"/>
</Quota>
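The EU counterpart is referenced above but not shown in this copy; it would be identical apart from its name:

```xml
<!-- Quota policy for Target Endpoint "Target-EU" -->
<Quota name="Quota-Minute-Target-Server-EU">
  <Interval>1</Interval>
  <TimeUnit>minute</TimeUnit>
  <Distributed>true</Distributed>
  <Allow count="10"/>
</Quota>
```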
You can use the RaiseFault policy to treat specific conditions as errors, even if an
actual error has not occurred in another policy or in the backend server of the API
proxy. For example, if you want the proxy to send a custom error message to the
client app whenever the backend response body contains the string unavailable,
then you could invoke the RaiseFault policy as shown in the code snippet below:
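The snippet is missing from this copy. A sketch of such a setup (the policy name, status code, and message text are illustrative) -- note that the condition lives on the flow step that invokes the policy, not in the policy itself:

```xml
<!-- Step in the response flow -->
<Step>
  <Name>RF-Backend-Unavailable</Name>
  <Condition>(message.content Matches "*unavailable*")</Condition>
</Step>

<!-- The RaiseFault policy itself -->
<RaiseFault name="RF-Backend-Unavailable">
  <FaultResponse>
    <Set>
      <StatusCode>503</StatusCode>
      <ReasonPhrase>Service Unavailable</ReasonPhrase>
      <Payload contentType="text/plain">The service is temporarily unavailable. Please try again later.</Payload>
    </Set>
  </FaultResponse>
</RaiseFault>
```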
The RaiseFault policy's name is visible as the fault.name in API Monitoring and
as x_apigee_fault_policy in the Analytics and Router access logs. This helps
to diagnose the cause of the error easily.
Antipattern
Using the RaiseFault policy within FaultRules after another policy has already
thrown an error
Consider the example below, where an OAuthV2 policy in the API Proxy flow has
failed with an InvalidAccessToken error. Upon failure, Edge will set the
fault.name as InvalidAccessToken, enter into the error flow, and execute any
defined FaultRules. In the FaultRule, there is a RaiseFault policy named
RaiseFault that sends a customized error response whenever an
InvalidAccessToken error occurs. However, the use of RaiseFault policy in a
FaultRule means that the fault.name variable is overwritten and masks the true
cause of the failure.
As in the first scenario, the key fault variables fault.name, fault.code, and
fault.policy are overwritten with the RaiseFault policy's name. This behavior
makes it almost impossible to determine which policy actually caused the failure
without accessing a trace file showing the failure or reproducing the issue.
Using the RaiseFault policy to return an HTTP 2xx response outside of the error flow.
The intent is to return the response to the API client immediately without
processing other policies. However, this will lead to misleading analytics data
because the fault variables will contain the RaiseFault policy's name, making the
proxy more difficult to debug. The correct way to implement the desired behavior is
to use Flows with special conditions, as described in Adding CORS support.
Impact
Use of the RaiseFault policy as described above results in overwriting key fault
variables with the RaiseFault policy's name instead of the failing policy's name. In
Analytics and NGINX Access logs, the x_apigee_fault_code and
x_apigee_fault_policy variables are overwritten. In API Monitoring, the
Fault Code and Fault Policy are overwritten. This behavior makes it difficult
to troubleshoot and determine which policy is the true cause of the failure.
In the screenshot below from API Monitoring, you can see that the Fault Code and
Fault policy were overwritten to generic RaiseFault values, making it impossible
to determine the root cause of the failure from the logs:
Best Practice
When an Edge policy raises a fault and you want to customize the error response
message, use the AssignMessage or JavaScript policies instead of the RaiseFault
policy.
The RaiseFault policy should be used in a non-error flow. That is, only use
RaiseFault to treat a specific condition as an error, even if an actual error has not
occurred in a policy or in the backend server of the API Proxy. For example, you
could use the RaiseFault policy to signal that mandatory input parameters are
missing or have incorrect syntax.
You can also use RaiseFault in a fault rule if you want to detect an error during the
processing of a fault. For example, your fault handler itself could cause an error
that you want to signal by using RaiseFault.
Further reading
● Handling Faults
● RaiseFault policy
● Community discussion on Fault Handling Patterns
Antipattern: Access multi-value HTTP
headers incorrectly in an API Proxy
HTTP headers are name-value pairs that allow client applications and backend services to pass additional information about requests and responses, respectively. Some simple examples are:
HTTP headers can have one or more values, depending on the header field definitions. A multi-valued header has comma-separated values. Here are a few examples of headers that contain multiple values:
Apigee Edge allows developers to access headers easily using flow variables in any of the Edge policies or conditional flows. Here is the list of variables that can be used to access a specific request or response header in Edge:
Flow variables:
● message.header.header-name
● request.header.header-name
● response.header.header-name
● message.header.header-name.N
● request.header.header-name.N
● response.header.header-name.N
Javascript objects:
● context.proxyRequest.headers.header-name
● context.targetRequest.headers.header-name
● context.proxyResponse.headers.header-name
● context.targetResponse.headers.header-name
Here's a sample AssignMessage policy showing how to read the value of a request
header and store it into a variable:
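The sample is missing from this copy; a minimal sketch (the policy, variable, and header names are illustrative) is:

```xml
<AssignMessage name="ReadUserAgentHeader">
  <AssignVariable>
    <Name>userAgent</Name>
    <Ref>request.header.User-Agent</Ref>
  </AssignVariable>
</AssignMessage>
```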
Antipattern
Accessing the values of HTTP headers in Edge policies in a way that returns only the first value is incorrect and can cause issues if the specific HTTP header has more than one value.
Consider that the Accept header has multiple values as shown below:
Here's the JavaScript code that reads the value from the Accept header:
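Both snippets are missing from this copy. As a self-contained sketch, assume the header value shown below; the callout would have read it with context.getVariable('request.header.Accept'), and the stub here mimics Edge's behavior of resolving that plain flow variable to only the first comma-separated value:

```javascript
// Assumed multi-valued header:
//   Accept: text/html, application/xhtml+xml, application/xml;q=0.9
var rawAccept = 'text/html, application/xhtml+xml, application/xml;q=0.9';

// Stub of the Edge-provided context object: the plain flow variable
// request.header.Accept resolves to only the FIRST value.
var context = {
  getVariable: function (name) {
    if (name === 'request.header.Accept') {
      return rawAccept.split(',')[0].trim();
    }
    return null;
  }
};

var accept = context.getVariable('request.header.Accept');
// accept is 'text/html' -- the remaining two values are silently dropped
```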
The above JavaScript code returns only the first value from the Accept header,
such as text/html.
Here's the part of the code from an AssignMessage or RaiseFault policy that sets the Access-Control-Allow-Headers header:
<Set>
  <Headers>
    <Header name="Access-Control-Allow-Headers">{request.header.Access-Control-Request-Headers}</Header>
  </Headers>
</Set>
The above code sets the header Access-Control-Allow-Headers with only the first value from the request header Access-Control-Request-Headers, in this example content-type.
Impact
1. In both the examples above, notice that only the first value from the multi-valued headers is returned. If these values are subsequently used by another policy in the API Proxy flow or by the backend service to perform some function or logic, then it could lead to an unexpected outcome.
2. When request header values are accessed and passed on to the target server, API requests could be processed by the backend incorrectly and hence may give incorrect results.
3. If the client application is dependent on specific header values from the
Edge response, then it may also process incorrectly and give incorrect
results.
Best Practice
1. Use the appropriate built-in flow variables:
request.header.header_name.values.count,
request.header.header_name.N,
response.header.header_name.values.count,
response.header.header_name.N.
Then iterate to fetch all the values from a specific header in JavaScript or Java callout policies.
Example: Sample JavaScript code to read a multi-value header
for (var i = 1; i <= context.getVariable('request.header.Accept.values.count'); i++) {
  print(context.getVariable('request.header.Accept.' + i));
}
Note: Edge splits the header values considering comma as the delimiter, not other delimiters such as semicolon. For example, application/xml;q=0.9, */*;q=0.8 will appear as one value with the above code. If the header values need to be split using semicolon as a delimiter, then use string.split(";") to separate them into values.
2. Use the substring() function on the flow variable request.header.header_name.values in a RaiseFault or AssignMessage policy to read all the values of a specific header.
Example: Sample RaiseFault or AssignMessage policy to read a multi-value header
<Set>
  <Headers>
    <Header name="Access-Control-Allow-Headers">{substring(request.header.Access-Control-Request-Headers.values,1,-1)}</Header>
  </Headers>
</Set>
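The semicolon handling mentioned in the note above can be sketched in plain JavaScript (the header value is illustrative):

```javascript
// Separating a media type from its quality parameter, since Edge treats
// 'application/xml;q=0.9' as one comma-delimited value.
var value = 'application/xml;q=0.9';
var parts = value.split(';').map(function (p) { return p.trim(); });
var mediaType = parts[0];  // 'application/xml'
var quality = parts[1];    // 'q=0.9'
```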
Further reading
● How to handle multi-value headers in Javascript?
● List of HTTP header fields
● Request and response variables
● Flow Variables
An API proxy is a managed facade for backend services. A basic API proxy
configuration consists of a ProxyEndpoint (defining the URL of the API proxy) and a
TargetEndpoint (defining the URL of the backend service).
Apigee Edge offers a lot of flexibility for building sophisticated behavior on top of
this pattern. For example, you can add policies to control the way the API
processes a client request before sending it to the backend service, or manipulate
the response received from the backend service before forwarding it to the client.
You can invoke other services using service callout policies, add custom behavior
by adding Javascript code, and even create an API proxy that doesn't invoke a
backend service.
Antipattern
Using service callouts to invoke a backend service in an API proxy with no routes to
a target endpoint is technically feasible, but results in the loss of analytics data
about the performance of the external service.
An API proxy that does not contain target routes can be useful in cases where you
do not need to forward the request message to the TargetEndpoint. Instead, the
ProxyEndpoint performs all necessary processing. For example, the ProxyEndpoint
could retrieve data from a lookup to the API service's key/value store and return the
response without invoking a backend service.
<RouteRule name="noroute"/>
A proxy using a null route is a "no target" proxy, because it does not invoke a target
backend service.
Impact
● Analytics information on the interaction with the external service (error codes, response time, target performance, etc.) is unavailable.
● Any specific logic required before or after invoking the service callout is included as part of the overall proxy logic, making it harder to understand and reuse.
Best Practice
If an API proxy interacts with only a single external service, the proxy should follow
the basic design pattern, where the backend service is defined as the target
endpoint of the API proxy. A proxy with no routing rules to a target endpoint should
not invoke a backend service using the ServiceCallout policy.
The following proxy configuration implements the same behavior as the example
above, but follows best practices:
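The configuration is missing from this copy. A minimal sketch of the standard pattern (the backend URL is a placeholder): the RouteRule in the ProxyEndpoint points at a TargetEndpoint that defines the backend service, instead of a ServiceCallout in a no-target proxy.

```xml
<!-- In the ProxyEndpoint -->
<RouteRule name="default">
  <TargetEndpoint>default</TargetEndpoint>
</RouteRule>

<!-- The TargetEndpoint -->
<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <URL>https://backend.example.com/service</URL>
  </HTTPTargetConnection>
</TargetEndpoint>
```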
Further reading
● Understanding APIs and API proxies
● How to configure Route Rules
● Null Routes
● Service Callout policy
One of the unique and useful features of Apigee Edge is the ability to wrap a
NodeJS application in an API Proxy. This allows developers to create event-driven
server-side applications using Edge.
Antipattern
Deployment of API Proxies is the process of making them available to serve API
requests. Each of the deployed API Proxies is loaded into Message Processor’s
runtime memory to be able to serve the API requests for the specific API Proxy.
Therefore, the runtime memory usage increases with the increase in the number of
deployed API Proxies. Leaving any unused API Proxies deployed can cause
unnecessary use of runtime memory.
The platform launches a “Node app” for every deployed NodeJS API Proxy. A Node app is akin to a standalone node server instance on the Message Processor JVM process.
In effect, for every deployed NodeJS API Proxy, Edge launches a separate node server to process requests for the corresponding proxy. If the same NodeJS API Proxy is deployed in multiple environments, then a corresponding Node app is launched for each environment. In situations where there are a lot of deployed but unused NodeJS API Proxies, multiple Node apps are launched. Unused NodeJS proxies translate to idle Node apps, which consume memory and affect the startup time of the application process.
[Table: number of Node apps launched as a function of the number of deployed NodeJS proxies and environments.]
In the illustration above, 36 unused Node apps are launched, which uses up system memory and has an adverse effect on the startup time of the process.
Impact
Best practice
Further reading
Sometimes we may need to use one or more of these management services from API Proxies at runtime. This is because entities such as KeyValueMaps, OAuth Access Tokens, API Products, Developer Apps, Developers, and Consumer Keys contain useful information in the form of key-value pairs, custom attributes, or as part of their profiles.
For instance, you can store the following information in KeyValueMap to make it
more secure and accessible at runtime:
Similarly, you may want to get the list of API Products or developer’s email address
at runtime. This information will be available as part of the Developer Apps profile.
All this information can be effectively used at runtime to enable dynamic behaviour
in policies or custom code within Apigee Edge.
Antipattern
The management APIs are intended and useful for administrative tasks and should not be used for performing any runtime logic in the API Proxy flow. This is because:
In the code sample below, the management API call is made via the custom
JavaScript code to retrieve the information from the KeyValueMap:
var response = httpClient.send('https://api.enterprise.apigee.com/v1/o/org_name/e/env_name/keyvaluemaps/kvm_name');
If the management server is unavailable, then the JavaScript code invoking the
management API call fails. This subsequently causes the API request to fail.
Impact
● Introduces additional dependency on Management Servers during runtime.
Any failure in Management servers will affect the API calls.
● User credentials for management APIs need to be stored either locally or in
some secure store such as Encrypted KVM.
● Performance implications owing to invoking the management service over
the network.
● May not see the updated values immediately due to longer cache expiration
in management servers.
Best practice
There are more effective ways of retrieving information from entities such as
KeyValueMaps, API Products, DeveloperApps, Developers, Consumer Keys, etc. at
runtime. Here are a few examples:
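For example, a hedged sketch of reading a KVM entry directly with the KeyValueMapOperations policy (the map and key names are placeholders): there is no management API call, and the value is served from the Message Processor's cache.

```xml
<KeyValueMapOperations name="GetBackendConfig" mapIdentifier="kvm_name">
  <Scope>environment</Scope>
  <Get assignTo="config.backend.url">
    <Key>
      <Parameter>backend_url</Parameter>
    </Key>
  </Get>
  <ExpiryTimeInSecs>300</ExpiryTimeInSecs>
</KeyValueMapOperations>
```

Similarly, the VerifyAPIKey policy exposes API Product and Developer App attributes as flow variables, and the AccessEntity policy retrieves entity profiles, all without calling the management server at runtime.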
Further reading
● KeyValueMapOperations policy
● VerifyAPIKey policy
● AccessEntity policy
Antipattern
Invoking one API Proxy from another either using HTTPTargetConnection in the
target endpoint or custom JavaScript code results in additional network hop.
The next code sample invokes Proxy 2 from Proxy 1 using JavaScript:
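The sample is missing from this copy. A hedged reconstruction follows; the proxy URL is hypothetical, and the Edge-provided Request and httpClient objects are stubbed so the sketch is self-contained.

```javascript
// Stubs for the Edge-provided objects; in a real callout they already exist.
function Request(url, method) { this.url = url; this.method = method; }
var httpClient = {
  sent: [],
  send: function (req) {
    this.sent.push(req);
    return {
      waitForComplete: function () {},
      getResponse: function () { return null; }
    };
  }
};

// The antipattern: Proxy 1 calls Proxy 2 through its public endpoint, so
// the request re-enters the Router even though it is already on a
// Message Processor -- an unnecessary extra network hop.
var exchange = httpClient.send(
    new Request('https://myorg-prod.apigee.net/proxy2/resource', 'GET'));
exchange.waitForComplete();
```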
Code Flow
In the code samples above, invoking Proxy 2 from Proxy 1 means that the request
has to be routed through the traditional route (i.e Router > MP) at runtime. This
would be akin to invoking an API from a client thereby making multiple network
hops that add to the latency. These hops are unnecessary considering that Proxy 1
request has already "reached" the MP.
Impact
Invoking one API Proxy from another API Proxy incurs unnecessary network hops,
that is the request has to be passed on from one Message Processor to another
Message Processor.
Best practice
● Use the proxy chaining feature for invoking one API Proxy from another.
Proxy chaining is more efficient as it uses local connection to reference the
target endpoint (another API Proxy).
The code sample shows proxy chaining using LocalTargetConnection in your
endpoint definition:
<!-- /antipatterns/examples/2-3.xml -->
<LocalTargetConnection>
  <APIProxy>proxy2</APIProxy>
  <ProxyEndpoint>default</ProxyEndpoint>
</LocalTargetConnection>
The invoked API Proxy gets executed within the same Message Processor; as a result, it avoids the network hop, as shown in the following figure:
Figure 2: Code Flow with Proxy Chaining
Further reading
● Proxy chaining
Apigee Edge provides several types of resources, each serving a different purpose:
● API proxies
● Shared flows
● API products
● Caches
● KVMs
● Keystores and truststores
● Virtual hosts
● Target servers
● Resource files
While these resources do have restricted access, if any modifications are made to them, even by authorized users, the historic data simply gets overwritten with the new data. This is because Apigee Edge stores only the current state of these resources. The main exceptions to this rule are API proxies and shared flows.
API proxies and shared flows are managed -- in other words, created, updated, and deployed -- through revisions. Revisions are sequentially numbered, which enables you to add new changes and save them as a new revision, or to revert a change by deploying a previous revision of the API proxy/shared flow. At any point in time, there can be only one revision of an API proxy/shared flow deployed in an environment, unless the revisions have a different base path.
Although the API proxies and shared flows are managed through revisions, if any
modifications are made to an existing revision, there is no way to roll back since
the old changes are simply overwritten.
Apigee Edge provides the Audits and API, Product, and organization history
features that can be helpful in troubleshooting scenarios. These features enable
you to view information like who performed specific operations (create, read,
update, delete, deploy, and undeploy) and when the operations were performed on
the Edge resources. However, if any update or delete operations are performed on
any of the Edge resources, the audits cannot provide you the older data.
Antipattern
Managing the Edge resources (listed above) directly through the Edge UI or management APIs without using a source control system.
There's a misconception that Apigee Edge will be able to restore resources to their
previous state following modifications or deletes. However, Edge Cloud does not
provide restoration of resources to their previous state. Therefore, it is the user's
responsibility to ensure that all the data related to Edge resources is managed
through source control management, so that old data can be restored back quickly
in case of accidental deletion or situations where any change needs to be rolled
back. This is particularly important for production environments where this data is
required for runtime traffic.
Let's explain this with the help of a few examples and the kind of impact that can be caused if the data is not managed through a source control system and is modified or deleted, knowingly or unknowingly:
A certificate on a virtual host is expiring and that virtual host needs updating.
Identifying which API proxies use that virtual host for testing purposes may be
difficult if there are many API proxies. If the API proxies are managed in an SCM
system outside Apigee, then it would be easy to search the repository.
Impact
● If any of the Edge resources are deleted, then it's not possible to recover the
resource and its contents from Apigee Edge.
● API requests may fail with unexpected errors leading to outage until the
resource is restored back to its previous state.
● It is difficult to search for inter-dependencies between API proxies and other
resources in Apigee Edge.
Best Practice
● Use any standard SCM coupled with a continuous integration and continuous
deployment (CICD) pipeline for managing API proxies and shared flows.
● Use any standard SCM for managing the other Edge resources, including API
products, caches, KVMs, target servers, virtual hosts, and keystores.
● If there are any existing Edge resources, then use management APIs
to get the configuration details for them as a JSON/XML payload and
store them in source control management.
● Manage any new updates to these resources in source control
management.
● If there's a need to create new Edge resources or update existing Edge
resources, then use the appropriate JSON/XML payload stored in
source control management and update the configuration in Edge
using management APIs.
* Encrypted KVMs cannot be exported in plain text from the API. It is the user's
responsibility to keep a record of what values are put into encrypted KVMs.
Further reading
● Source Control for API Proxy Development
● Guide to implementing CI on Apigee Edge
● Maven Deploy Plugin for API Proxies
● Maven Config Plugin to manage Edge Resources
● Apigee Edge API (for automating backups)
A virtual host lets you host multiple domain names on a single server or group of
servers. For Edge, the servers correspond to Edge Routers. By defining virtual hosts
on a Router, you can handle requests to multiple domains.
A virtual host on Edge defines a protocol (HTTP or HTTPS), along with a Router
port and a host alias. The host alias is typically a DNS domain name that maps to a
Router's IP address.
For example, the following image shows a Router with two virtual host definitions:
In this example, there are two virtual host definitions. One handles HTTPS requests
on the domain domainName1, the other handles HTTP requests on domainName2.
On a request to an API proxy, the Router compares the Host header and port
number of the incoming request to the list of host aliases defined by all virtual
hosts to determine which virtual host handles the request.
Antipattern
Defining multiple virtual hosts with the same host alias and port number in the
same/different environments of an organization or across organizations will lead to
confusion at the time of routing API requests and may cause unexpected
errors/behaviour.
Let's use an example to explain the implications of having multiple virtual hosts
with same host alias.
Consider there are two virtual hosts sandbox and secure defined with the same
host alias i.e., api.company.abc.com in an environment:
With the above setup, there could be two scenarios as described in the following
sections.
In this scenario, when the client applications make the calls to specific API Proxy
using the host alias api.company.abc.com, they will intermittently get 404 errors
with the message:
This is because the Router sends the requests to both the sandbox and secure virtual hosts. When the requests are routed to the sandbox virtual host, the client applications will get a successful response. However, when the requests are routed to the secure virtual host, the client applications will get a 404 error because the API Proxy is not configured to accept requests on the secure virtual host.
In this scenario, when the client applications make the calls to specific API Proxy
using the host alias api.company.abc.com, they will get a valid response based
on the proxy logic.
However, this causes incorrect data to be stored in Analytics as the API requests
are routed to both the virtual hosts, while the actual requests were aimed to be sent
to only one virtual host.
This can also affect the logging information and any other data that is based on the
virtual hosts.
Impact
1. 404 Errors as the API requests may get routed to a virtual host to which the
API Proxy may not be configured to accept the requests.
2. Incorrect Analytics data since the API requests get routed to all the virtual
hosts having the same host alias while the requests were made only for a
specific virtual host.
Best Practice
● Don't define multiple virtual hosts with the same host alias and port number in the same environment or in different environments of an organization.
● If there's a need to define multiple virtual hosts, then use different host
aliases in each of the virtual hosts as shown below:
Further reading
● Virtual hosts
Antipattern: Load Balance with a single target server with MaxFailures set to a non-zero value
The TargetEndpoint configuration defines the way Apigee Edge connects to a
backend service or API. It sends the requests and receives the responses to/from
the backend service. The backend service can be an HTTP/HTTPS server, NodeJS, or a Hosted Target.
The backend service in the TargetEndpoint can be invoked in one of the following
ways:
Likewise, the Service Callout policy can be used to make a call to any external
service from the API Proxy flow. This policy supports defining HTTP/HTTPS target
URLs either directly in the policy itself or using a TargetServer configuration.
TargetServer configuration
The TargetServer configuration decouples the concrete endpoint URLs from TargetEndpoint configurations and Service Callout policies. A TargetServer is referenced by name instead of a URL in the TargetEndpoint. The TargetServer configuration holds the hostname of the backend service, the port number, and other details.
<TargetServer name="target1">
<Host>www.mybackendservice.com</Host>
<Port>80</Port>
<IsEnabled>true</IsEnabled>
</TargetServer>
<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <LoadBalancer>
      <Server name="target1"/>
      <Server name="target2"/>
    </LoadBalancer>
  </HTTPTargetConnection>
</TargetEndpoint>
MaxFailures
The MaxFailures configuration specifies the maximum number of request
failures to the target server after which the target server shall be marked as down
and removed from rotation for all subsequent requests.
<TargetEndpoint name="default">
<HTTPTargetConnection>
<LoadBalancer>
<Server name="target1"/>
<Server name="target2"/>
<MaxFailures>5</MaxFailures>
</LoadBalancer>
</HTTPTargetConnection>
</TargetEndpoint>
In the above example, if five consecutive requests to "target1" fail, then "target1" is removed from rotation and all subsequent requests are sent only to "target2".
Antipattern
Having a single TargetServer in a LoadBalancer configuration of the TargetEndpoint or Service Callout policy with MaxFailures set to a non-zero value is not recommended, as it can have adverse implications.
Consider the following sample configuration that has a single TargetServer named
"target1" with MaxFailures set to 5 (non-zero value):
<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <LoadBalancer>
      <Algorithm>RoundRobin</Algorithm>
      <Server name="target1"/>
      <MaxFailures>5</MaxFailures>
    </LoadBalancer>
  </HTTPTargetConnection>
</TargetEndpoint>
If the requests to the TargetServer "target1" fail five times (the number specified in MaxFailures), the TargetServer is removed from rotation. Since there are no other TargetServers to fail over to, all subsequent requests to an API Proxy having this configuration will fail with a 503 Service Unavailable error.
Even if the TargetServer "target1" gets back to its normal state and is capable of
sending successful responses, the requests to the API Proxy will continue to return
503 errors. This is because Edge does not automatically put the TargetServer back
in rotation even after the target is up and running again. To address this issue, the
API Proxy must be redeployed for Edge to put the TargetServer back into rotation.
If the same configuration is used in the Service Callout policy, then the API requests will get 500 errors after the requests to the TargetServer "target1" fail five times.
Impact
Using a single TargetServer in a LoadBalancer configuration of a TargetEndpoint or
Service Callout policy with MaxFailures set to a non-zero value causes:
● API requests to fail continuously with 503/500 errors (after the requests fail
MaxFailures times) until the API proxy is redeployed.
● Longer outages, because the cause of this issue is tricky and time-consuming to
diagnose without prior knowledge of this antipattern.
Best Practice
1. Have more than one TargetServer in the LoadBalancer configuration for
higher availability.
2. Always define a HealthMonitor when MaxFailures is set to a non-zero
value. A target server is removed from rotation when the number of
failures reaches the number specified in MaxFailures. Having a
HealthMonitor ensures that the TargetServer is put back into rotation as
soon as the target server becomes available again, so there is no need
to redeploy the proxy.
To ensure that the health check is performed on the same port number that
Edge uses to connect to the target servers, Apigee recommends that you
omit the <Port> child element under <TCPMonitor> unless it differs
from the TargetServer port. By default, <Port> is the same as the TargetServer
port.
Sample configuration with HealthMonitor:
<TargetEndpoint name="default">
<HTTPTargetConnection>
<LoadBalancer>
<Algorithm>RoundRobin</Algorithm>
<Server name="target1" />
<Server name="target2" />
<MaxFailures>5</MaxFailures>
</LoadBalancer>
<Path>/test</Path>
<HealthMonitor>
<IsEnabled>true</IsEnabled>
<IntervalInSec>5</IntervalInSec>
<TCPMonitor>
<ConnectTimeoutInSec>10</ConnectTimeoutInSec>
</TCPMonitor>
</HealthMonitor>
</HTTPTargetConnection>
</TargetEndpoint>
3. If there is a constraint such that only one TargetServer can be used and a
HealthMonitor is not used, then don't specify MaxFailures in the
LoadBalancer configuration.
The default value of MaxFailures is 0. This means that Edge always tries to
connect to the target for each request and never removes the target server
from the rotation.
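Under the single-TargetServer constraint described above, a minimal safe configuration (a sketch, not taken from the Apigee reference) simply omits MaxFailures:

```xml
<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <LoadBalancer>
      <Server name="target1" />
      <!-- No MaxFailures element: it defaults to 0, so target1 is never
           removed from rotation even after repeated failures. -->
    </LoadBalancer>
  </HTTPTargetConnection>
</TargetEndpoint>
```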
Further reading
● Load Balancing across Backend Servers
● How to use Target Servers in your API Proxies
Antipattern
Accessing the request/response payload with streaming enabled causes Edge to go
back to the default buffering mode.
For example, extracting variables from the request payload, or converting the
JSON response payload to XML using the JSONToXML policy, requires Edge to buffer
the full payload and therefore disables streaming.
Impact
● Streaming is disabled, which can lead to increased latencies in
processing the data.
● Increased heap memory usage, or OutOfMemory errors, can be observed
on Message Processors due to the use of in-memory buffers, especially with
large request/response payloads.
Best practice
● Don't access the request/response payload when streaming is enabled.
Further reading
● Streaming requests and responses
● How does APIGEE Edge Streaming work?
● How to handle streaming data together with normal request/response
payload in a single API Proxy
● Best practices for API proxy design and development
Antipattern
An API proxy can have one or more proxy endpoints. Defining multiple
ProxyEndpoints is a simple mechanism for implementing multiple APIs in a
single proxy. It lets you reuse policies and/or business logic before and after the
invocation of a TargetEndpoint.
On the other hand, defining multiple ProxyEndpoints in a single API proxy
conceptually combines many unrelated APIs into a single artifact. This
makes the API proxies harder to read, understand, debug, and maintain, and
defeats the main philosophy of API proxies: making it easy for developers to create
and maintain APIs.
Impact
Multiple ProxyEndpoints in an API proxy can:
● Make it hard for developers to understand and maintain the API proxy.
● Obfuscate analytics. By default, analytics data is aggregated at the proxy
level. There is no breakdown of metrics by proxy endpoint unless you create
custom reports.
● Make it difficult to troubleshoot problems with API proxies.
Best practice
When you are implementing a new API proxy or redesigning an existing API proxy,
use the following best practices:
Further reading
● API Proxy Configuration Reference
● Reusable shared flows
Any API request that is routed via the Edge platform traverses a typical path before
it hits the backend:
● The request originates from a client which could be anything from a browser
to an app.
● The request is then received by the Edge gateway.
● It is processed within the gateway. As part of this processing, the request
passes through a number of distributed components.
● The gateway then routes the request to the backend that responds to the
request.
● The response from the backend then traverses back the exact reverse path
via the Edge gateway back to the client.
In effect, the performance of API requests routed via Edge is dependent on both
Edge and the backend systems. In this antipattern, we will focus on the impact on
API requests due to badly performing backend systems.
Antipattern
Let us consider the case of a problematic backend. These are the possibilities:
The challenge in exposing the services on these backend systems via APIs is that
they become accessible to a large number of end users. From a business perspective,
this is a desirable challenge, but one that needs to be dealt with.
Backend systems are often not prepared for this extra demand on their
services and are consequently undersized or not tuned for efficient response.
The problem with an inadequately sized backend is that a spike in API
requests stresses resources such as CPU and memory, and increases load, on the
backend systems. This eventually causes API requests to fail.
Slow backend
The problem with an improperly tuned backend is that it is very slow to
respond to incoming requests, leading to increased latencies, premature timeouts,
and a compromised customer experience.
The Edge platform offers a few tunable options to circumvent and manage a slow
backend, but these options have limitations.
Impact
● In the case of an inadequately sized backend, increase in traffic could lead to
failed requests.
● In the case of a slow backend, the latency of requests will increase.
Best practice
● Use caching to store the responses to improve the API response times and
reduce the load on the backend server.
● Resolve the underlying problem in the slow backend servers.
Further reading
● Edge caching internals
An API proxy is an interface for client applications used to connect with backend
services. Apigee Edge provides multiple ways to connect to backend services
through an API proxy:
Persistent Connections
Apigee Edge uses persistent connections for communicating with backend services.
A connection stays alive for 60 seconds by default; that is, if a connection is idle in
the connection pool for more than 60 seconds, the connection is closed.
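For illustration, the idle timeout can be tuned per target endpoint through the keepalive.timeout.millis property (see the Endpoint Properties Reference in the further reading below). This is only a sketch; the 30-second value and backend URL are example placeholders:

```xml
<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <Properties>
      <!-- Keep idle connections for 30 seconds instead of the 60-second
           default. Never set this to 0: that disables keep-alive entirely. -->
      <Property name="keepalive.timeout.millis">30000</Property>
    </Properties>
    <URL>http://backend.example.com</URL>
  </HTTPTargetConnection>
</TargetEndpoint>
```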
Antipattern
Disabling persistent (keep-alive) connections by setting the property
keepalive.timeout.millis to 0 in the TargetEndpoint configuration of a
specific API proxy, or setting HTTPTransport.keepalive.timeout.millis
to 0 on Message Processors, is not recommended because it degrades performance.
If keep-alive connections are disabled for one or more backend services, then Edge
must open a new connection for each new request to the target backend service(s).
If the backend uses HTTPS, Edge also performs an SSL handshake for each new
request, adding to the overall latency of API requests.
Impact
● Increases the overall response time of API requests, as Apigee Edge must
open a new connection and perform an SSL handshake for every new request.
● Connections may be exhausted under high traffic, as it takes
some time to release connections back to the system.
Best Practice
● Backend services should honor and handle HTTP persistent connections in
accordance with the HTTP/1.1 standard.
● Backend services should respond with a Connection: keep-alive header
if they are able to handle persistent (keep-alive) connections.
● Backend services should respond with a Connection: close header if they
are unable to handle persistent connections.
Implementing this pattern will ensure that Apigee Edge can automatically handle
persistent or non-persistent connection with backend services, without requiring
changes to the API proxy.
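As a toy illustration of the header contract above (not an Apigee feature), the following extracts the Connection header from a captured response; the header lines are sample data, not output from a real backend:

```shell
# Illustrative only: inspect how a (hypothetical) backend signals
# keep-alive support in its response headers.
headers='HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/json'
# Print the value of the Connection header, case-insensitively.
printf '%s\n' "$headers" | awk -F': ' 'tolower($1)=="connection" {print $2}'
```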
Further reading
● HTTP persistent connections
● Keep-Alive
● Endpoint Properties Reference
Edge API Analytics is a powerful built-in feature of Apigee Edge. It
collects and analyzes a broad spectrum of data that flows across APIs. The
captured analytics data can provide very useful insights. For instance: How is the
API traffic volume trending over time? Which API is used the most?
Which APIs have high error rates?
Regular analysis of this data and these insights can inform appropriate actions
such as future capacity planning of APIs based on current usage, business and
future investment decisions, and more.
All this data is stored in an analytics schema created and managed within a
Postgres database by Apigee Edge.
The schema named analytics is used by Edge for storing all the analytics data
for each organization and environment. If monetization is installed, there is also an
rkms schema. Other schemata are used for Postgres internals.
The analytics schema changes over time, as Apigee Edge dynamically adds
new fact tables to it at runtime. The Postgres server component aggregates the
fact data into aggregate tables, which are loaded and displayed in the Edge UI.
Antipattern
Adding custom columns, tables, and/or views to any of the Apigee owned schemata
in Postgres Database on Private Cloud environments directly using SQL queries is
not advisable, as it can have adverse implications.
Consider a custom table named account that has been created under the analytics
schema.
After a while, let's say there's a need to upgrade Apigee Edge from a lower version
to a higher version. Upgrading Private Cloud Apigee Edge involves upgrading
Postgres among many other components. If any custom columns,
tables, or views have been added to the Postgres database, then the Postgres
upgrade fails with errors referencing the custom objects, because they were not
created by Apigee Edge. Thus, the Apigee Edge upgrade also fails and cannot be
completed.
Similar errors can occur during Apigee Edge maintenance activities in which
backup and restore of Edge components, including the Postgres database, are
performed.
Impact
● The Apigee Edge upgrade cannot be completed because the Postgres
component upgrade fails with errors referencing custom objects not
created by Apigee Edge.
● Inconsistencies (and failures) while performing Apigee Analytics service
maintenance (backup/restore).
Best Practice
● Don't add any custom information in the form of columns, tables, views,
functions, or procedures directly to any of the Apigee-owned schemata, such
as analytics.
● If there's a need to capture custom information, it can be added as columns
(fields) to the analytics schema by using a Statistics Collector policy.
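As a sketch of this best practice, a Statistics Collector policy like the following adds a custom field without touching the Postgres schema directly. The policy name, statistic name, and the apiproduct.name flow variable here are illustrative choices, not values from this document:

```xml
<!-- Sketch: capture a custom analytics field via a policy instead of
     altering the Postgres analytics schema by hand. -->
<StatisticsCollector name="collectProductName">
  <Statistics>
    <Statistic name="productname" ref="apiproduct.name" type="string">none</Statistic>
  </Statistics>
</StatisticsCollector>
```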
Further reading
● API Analytics overview
● Stats API
Introduction to playbooks
The act of troubleshooting is both an art and a science. The constant effort of
Apigee technical support teams has been to demystify the art and expose the
science behind problem identification and resolution.
To find troubleshooting playbooks, you can try searching for specific error
messages using the Search box at the top of this page, or use the TOC on the left to
navigate the playbook library.
Audience
Troubleshooting playbooks are intended for readers with a high-level
understanding of Apigee Edge and its architecture, as well as some understanding
of basic Edge concepts such as policies, analytics, and monetization.
Some problems can be diagnosed and solved only by Apigee Private Cloud users
and may require knowledge of internal Edge components such as Cassandra
and Postgres datastores, ZooKeeper, and the Edge Router.
If you are on Apigee Public Cloud, we clearly specify when you can perform
the indicated troubleshooting steps yourself and when you need to contact Apigee
Edge Support for assistance.
Problem categories
Use this page to navigate to playbooks for specific areas where you may encounter
problems in Apigee Edge.
Problem origin | Description | Videos
Note: Refer to the Diagnostic information section if the problem persists, even after following
the instructions provided in the playbook, or if you couldn't find the playbook for a specific
problem or error.
Prerequisites
On the machines where the Apigee Edge components are installed, ensure that you
have the following:
● Enough disk space in the /tmp directory to capture and store the diagnostic
data
● Required permissions to log in and gather the diagnostic data
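A quick pre-check for the /tmp disk-space prerequisite can look like the following sketch; the 1 GB threshold is an arbitrary example, not an Apigee requirement:

```shell
# Check available space in /tmp before capturing diagnostic data.
# POSIX `df -Pk` reports the available column in 1024-byte blocks.
avail_kb=$(df -Pk /tmp | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt 1048576 ]; then
  echo "WARNING: less than 1 GB free in /tmp"
else
  echo "/tmp has ${avail_kb} KB available"
fi
```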
Guides
The following table describes the current Private Cloud diagnostic guides:
Analytics problems
Note: This topic applies to Edge Private Cloud only.
These topics explain how to troubleshoot issues with Analytics such as data not
showing up in the Analytics dashboard, issues with custom reports, Postgres disk
running out of space, and so on.
Playbooks
This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving the common Analytics problems.
or Custom Dimensions not appearing when multiple axgroups have been configured
Diagnostic information
If you need assistance from Apigee Support on the Analytics problems, then gather
the following diagnostic information and share it in the support case:
Diagnostic information | Where can I gather this information? | How do I gather this information?
Note: Attach all of the above files and command outputs to the support case.
This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving Postgres running out of disk
space.
Playbooks
Diagnostic information
If you need assistance from Apigee Edge Support on this issue, then gather the
following diagnostic information and share it in the support case:
Diagnostic information | Where can I gather this information? | How do I gather this information?
Note: Attach all of the above files and command outputs to the support case.
Cassandra problems
Note: This topic applies to Edge Private Cloud only.
Diagnostic information
This section explains the diagnostic information that needs to be gathered for
Cassandra problems on the Private Cloud setup.
Diagnostic information | Where can I gather this information? | How do I gather this information?
Cassandra configuration files | Cassandra nodes |
cp /opt/apigee/apigee-cassandra/conf/cassandra-topology.properties /tmp/cassandra_topology_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).properties
cp /opt/apigee/apigee-cassandra/conf/cassandra-env.sh /tmp/cassandra_envsh_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).sh
cp /opt/apigee/apigee-cassandra/conf/cassandra.yaml /tmp/cassandra_conf_$(hostname)-$(date +%Y.%m.%d_%H.%M.%S).yaml
Deployment errors
Note: This topic applies to Edge Private Cloud only.
Any error that occurs during deployment of an API Proxy is called a Deployment
error. Deployment of API proxies might fail due to various reasons such as network
connectivity issues between Edge servers, issues with the Cassandra datastore,
ZooKeeper exceptions, and errors in the API proxy bundle.
Playbooks
This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving deployment errors.
Error message Playbook
Diagnostic information
If you need assistance from Apigee Edge Support on the deployment error, then
gather the following diagnostic information and share it in the support case:
Diagnostic information | Where can I gather this information? | How do I gather this information?
Runtime issues
Note: This topic applies to Edge Private Cloud only.
Any errors, latency issues, or unexpected results observed during the execution of
your API requests are referred to as runtime issues.
4XX/5XX errors
Playbook
This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving runtime 4XX and 5XX errors.
Error
response/messa Error code Playbook
ge
Diagnostics information
If you need any assistance from Apigee Edge Support on 4XX runtime errors (such
as 400, 401, 404, and 499) or 5XX (such as 500, 503, and 504) errors, then gather
and share the following diagnostic logs and information in the support case:
Diagnostic information
If you need any assistance from Apigee Edge Support on the 400 Bad Request -
SSL Certificate Error, then gather the following diagnostic information and
share it in the support case:
Diagnostic information | Where can I gather this information? | How do I gather this information?
This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving 404 Unable to identify
proxy for host error.
"errorcode":"messaging.adaptors.http
.flow.ApplicationNotFound"
}
}
}
Diagnostic information
If you need assistance from Apigee Edge Support on the 404 Unable to
identify proxy for host error, then gather the following diagnostic
information and share it in the support case:
Diagnostic information | Where can I gather this information? | How do I gather this information?
curl -s 0:8082/v1/runtime/organizations/ORGNAME/environments/ENVNAME/apis/APINAME/revisions > /tmp/rmp_api_APINAME_revisions_$(hostname)_$(date +%Y.%m.%d_%H.%M.%S).txt
This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving 502 Bad Gateway - no live
upstreams while connecting to upstream.
Problem: HTTP/1.1 502 Bad Gateway
Playbook: 502 Bad Gateway
Error message in logs: You will see the following error in the NGINX error logs
(/opt/apigee/var/log/edge-router/nginx/ORGNAME~ENVNAME._error_log):
[error] 4796#4796: *56357443 no live upstreams while connecting to upstream, client: ROUTER_IP_ADDRESS, server: HOST_ALIAS, request: "PUT BASE_PATH HTTP/1.1", upstream: "http://LISTOFMP_IP_R_MP_PORT/BASE_PATH", host: "HOST_ALIAS"
The corresponding response body returned to the client is:
<html>
<head>
<title>Error</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>An error occurred.</h1>
<p>Sorry, the page you are looking for is currently unavailable.<br/>
Please try again later.</p>
</body>
</html>
Diagnostic information
If you need assistance from Apigee Edge Support on the 502 Bad Gateway - no
live upstreams while connecting to upstream error, then gather the
following diagnostic information and share it in the support case:
Diagnostic information | Where can I gather this information? | How do I gather this information?
This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving 502 Bad Gateway -
Unexpected EOF At Target:
Diagnostic information
If you need assistance from Apigee Edge Support on the 502 Bad Gateway -
Unexpected EOF At Target, then gather the following diagnostic information
and share it in the support case:
Diagnostic information | Where can I gather this information? | How do I gather this information?
This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving TLS/SSL handshake failures:
Diagnostic information
If you need assistance from Apigee Edge Support on the TLS/SSL handshake
failures, then gather the following diagnostic information and share it in the support
case:
Diagnostic information | Where can I gather this information? | How do I gather this information?
These topics explain how to troubleshoot issues encountered while starting Apigee
Edge components such as Router, Message Processor, Management Server, and
ZooKeeper.
Error message
apigee-service: edge-router: Not running (DEAD)
apigee-all: Error: status failed on [edge-router]
Diagnostic information
If you need assistance from Apigee Edge Support on this issue, then gather the
following diagnostic information and share it in the support case:
Diagnostic information | Where can I gather this information? | How do I gather this information?
Error message
apigee-service: edge-message-processor: Not running (DEAD)
apigee-all: Error: status failed on [edge-message-processor]
Diagnostic information
If you need assistance from Apigee Edge Support on this issue, then gather the
following diagnostic information and share it in the support case:
Diagnostic information | Where can I gather this information? | How do I gather this information?
Error message
apigee-service: edge-management-server: Not running (DEAD)
apigee-all: Error: status failed on [edge-management-server]
Diagnostic information
If you need assistance from Apigee Edge Support on this issue, then gather the
following diagnostic information and share it in the support case:
Diagnostic information | Where can I gather this information? | How do I gather this information?
This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving issues with starting ZooKeeper
servers.
If you need assistance from Apigee Edge Support on this issue, then gather the
following diagnostic information and share it in the support case:
Diagnostic information | Where can I gather this information? | How do I gather this information?
ZooKeeper problems
Note: This topic applies to Edge Private Cloud only.
Playbooks
This section provides information and guidance on some specific procedures that
can be followed for troubleshooting and resolving ZooKeeper connection loss
errors.
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) ~[zookeeper-3.4.6.jar:3.4.6-1569965]
or
Diagnostic information
If you need assistance from Apigee Support on the ZooKeeper connection loss
errors, then gather the following diagnostic information and share it in the support
case:
Note: Some types of problems can only be resolved if you are an Edge Private Cloud user.
Whenever possible, we identify which troubleshooting procedures can be performed by Private
Cloud users, Public Cloud users, or both.
Analytics problems
These topics explain how to troubleshoot cases where analytics data does not
show up in the Analytics dashboards or in custom reports.
Error message: You may not observe any error message unless the disk space is
completely filled on the Postgres server.
Playbook: Postgres server running out of disk space
Deployment errors
Deployment of API proxies might fail due to various reasons such as network
connectivity issues between Edge servers, issues with the Cassandra datastore,
ZooKeeper exceptions, and errors in the API proxy bundle. This section provides
information and guidance on some specific procedures that can be followed for
troubleshooting deployment errors.
Monetization problems
The following topics will help you troubleshoot and fix common Monetization
problems.
Error message:
<error>
<messages>
<message>Exceeded developer limit configuration -</message>
<message>Is Developer Suspended - true</message>
</messages>
</error>
Playbook: Developer Suspended

Error message: You may not observe any error messages, but you will see
Monetization issues as explained in the Symptom section in Monetization
setup issues.
Playbook: Monetization setup issues
OpenLDAP problems
The following topics will help you troubleshoot and fix common OpenLDAP
problems.
Runtime errors
The following topics will help you troubleshoot and fix common runtime problems.
ZooKeeper problems
The following topics will help you troubleshoot and fix common ZooKeeper
problems.
OR
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
OR
The Edge UI may display this error:
Symptom
Analytics data is missing in the Edge UI due to the Qpidd server not transferring
analytics messages to PostgreSQL. In Edge, the edge-qpid-server component
corresponds to the Qpidd server.
● ax-q-axgroup001-consumer-group-001
This queue holds analytics messages pushed from the Message Processors
and Routers. Messages get pulled from here by the edge-qpid-server
which parses messages and inserts them into PostgreSQL. Once messages
are processed successfully they are removed from the queue.
● ax-q-axgroup001-consumer-group-001-dl
This queue is the dead letter queue. It acts as the destination for messages
that edge-qpid-server failed to process and thus no longer wishes to
receive. This typically gets populated when the maximum delivery count is
exceeded, or if PostgreSQL rejected insertion of new data due to runtime
errors.
Error Message
The root cause could be any of various runtime errors from the
edge-qpid-server component. Typically, if edge-qpid-server receives a runtime
error from PostgreSQL, it creates the dead letter queue if it doesn't already exist,
and then sends the failed message there.
Possible Causes
Troubleshooting
Cause | Description | Troubleshooting Instructions | Applicable For
qpid-stat -q
The output returns the set of queues registered with the broker. If the queue with
name ending in "-dl" has messages populated, then there are messages stuck on
the dead letter queue.
Queues
ax-q-axgroup-001-consumer-group-001-dl Y 0 70 70 0 3.9m 3.9m 0 2
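If the qpid-stat -q output has been saved, the dead letter queues can be picked out with a short filter. This is only a convenience sketch; the sample data below is embedded so the snippet is self-contained:

```shell
# Sketch: list dead letter queues (names ending in "-dl") from saved
# `qpid-stat -q` output. Sample lines stand in for real output here.
qpid_out='ax-q-axgroup-001-consumer-group-001    Y 0 70 70 0 3.9m 3.9m 0 2
ax-q-axgroup-001-consumer-group-001-dl Y 0 70 70 0 3.9m 3.9m 0 2'
printf '%s\n' "$qpid_out" | awk '$1 ~ /-dl$/ {print $1}'
```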
1. An upgrade had taken place in the past, during which time PostgreSQL was
down.
2. A temporary outage of PostgreSQL due to network issues.
3. The edge-qpid-server attempted to send a message to PostgreSQL but
PostgreSQL returned a runtime error.
Resolution
1. Note down the names of the queues from the Common Diagnosis Steps. For
example:
● ax-q-axgroup-001-consumer-group-001
● ax-q-axgroup-001-consumer-group-001-dl
2. Run the qpid-tool command to enter an interactive qpid prompt:
qpid-tool
This command returns the following:
Management Tool for QPID
qpid:
3. Run list broker to obtain a list of active brokers:
list broker
This command returns the following:
Object Summary:
ID Created Destroyed Index
=======================================
125 21:00:00 - amqp-broker
Where the ID column specifies the ID of the broker.
4. Note the ID of the broker. In this example it is 125.
5. Run the following command to move the messages from the dead letter
queue back to the actual queue:
call 125 queueMoveMessages ax-q-axgroup-001-consumer-group-001-dl ax-q-axgroup-001-consumer-group-001 100000 {}
This command returns the following:
OK (0) - {}
If there is no output, then there was nothing to do, meaning no messages to
move. If you do not see OK (0), you should contact Apigee Edge Support.
6. Quit the qpid-tool terminal:
quit
7. Wait 5 minutes and then run the diagnosis steps again from Common
Diagnosis Steps. Verify that the messages on the actual queue are getting
processed, and ensure the dead letter message count remains at 0.
If the problem persists even after following the above instructions, gather
the following diagnostic information and share it with Apigee Edge Support:
Symptom
The Analytics dashboards (Proxy Performance, Target Performance, Custom
Reports, etc.) in the Edge UI time out.
Error messages
You see the following error message when the Analytics dashboards time out:
The report timed out: Try again with a smaller date range or a larger aggregation
interval.
Possible causes
The following table lists possible causes of this issue:
Cause For
If any of the Edge components are under capacity (if they have less CPU, RAM, or
IOPS capacity than required), then the Postgres servers or Qpid servers may run
slowly, causing the Analytics dashboards to time out.
Resolution
Ensure that all the Edge components adhere to the minimum hardware
requirements as described in Hardware Requirements.
Based on the output obtained in steps #2 and #3, if you notice that the
duration for which the data has been stored is long (longer than your retention
interval) and/or the table sizes are very large, then you have large
amounts of analytics data in the Postgres database. This could be causing the
Analytics dashboards to time out.
Resolution
1. Determine the retention interval, that is, the duration for which you want to
retain the Analytics data in the Postgres database.
For example, suppose you want to retain 60 days' worth of Analytics data.
2. Run the following command to prune data for a specific organization and
environment:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql pg-data-purge org env num_days_to_purge_back_from_current_date
3. For more information, see Pruning Analytics data.
If the problem persists, then proceed to Insufficient time to fetch Analytics data.
Resolution
The Edge UI has a default timeout of 120 seconds for fetching and displaying the
Analytics data. If the volume of Analytics data to be fetched is very large, then 120
seconds may not be sufficient. Increase the Edge UI timeout value to 300 seconds
by following the instructions in Set the timeout used by the Edge UI for Edge API
management calls (on-premises customers only).
Reload any of the Analytics dashboards and check whether you can view the data
for all the tabs: Hour, Day, Week, and Custom.
Symptom
The custom variable created using the StatisticsCollector policy is not visible under
Custom Dimensions in the Analytics Custom Reports in the Edge UI.
Error messages
Possible causes
Cause For
The code snippet below references a variable named "product id", which contains a
space, so it will not appear under Custom Dimensions in the Custom Report.
<StatisticsCollector name="publishPurchaseDetails">
<Statistics>
<Statistic name="productID" ref="product id" type="string">999999</Statistic>
</Statistics>
</StatisticsCollector>
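A simple illustrative check (not part of Edge) for the space problem described above:

```shell
# Illustrative check: a Statistics Collector variable reference containing
# a space, like "product id" above, will not surface as a custom dimension.
ref="product id"
case "$ref" in
  *" "*) echo "invalid: '$ref' contains a space" ;;
  *)     echo "ok: $ref" ;;
esac
```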
Resolution
Custom variable names used in the StatisticsCollector policy within the API proxy
should adhere to the following guidelines:
If the problem persists, then proceed to No traffic on API Proxy implementing the
StatisticsCollector policy.
If there is no traffic on the API proxy that implements the StatisticsCollector policy,
then the custom variable will not appear in the Custom Reports.
Resolution
Make some calls to the API proxy that implements the StatisticsCollector policy.
Wait for some time, and then check whether the custom variable(s) appear in the
custom dimensions in the Custom Report.
If the problem persists, then proceed to Custom Variable not pushed to Postgres
Server.
Diagnosis
When a custom variable is created in the API proxy and API calls are made, the
variable first gets stored in-memory on the Message Processor. The Message
Processor then sends the information about the new variable to ZooKeeper, which
in turn sends it to the Postgres server to add it as a column in the Postgres
database.
At times, the notification from ZooKeeper may not reach the Postgres server due to
network issues. Because of this error, the custom variable might not appear in the
Custom Report.
Resolution
Solution #1: Restart the Postgres Server
1. Create the /opt/apigee/customer/application/postgres-server.properties
file on the Postgres server machine, if it does not already exist.
2. Add the following line to this file:
conf_pg-agent_forceonboard=true
3. Ensure this file is owned by the apigee user:
chown apigee:apigee /opt/apigee/customer/application/postgres-server.properties
4. Restart the Postgres server:
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restart
5. If you have more than one Postgres server, repeat the steps above on all the
Postgres servers.
6. Undeploy and deploy your API proxy that uses the StatisticsCollector policy.
7. Run the API calls.
8. Check if the custom variable(s) appear in the custom dimensions in Custom
Report.
Symptom
The analytics dashboards (Proxy Performance, Target Performance, etc.) are not
showing any data in the Edge UI. All the dashboards show the following message:
Error messages
This issue does not result in observable errors.
Possible causes
The following table lists possible causes of this issue:
Cause For
Analytics Data not being pushed to Postgres Edge for Private Cloud
Database users
Resolution
1. Make some calls to one or more API proxies in the specific
organization-environment.
2. Wait for a few seconds, and then view the analytics dashboards in the Hour
tab and see if the data appears.
3. If the issue persists, then proceed to Data available in Postgres Database,
but not displaying in UI.
To check if the latest Analytics data is available in the Postgres Master node:
1. Log in to each of the Postgres servers and run the following command to
   identify the master Postgres node:
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql postgres-check-master
2. On the master Postgres node, log in to PostgreSQL:
   psql -h /opt/apigee/var/run/apigee-postgresql -U apigee apigee
3. Check if the table exists for your org-env using the following SQL query in
   the Postgres database:
   \d analytics."orgname.envname.fact"
   Note: Replace orgname and envname with your organization and environment names.
4. Check if the latest data is available in the Postgres database using the
   following SQL query:
   select max(client_received_start_timestamp) from analytics."orgname.envname.fact";
5. If the latest timestamp is very old (or null), data is not reaching the
   Postgres database. The likely cause is that the data is not being pushed from
   the Qpid server to the Postgres database. Proceed to Analytics Data not being
   pushed to Postgres Database.
6. If the latest data is available in the Postgres database on the master node,
   follow the steps below to diagnose why the data is not displaying in the
   Edge UI.
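The fact table identifier in the psql checks above is easy to mistype. A small hypothetical helper (not an Apigee tool, just a convenience) can compose it, and the latest-timestamp query, from an org and env name:

```shell
# Hypothetical helper: compose the analytics fact table identifier used in
# the \d and select queries above from an org name and env name.
fact_table() {
  printf 'analytics."%s.%s.fact"' "$1" "$2"
}

# Compose the full query string for the latest-timestamp check.
latest_ts_query() {
  printf 'select max(client_received_start_timestamp) from %s;' "$(fact_table "$1" "$2")"
}

latest_ts_query myorg prod
```

You can then pipe the composed query into psql, for example: `latest_ts_query myorg prod | psql -h /opt/apigee/var/run/apigee-postgresql -U apigee apigee`.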
Diagnosis
1. Enable Developer Tools in your Chrome browser and get the API used from
one of the analytics dashboards using the below steps:
a. Select the Network tab from the Developer Tools.
b. Start Recording.
c. Reload the Analytics dashboard.
d. On the left hand panel in the Developer Tools, select the row having
"apiproxy?_optimized...".
e. On the right hand panel in the Developer Tools, select the "Headers"
tab and note the "Request URL".
2. Here's sample output from the Developer Tools:
   [Figure: Network tab of Developer Tools showing the request URL used by the
   Proxy Performance dashboard]
3. Run the management API call directly and check if you get the results. Here's
   a sample API call for the Day tab in the Proxy Performance dashboard:
   curl -u username:password "http://management_server_IP_address:8080/v1/organizations/org_name/environments/env_name/stats/apiproxy?limit=14400&select=sum(message_count),sum(is_error),avg(total_response_time),avg(target_response_time)&sort=DESC&sortby=sum(message_count),sum(is_error),avg(total_response_time),avg(target_response_time)&timeRange=08%2F9%2F2017+18:00:00~08%2F10%2F2017+18:00:00&timeUnit=hour&tsAscending=true"
4. If you see a successful response but without any data, it indicates that the
   Management Server is unable to fetch the data from the Postgres server, most
   likely due to network connectivity issues.
5. Check if you are able to connect to the Postgres server from the Management
   Server:
   telnet Postgres_server_IP_address 5432
6. If you are unable to connect to the Postgres server, check if there are any
   firewall restrictions on port 5432.
7. If there are firewall restrictions, that could be why the Management Server
   is unable to pull the data from the Postgres server.
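If telnet is not installed on the Management Server, bash's built-in /dev/tcp pseudo-device gives the same reachability signal. A sketch (the host and port in the final call are placeholders for demonstration; on a real installation you would call it with the Postgres server IP and port 5432):

```shell
# Sketch: check TCP reachability of the Postgres listener without telnet,
# using bash's /dev/tcp pseudo-device (requires bash, not plain sh).
check_port() {
  host=$1; port=$2
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    exec 3>&-
    echo "open"
  else
    echo "closed"
  fi
}

# On a real installation: check_port Postgres_server_IP_address 5432
check_port 127.0.0.1 1   # port 1 on loopback is almost certainly closed
```

An "open" result rules out a firewall block on that port; "closed" points to a firewall rule or a Postgres listener that is down.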
Resolution
1. If there are firewall restrictions, remove them so that the Management Server
   can communicate with the Postgres server.
2. If there are no firewall restrictions, the issue could be due to a network
   glitch on the Management Server; restarting it might fix the issue.
3. Restart all the Management Servers one by one using the following command:
   /opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
4. Check if you are able to see the analytics data in the Edge UI.
If you still don't see the data, contact Apigee Edge Support.
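The one-by-one restart in step 3 can be scripted as a loop over the Management Server hosts. The sketch below is a dry run that only prints the commands; the host names and the assumption of ssh access are hypothetical, so substitute your own topology before executing anything:

```shell
# Dry-run sketch: print the rolling-restart command for each Management
# Server instead of executing it over ssh.
restart_ms_dryrun() {
  for h in "$@"; do
    echo "ssh $h /opt/apigee/apigee-service/bin/apigee-service edge-management-server restart"
  done
}

restart_ms_dryrun ms1.example.com ms2.example.com
```

Restarting one host at a time (rather than all at once) keeps at least one Management Server available to serve API calls during the restart.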
If the data is not being pushed from the Qpid server to the Postgres database, as
determined in Data available in Postgres Database, but not displaying in UI, then
perform the following steps:
1. Check if each of the Qpid servers is up and running by executing the
   following command:
   /opt/apigee/bin/apigee-service edge-qpid-server status
2. If any Qpid server is down, restart it; otherwise, skip to step 4:
   /opt/apigee/bin/apigee-service edge-qpid-server restart
3. Wait for some time and then re-check if the latest data is available in the
   Postgres database:
   a. Log in to PostgreSQL:
      psql -h /opt/apigee/var/run/apigee-postgresql -U apigee apigee
   b. Run the following SQL query to check if the latest data is available:
      select max(client_received_start_timestamp) from analytics."orgname.envname.fact";
4. If the latest data is available, skip the following steps and proceed to the
   last step in the Resolution section. If the latest data is not available,
   proceed with the following steps.
5. Check if the messages from the Qpid server queues are being pushed to the
   Postgres database:
   a. Execute the qpid-stat -q command and check the msgIn and msgOut column
      values.
   b. If msgIn and msgOut are not equal, messages are not being pushed from the
      Qpid server to the Postgres database.
6. If there's a mismatch in the msgIn and msgOut columns, check the Qpid server
   logs in /opt/apigee/var/log/edge-qpid-server/system.log for errors.
7. You may see error messages such as "Probably PG is still down" or "FATAL:
   sorry, too many clients already", as shown in the following sample:
2017-07-28 09:56:39,896 ax-q-axgroup001-persistpool-thread-3
WARN c.a.a.d.c.ServerHandle - ServerHandle.logRetry() : Found the exception to be retriable - . Error observed while trying to connect to jdbc:postgresql://PG_IP_address:5432/apigee Initial referenced UUID when execution started in this thread was a1ddf72f-ac77-49c0-a1fc-d0db6bf9991d Probably PG is still down. PG set used - [a1ddf72f-ac77-49c0-a1fc-d0db6bf9991d]
2017-07-28 09:56:39,896 ax-q-axgroup001-persistpool-thread-3
WARN c.a.a.d.c.ServerHandle - ServerHandle.logRetry() : Could not get JDBC Connection; nested exception is org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
2017-07-28 09:56:53,617 pool-7-thread-1
WARN c.a.a.d.c.ServerHandle - ServerHandle.logRetry() : Found the exception to be retriable - . Error observed while trying to connect to jdbc:postgresql://PG_IP_address:5432/apigee Initial referenced UUID when execution started in this thread was a1ddf72f-ac77-49c0-a1fc-d0db6bf9991d Probably PG is still down. PG set used - [a1ddf72f-ac77-49c0-a1fc-d0db6bf9991d]
2017-07-28 09:56:53,617 pool-7-thread-1
WARN c.a.a.d.c.ServerHandle - ServerHandle.logRetry() : Could not get JDBC Connection; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (FATAL: sorry, too many clients already)
This could happen if the Postgres server is running too many SQL queries, or if
its CPU usage is high and it is therefore unable to respond to the Qpid server.
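The msgIn/msgOut comparison from step 5 can be automated. The sketch below assumes you have already extracted (queue, msgIn, msgOut) triples from the qpid-stat -q output; the real output has more columns and abbreviated counts, so adjust the field numbers for your qpid-tools version. The sample rows are illustrative, not real data:

```shell
# Sketch: print queues whose msgIn and msgOut differ, i.e. queues where
# messages are accumulating instead of being drained to Postgres.
# Input format (one queue per line): <queue-name> <msgIn> <msgOut>
find_stuck_queues() {
  awk '$2 != $3 {print $1}'
}

find_stuck_queues <<'EOF'
ax-q-axgroup001-consumer-group-001 120 120
ax-q-axgroup001-consumer-group-001-dl 3 0
EOF
```

Here only the dead-letter queue is reported, because its msgIn (3) does not match its msgOut (0).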
Resolution
1. Restart the Postgres Server and PostgreSQL as shown below:
   /opt/apigee/bin/apigee-service edge-postgres-server restart
   /opt/apigee/bin/apigee-service apigee-postgresql restart
2. This restart ensures that all the previous SQL queries are stopped and should
   allow new connections to the Postgres database.
3. Reload the Analytics dashboards and check if the Analytics data is being
   displayed.
This issue can typically be resolved by restarting the servers that showed the
"FAILURE" or "UNKNOWN" status.
Resolution
To remove the stale UUIDs and add the correct UUIDs of the servers, contact
Apigee Edge Support.
Symptom
The Postgres Server containing the Analytics data has run out of disk space.
In the following example, you can see that the disk /u01 is 85% full, with
176 GB of its 207 GB capacity used:
$ df -g
Error messages
You may not observe any error message unless the disk space is completely filled
on the Postgres Server.
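A quick way to quantify the symptom is to read the use% column from POSIX df for the filesystem holding the Postgres data. A sketch (DATA_DIR is a stand-in; on a Postgres node, point it at the mount that holds the Apigee data directory, typically under /opt/apigee):

```shell
# Sketch: print the percentage of disk used on the filesystem that
# contains DATA_DIR, using POSIX df -P output (column 5 is use%).
DATA_DIR=.
usage=$(df -P "$DATA_DIR" | awk 'NR==2 {gsub(/%/, ""); print $5}')
echo "disk usage: ${usage}%"
```

This number is what you would feed into monitoring or an alert threshold so the disk never fills completely.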
Possible causes
One of the typical causes for disk space errors on Postgres Servers is that you
don't have adequate disk space to store the large volumes of analytics data. The
steps provided below will help you to determine if you have enough disk space or
not and take appropriate action to address the issue.
Resolution
With the increase in API traffic to Edge, the amount of analytics data getting stored
in the Postgres database will also increase. The amount of analytics data that can
be stored in Postgres database is limited by the amount of disk space available on
the system.
Therefore, you cannot continue to keep storing additional analytics data on the
Postgres database without taking one of the following actions:
If you don't prune the data at regular intervals manually or by using a cron job, then
the amount of analytics data continually increases and can eventually lead to your
running out of disk space on the system.
Resolution
1. Determine the retention interval, that is, the duration for which you want to
   retain the analytics data in the Postgres database.
2. Run the following command to prune data for a specific organization and
   environment:
   /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql pg-data-purge org env number_of_days_to_retain [Delete-from-parent-fact - N/Y] [Skip-confirmation-prompt - N/Y]
The script has the following options:
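Pruning is usually automated rather than run by hand. A hypothetical crontab entry that retains 30 days of data for an org myorg and env prod, with Delete-from-parent-fact set to N and Skip-confirmation-prompt set to Y (the org, env, retention, and log path are all examples; adjust them to your own retention policy):

```
# Hypothetical cron entry: purge analytics older than 30 days, nightly at 02:00.
0 2 * * * /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql pg-data-purge myorg prod 30 N Y >> /var/log/apigee-pg-purge.log 2>&1
```

Skipping the confirmation prompt (Y) is what makes the command safe to run unattended from cron.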
In an Edge for Private Cloud installation, you might have to remove Postgres and
Qpid servers from an existing analytics group, or add them to an analytics group.
This document describes how to add and remove Postgres and Qpid servers in an
existing Edge installation for a single Postgres installation and a master-standby
Postgres installation.
Prerequisites
Ability to make management server API calls using the system admin credentials.
18. Use the following API to add the Postgres server UUIDs to the consumer
    group:
    curl -v -u adminEmail:pword -X POST -H 'Content-Type: application/json' "http://ms-IP:8080/v1/analytics/groups/ax/axgroupname/consumer-groups/consumer-group-001/datastores?uuid=masteruuid,slaveuuid"
20. Example call and output:
    curl -v -u adminEmail:pword -X POST -H 'Content-Type: application/json' "http://localhost:8080/v1/analytics/groups/ax/axgroup-001/consumer-groups/consumer-group-001/datastores?uuid=8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77,731c8c43-8c35-4b58-ad1a-f572b69c5f0"
    [ {
      "name" : "axgroup-001",
      "properties" : {
      },
      "scopes" : [ "example~prod", "example~test" ],
      "uuids" : {
        "qpid-server" : [ "54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
        "postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77:731c8c43-8c35-4b58-ad1a-f572b69c5f0" ]
      },
      "consumer-groups" : [ {
        "name" : "consumer-group-001",
        "consumers" : [ "54a96375-33a7-4fba-6bfa-80fda94f6e07" ],
        "datastores" : [ "8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77:731c8c43-8c35-4b58-ad1a-f572b69c5f0" ],
        "properties" : {
        }
      } ],
      "data-processors" : {
      }
    } ]
25.Restart all edge-postgres-server and edge-qpid-server components
on all nodes to make sure the change is picked up by those components:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server wait_for_ready
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restart
Error Message
The edge-postgres-server system log will have the following error after a restart for
each organization/environment combination where the path is missing:
Possible Causes
Typically this happens when an organization was on-boarded at a time when more
than one analytics group existed that used the same postgres-server UUIDs. To
check how many analytics groups exist, run the following API call:
If this API returns multiple groups, then it is most likely the cause of the issue. In
the following example, notice that two groups exist, differentiated by only a hyphen:
axgroup-001 and axgroup001
[{
"name" : "axgroup-001",
"properties" : {
},
"scopes" : [ "VALIDATE~test" ],
"uuids" : {
"qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07" ],
"postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77" ]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07" ],
"datastores" : [ "8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77" ],
"properties" : {
}
} ],
"data-processors" : {
}
}, {
"name" : "axgroup001",
"properties" : {
},
"scopes" : [ "example~prod" ],
"uuids" : {
"qpid-server" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07" ],
"postgres-server" : [ "8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77" ]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "94c96375-1ca7-412d-9eee-80fda94f6e07" ],
"datastores" : [ "8ee86b70-5b33-44b6-b2f8-1b0ec0ec8d77" ],
"properties" : {
}
} ],
"data-processors" : {
}
} ]
Specifically, the issue is caused if the postgres-server UUID that is configured for
each axgroup is the same. A quick way to determine this is to run the following:
The same set of PostgreSQL servers cannot be assigned to two axgroups. The
quickest way to resolution is to delete the postgres server information from one
axgroup and assign all scopes (organization, environment combinations) to the
other group. To do this, follow these steps:
1. Remove the postgres server components from one of the two axgroups using the
   instructions at Adding and deleting analytics components in analytics groups.
   If you are seeing problems only with non-production scopes, choose the
   axgroup which does NOT have any production scope for this exercise.
   Otherwise, choose the axgroup with the least number of scopes.
2. Remove all scopes from the axgroup that no longer has any postgres servers,
   and those for which you are seeing the custom dimension issue from their
   axgroup. Assuming axgroup-001 from the example above is the group you want to
   use, you would use the following call to remove the scope for the myorg
   organization and the prod environment:
   curl -u username:password -X DELETE 'http://management-server-host:8080/v1/analytics/groups/ax/axgroup001/scopes?org=myorg&env=prod'
3. Enable analytics using the Enable analytics API.
4. Restart the edge-postgres-server component on the postgres master node, and
   all edge-qpid-server components you may have installed, using the following
   commands:
   /opt/apigee/apigee-service/bin/apigee-service edge-postgres-server restart
   /opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart
5. Verify that the error shown in the Error Message section above no longer
   occurs in the Postgres server's log file
   /opt/apigee/var/log/edge-postgres-server/logs/system.log.
6. Confirm that the custom dimensions show up in the Edge UI.
The output of the ZooKeeper tree can be obtained using the following command:
/opt/apigee/apigee-zookeeper/contrib/zk-tree.sh > zktree-output.txt
"code" : "dataapi.service.PGFoundInMultipleGroups",
"message" : "dataapi.service.PGFoundInMultipleGroups",
"contexts" : [ ]
error:
Possible Causes
Sample output showing two analytics groups:
[ {
  "name" : "axgroup-001",
  "properties" : {
  },
  "scopes" : [ ],
  "uuids" : {
    "qpid-server" : [ "5c1e9690-7b58-499a-a4bb-d54454474b8f", "7794c428-e553-4ed2-843d-69f93bbec8a3" ],
    "postgres-server" : [ "3b28b790-ec4e-45c5-a8d0-6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ]
  },
  "consumer-groups" : [ {
    "name" : "consumer-group-001",
    "consumers" : [ "5c1e9690-7b58-499a-a4bb-d54454474b8f", "7794c428-e553-4ed2-843d-69f93bbec8a3" ],
    "datastores" : [ "3b28b790-ec4e-45c5-a8d0-6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ],
    "properties" : {
    }
  } ],
  "data-processors" : {
  }
}, {
  "name" : "axgroup001",
  "properties" : {
    "consumer-type" : "ax"
  },
  "scopes" : [ "017pdspoint~dev", "010test~dev", "019charmo~dev",
    "009gcisearch1~dev", "000fj~trial-fjwan", "009dekura~dev", "008pisa~dev",
    "004fjadrms~dev", "018k5billing~dev", "004study14~dev", "001teama~dev",
    "005specdb~dev", "test~dev", "000fj~prod-fjwan", "012pjweb~dev",
    "020workflow~dev", "007ikou~prod-fjwan", "003asano~dev", "013mims~dev",
    "006studyhas~dev", "006efocus~dev", "002wfproto~dev", "008murahata~dev",
    "016mediaapi~dev", "015skillnet~dev", "014aclmanager~dev", "010fjppei~dev",
    "000fj~trial", "003esupport~dev", "000fj~prod", "005ooi~dev", "test~elb1",
    "007fjauth~dev", "011osp~dev", "002study~dev", "999test~dev" ],
  "uuids" : {
    "qpid-server" : [ "5c1e9690-7b58-499a-a4bb-d54454474b8f", "7794c428-e553-4ed2-843d-69f93bbec8a3" ],
    "aries-datastore" : [ ],
    "postgres-server" : [ "3b28b790-ec4e-45c5-a8d0-6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ],
    "dw-server" : [ ]
  },
  "consumer-groups" : [ {
    "name" : "consumer-group-001",
    "consumers" : [ "5c1e9690-7b58-499a-a4bb-d54454474b8f", "7794c428-e553-4ed2-843d-69f93bbec8a3" ],
    "datastores" : [ "3b28b790-ec4e-45c5-a8d0-6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ],
    "properties" : {
    }
  } ],
  "data-processors" : {
  }
} ]
7. This output shows that there are two analytics groups: axgroup-001 and
   axgroup001.
8. Check to make sure all of the analytics groups have scopes defined. In the
   sample analytics groups output shown above, the analytics group axgroup-001
   does not have any scopes defined, but it still has Postgres servers defined
   as datastores.
9. Execute the following Qpid queue stats command on the Qpid servers and
   validate that no messages are coming in for the specific analytics group
   identified in step #2:
   qpid-stat -q
   The following sample Qpid queue stats indicate that there are no messages
   coming in for the queue of the analytics group from the example above
   (axgroup-001):
queue                                      dur  autoDel  excl  msg  msgIn  msgOut  bytes  bytesIn  bytesOut  cons  bind
140995fe-71a7-4000-a1f4-71b7a951da7f:0.0    Y      Y            0    0      0       0      0        0         1     2
ax-q-axgroup-001-consumer-group-001         Y                   0    0      0       0      0        0         1     2
ax-q-axgroup-001-consumer-group-001-dl      Y                   0    0      0       0      0        0         0     2
ax-q-axgroup001-consumer-group-001          Y                   0    2      2       0      2        2         1     2
ax-q-axgroup001-consumer-group-001-dl       Y                   3    3      0       5      5        0         0     2
3. Repeat the same API call above if there are multiple consumers, mentioning
   the UUID of each consumer in a separate API call. For the example shown
   above, the following API removes the consumer with UUID
   5c1e9690-7b58-499a-a4bb-d54454474b8f:
   curl -v -X DELETE -H 'Accept:application/json' -H 'Content-Type:application/json' 'http://localhost:8080/v1/analytics/groups/ax/axgroup-001/consumer-groups/consumer-group-001/consumers/5c1e9690-7b58-499a-a4bb-d54454474b8f'
Sample response:
{
  "name" : "axgroup-001",
  "properties" : {
  },
  "scopes" : [ ],
  "uuids" : {
    "qpid-server" : [ "5c1e9690-7b58-499a-a4bb-d54454474b8f", "7794c428-e553-4ed2-843d-69f93bbec8a3" ],
    "postgres-server" : [ "3b28b790-ec4e-45c5-a8d0-6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ]
  },
  "consumer-groups" : [ {
    "name" : "consumer-group-001",
    "consumers" : [ "7794c428-e553-4ed2-843d-69f93bbec8a3" ],
    "datastores" : [ "3b28b790-ec4e-45c5-a8d0-6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ],
    "properties" : {
    }
  } ],
  "data-processors" : {
  }
}
* Connection #0 to host localhost left intact
* Closing connection #0
Re-run the same API to delete the other consumer, whose UUID is
7794c428-e553-4ed2-843d-69f93bbec8a3 in the present example.
Example: The following API deletes the consumer group named consumer-group-001:
curl -v -X DELETE 'http://localhost:8080/v1/analytics/groups/ax/axgroup-001/consumer-groups/consumer-group-001'
Sample response:
{
  "name" : "axgroup-001",
  "properties" : {
  },
  "scopes" : [ ],
  "uuids" : {
    "qpid-server" : [ "5c1e9690-7b58-499a-a4bb-d54454474b8f", "7794c428-e553-4ed2-843d-69f93bbec8a3" ],
    "postgres-server" : [ "3b28b790-ec4e-45c5-a8d0-6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ]
  },
  "consumer-groups" : [ {
    "name" : "consumer-group-001",
    "consumers" : [ ],
    "datastores" : [ "3b28b790-ec4e-45c5-a8d0-6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ],
    "properties" : {
    }
  } ],
  "data-processors" : {
  }
}
* Connection #0 to host localhost left intact
* Closing connection #0
1. Use the following management API to delete qpid-servers from the particular
   axgroup:
   curl -X DELETE -u admin@email.com "http://localhost:8080/v1/analytics/groups/ax/{axgroup-name}/servers?uuid={qpid-server-uuid}&type=qpid-server" -H "Content-type: application/json"
2. Re-run the same API call if there are multiple Qpid servers.
   Example: Use the following API to delete the Qpid server with UUID
   7794c428-e553-4ed2-843d-69f93bbec8a3 in the present example:
   curl -X DELETE "http://localhost:8080/v1/analytics/groups/ax/axgroup-001/servers?uuid=7794c428-e553-4ed2-843d-69f93bbec8a3&type=qpid-server" -H "Content-type: application/json"
Sample response:
{
  "name" : "axgroup-001",
  "properties" : {
  },
  "scopes" : [ ],
  "uuids" : {
    "qpid-server" : [ "5c1e9690-7b58-499a-a4bb-d54454474b8f" ],
    "postgres-server" : [ "3b28b790-ec4e-45c5-a8d0-6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ]
  },
  "consumer-groups" : [ {
    "name" : "consumer-group-001",
    "consumers" : [ ],
    "datastores" : [ "3b28b790-ec4e-45c5-a8d0-6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ],
    "properties" : {
    }
  } ],
  "data-processors" : {
  }
}
3. Use the following management API to delete the Postgres servers if you have
   a master and slave Postgres setup:
   curl -v -X DELETE -H 'Accept:application/json' "http://{mgmt-server-host}:8080/v1/analytics/groups/ax/{axgroup-name}/servers?uuid={postgres-master-uuid,postgres-slave-uuid}&type=postgres-server&force=true"
   Example: In the example shown above, there are master and slave Postgres
   servers, so you can use the following API to delete them:
   curl -v -X DELETE -H 'Accept:application/json' "http://localhost:8080/v1/analytics/groups/ax/axgroup-001/servers?uuid=3b28b790-ec4e-45c5-a8d0-6d6f2088da65,750cd8ba-1799-4dfb-8c74-548010e95e5e&type=postgres-server&force=true"
Sample response:
{
  "name" : "axgroup-001",
  "properties" : {
  },
  "scopes" : [ ],
  "uuids" : {
    "qpid-server" : [ ],
    "postgres-server" : [ ]
  },
  "consumer-groups" : [ ],
  "data-processors" : {
  }
}
Example: The following API deletes the analytics group axgroup-001:
curl -v -X DELETE "http://localhost:8080/v1/analytics/groups/ax/axgroup-001"
Sample response:
{
  "properties" : {
  },
  "scopes" : [ ],
  "uuids" : {
  },
  "consumer-groups" : [ ],
  "data-processors" : {
  }
}
* Connection #0 to host localhost left intact
* Closing connection #0
STEP 6: Check if the group is completely removed
1. Use the following management API to check if the particular analytics group
   has been completely removed:
   curl -v -u admin@email.com -X GET "http://{mgmt-server-host}:8080/v1/analytics/groups/ax"
2. Example:
   curl localhost:8080/v1/analytics/groups/ax
Sample response:
[ {
"name" : "axgroup001",
"properties" : {
"consumer-type" : "ax"
},
"scopes" : [ "017pdspoint~dev", "010test~dev", "019charmo~dev",
"009gcisearch1~dev", "000fj~trial-fjwan", "009dekura~dev",
"008pisa~dev", "004fjadrms~dev", "018k5billing~dev",
"004study14~dev", "001teama~dev", "005specdb~dev", "test~dev",
"000fj~prod-fjwan", "012pjweb~dev", "020workflow~dev",
"007ikou~prod-fjwan", "003asano~dev", "013mims~dev",
"006studyhas~dev", "006efocus~dev", "002wfproto~dev",
"016mediaapi~dev", "015skillnet~dev", "014aclmanager~dev",
"010fjppei~dev", "000fj~trial", "003esupport~dev", "000fj~prod",
"005ooi~dev", "test~elb1", "007fjauth~dev", "011osp~dev",
"002study~dev" ],
"uuids" : {
"qpid-server" : [ "5c1e9690-7b58-499a-a4bb-d54454474b8f",
"7794c428-e553-4ed2-843d-69f93bbec8a3" ],
"aries-datastore" : [ ],
"postgres-server" : [ "3b28b790-ec4e-45c5-a8d0-
6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ],
"dw-server" : [ ]
},
"consumer-groups" : [ {
"name" : "consumer-group-001",
"consumers" : [ "5c1e9690-7b58-499a-a4bb-d54454474b8f",
"7794c428-e553-4ed2-843d-69f93bbec8a3" ],
"datastores" : [ "3b28b790-ec4e-45c5-a8d0-
6d6f2088da65:750cd8ba-1799-4dfb-8c74-548010e95e5e" ],
"properties" : {
}
} ],
"data-processors" : {
}
} ]
Step 8: Verify
Verify that the data shows up in the analytics dashboards.
On Apigee, the Routers are configured to log only the error messages in the error
log files by default. However, there could be many situations where you may need
to gather more information to determine why a specific error occurred. One of the
ways to do this is to configure the Router to operate in debug mode so that you can
get debug logs, which can help you to gain more information about the error and
resolve it faster.
This document explains how to enable debug logs on the Apigee Edge’s Router for
the requests on a specific virtual host. Debug logging can be enabled to capture
more information when there are any issues such as malformed request, 400 Bad
Request - SSL Certificate Error, on the Northbound (between the Client application
and Router).
Note: It is recommended to keep the default configuration, that is, to log only error messages
for NGINX, to avoid any performance implications. However, if you need to gather additional
information to debug a specific issue, enable the NGINX debug log on the Router for only a
short period of time. Once you have gathered the required information, immediately revert
to the normal mode.
Required permissions: To perform this task, you must have the following permissions:
● Access to the specific Routers, where you would like to enable debug logs.
● Access to edit the Router component virtual host conf files in the
/opt/nginx/conf.d/ folder.
The following steps describe how to locate the relevant virtual host
configuration file on the Router:
1. If you know the organization name, environment name, and virtual host for
   the specific API request you would like to debug, determine the virtual host
   conf file as follows:
   a. Navigate to the /opt/nginx/conf.d/ directory.
   b. Search for the ORG_NAME_ENV_NAME_VIRTUALHOST.conf file in the conf.d
      directory using the following command:
      ls -ltrh | grep "ORG_NAME_ENV_NAME_VIRTUALHOST_NAME"
2. If you don't know the organization name, you can identify the virtual host
   configuration file by using the host alias name used in the API request, as
   follows:
   a. Navigate to the /opt/nginx/conf.d/ directory and search for the host
      alias with which the request was made using the following command:
      ls -ltrh | grep -r 'HOST_ALIAS_NAME'
   b. Sample output: Let's say the host alias name is opdk.cert-test.com. When
      you run the command above, the matching configuration file(s) appear in
      the output.
   Note: Multiple configuration files will be shown if the same host alias is used on
   different ports. Identify the port number on which API requests have been made to
   identify the corresponding virtual host configuration file.
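The host-alias lookup in step 2 can be done in one command with recursive grep. This sketch builds a throwaway conf directory with a fabricated file name so it is runnable anywhere; on a Router you would run the same grep against /opt/nginx/conf.d instead:

```shell
# Sketch: list the virtual host conf file(s) that mention a host alias.
# CONF_DIR and the file created here stand in for /opt/nginx/conf.d on the
# Router and a real generated conf file.
CONF_DIR=./conf.d
mkdir -p "$CONF_DIR"
printf 'server_name opdk.cert-test.com;\nlisten 9001;\n' > "$CONF_DIR/myorg_test_secure.conf"

# -r searches recursively, -l prints only matching file names.
grep -rl 'opdk.cert-test.com' "$CONF_DIR"
```

If several files match, check the listen port in each one to pick the virtual host your requests actually hit.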
The following steps describe how to enable debug logs on the Apigee Routers for a
specific virtual host.
Note: The techniques described in this content are available in Apigee Edge for Private Cloud
only.
Introduction
When troubleshooting, you may want to execute APIs directly against Apigee
components such as a router or message processor. For example, you might want
to do this in order to:
● Debug intermittent issues with certain API requests that point to an issue
with a particular Apigee component (router/message processor).
● Gather more diagnostic information by enabling debug mode on a specific
instance of an Apigee component.
● Rule out that the issue is caused by a specific Apigee component.
● Discover whether operations such as spawning a new instance or
restarting the instance have the desired impact.
Prerequisites
● Direct access to the router or message processor components against which
API requests need to be run.
● The cURL tool needs to be installed on the particular instance of the
component.
● The API request you want to test in cURL format.
For example, here's a curl command that you could use to make a request
to an API proxy from your local machine:
● curl https://myorg-test.mycompany.com/v1/customers -H
'Authorization: Bearer AxLqyU09GA10lrAiVRQCGXzMi9W2'
● Note: You can identify issues such as missing deployments with even a partial curl. As
long as traffic is sent to the correct proxy for which only the correct host and basepath
are necessary, you can run useful tests without having headers that would be necessary
to get a success response, such as an Authorization header. In this case, the above
example without the header would be a valid test case:
● curl https://myorg-test.mycompany.com/v1/customers
If the DNS entry of the host alias is configured to point to the Apigee Edge routers
(in other words, there is no Elastic Load Balancer (ELB)), then you could use the
following curl commands to make the API requests directly to a router:
If the DNS entry of the host alias is configured to point to an Elastic Load Balancer
(ELB), then you could use the following curl commands to make the API requests
directly to a router:
The default port on which the message processor listens for traffic from the Apigee
router is 8998. Therefore, for all cases where this port was not changed, traffic
needs to be sent directly to this port on a specific message processor instance, as
in the following example. The curl request has to be sent to URL
http://INTERNAL_IP_OF_MP:8998 along with the header X-Apigee.Host with
the value hostname including the port used in the virtual hosts as shown in the
following three examples:
Note: If your actual request has multiple headers, you can pass them when making API
requests directly to the router or message processor, in the same way you would when
calling through the normal path.
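The message processor request shape described above is easy to get wrong, so a dry-run helper that only prints the curl command can be useful. All names and addresses below are placeholders; the fixed parts are the default MP port 8998 and the X-Apigee.Host header carrying the virtual host's host alias and port:

```shell
# Dry-run sketch: compose the curl command for sending a request directly
# to a message processor on port 8998, as described above.
mp_curl() {
  mp_ip=$1; path=$2; vhost=$3
  printf "curl 'http://%s:8998%s' -H 'X-Apigee.Host: %s'\n" "$mp_ip" "$path" "$vhost"
}

mp_curl 10.10.0.5 /v1/customers myorg-test.mycompany.com:443
```

Printing the command first lets you review the exact target and header before running it against a production message processor.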