
DBM LIME User Guide

Version 1.0.0-C7
Revision 1.3
Legal Notices
Copyright © 2016-2021 DBM Cloud Systems Inc. All rights reserved.
This material is proprietary and confidential. The document describes concepts, methods and ideas
that are legally protected intellectual property.
This document and the related software, hardware and other components may contain confidential and
proprietary information of DBM Cloud Systems Inc., and are made available solely for confidential use
by the intended recipient.
Unless expressly authorized by DBM Cloud Systems Inc., in a separate written agreement, no part of
this document or any related software may be used, reproduced, modified, disclosed, or distributed, in
any form or by any means.
Restricted Rights Legend: Any use, duplication or disclosure by the government is subject to the
restrictions set forth in FAR Section 12.212 and DFAR 227.7202.
DBM, the DBM cloud logos, and their variants are trademarks of DBM Cloud Systems Inc.
All other trademarks used herein are the trademarks or registered trademarks of their respective
owners.
DBM Cloud Systems Inc. may have patents, copyrights, trademarks, and other intellectual property
rights (or applications thereof) covering this document, its subject matter and any related software and
hardware. Unless expressly authorized by DBM Cloud Systems Inc. in a separate written agreement,
possession of this document does not give you any license under any of these rights.
DBM Cloud Systems Inc. makes no warranties, express or implied, and assumes no liability for any
damage or loss with respect to or resulting from the use of this document or any related software or
hardware. All information is subject to change without notice.

Contents

Introduction.............................................................................................................................................. 5
Minimum Requirements........................................................................................................................................ 5
Installation............................................................................................................................................... 6
Default User Information....................................................................................................................................... 6
Network Incoming / Outgoing Ports...................................................................................................................... 6
SSH Options......................................................................................................................................................... 7
ISO Install............................................................................................................................................................. 8
Ask Static IP....................................................................................................................................................................... 10
Ask Static IP & Boot Drive.................................................................................................................................................. 10
Endless Install Reboots...................................................................................................................................................... 11
Debugging Install Issues.................................................................................................................................................... 11
QCOW2 / VDI / VMDK Install............................................................................................................................. 12
Amazon EC2 AMI DBM Shared Private Image.................................................................................................. 12
Creating An Amazon EC2 AMI From VMDK...................................................................................................... 13
trust-policy.json.................................................................................................................................................................. 14
role-policy.json................................................................................................................................................................... 15
containers.json................................................................................................................................................................... 16
Monitoring AMI Import Completion..................................................................................................................................... 16
Console Access.................................................................................................................................................. 18
User Management.............................................................................................................................................. 18
Network Configuration........................................................................................................................................ 18
Upgrades............................................................................................................................................... 20
S3 Proxy................................................................................................................................................ 21
S3 Proxy Configuration....................................................................................................................................... 21
Proxy Endpoints / Credentials............................................................................................................................................ 21
Mirroring Limitations........................................................................................................................................................... 21
Proxy Examples................................................................................................................................................................. 22
SSL Certificate and SERVER_NAME................................................................................................................................ 22
S3CMD Utility..................................................................................................................................................... 23
Operations............................................................................................................................................. 26
Logging and Debug............................................................................................................................................ 27
Docker................................................................................................................................................................................ 27
Log Files............................................................................................................................................................................. 27
System Log File.................................................................................................................................................................. 27

Counters................................................................................................................................................ 28

NGINX Stub Status............................................................................................................................................. 28
DBM Status Counters......................................................................................................................................... 28

Table of Figures
Figure 1 - Install Screen......................................................................................................................... 8
Figure 2 - Ask Static IP......................................................................................................................... 10
Figure 3 - Boot Drive / Options Screen.................................................................................................10
Figure 4 - Login Console Screen.......................................................................................................... 18
Figure 5 - NetworkManager TUI........................................................................................................... 19

Revisions
1.0 Initial version
1.1 Add "cos2aws_all" config to proxy all buckets for an endpoint. See Proxy Examples.
Add SSL Certificate and SERVER_NAME section.
1.2 Add subjectAltName configuration example for generating an SSL Certificate.
1.3 Remove _all configs, since all is implied when BUCKET/BUCKET2 is not set.

Introduction
This document describes the installation and usage of DBM LIME (Live Intelligent Migration Engine).
The DBM LIME software provides a live proxy for S3 object storage traffic. The proxy server sits in the
data path in front of two S3 servers: the new server being migrated to, and the old server being
migrated from. Applications perform all S3 operations against the proxy server instead of the original
S3 server being migrated from. All writes go to the new server, while reads first try the new server and
fall back to the old server if the object is not found. The DBM LIME software also provides config
options to mirror PUT, POST, and DELETE operations to the old server.
The DBM LIME software runs on a dedicated virtual or bare-metal machine that is X64 CentOS7.X
compatible. This machine is used to configure and manage any number of LIME proxies. The amount
of CPU, RAM, and network bandwidth required can vary greatly depending on the S3 application traffic
load.
Every effort has been made by DBM Cloud Systems to address any known vulnerability issues with
updates being made available as necessary.
After installation, check /etc/dbm/s3proxy.d/port443.env.sample for the latest options on configuring the
proxy settings.

Minimum Requirements
● Intel / AMD X64 based server or virtual machine
● 20GB disk
● 4GB RAM (16GB+ RAM recommended)
● 2 CPUs (8+ CPUs recommended)
● 1GbE Ethernet (10GbE or faster recommended)
● CentOS7.X compatible

Installation
The DBM LIME software is installed to a dedicated virtual or bare-metal machine. It is not meant to
coexist with other application software on the same system or be an additional software stack installed
to an existing enterprise application server. The DBM LIME distribution is an X64 based CentOS7.X
install with yum package management as well as DBM docker containers.
The installation requires Internet access to download the latest DBM LIME docker containers as well as
update the system to the latest RPMs. The installation will be complete once it automatically reboots
and the console shows a dbmlime login prompt.
Five image types are available for installation: dvd iso, qcow2, vdi, vmdk, and Amazon AMI. For an
Amazon AMI, you must contact DBM support so that it can be shared to your account. Please email
support@dbmclouds.com with your Amazon Account # to request an AMI share. Instructions are
included further down in this manual for creating your own AMI from the vmdk image.
Example installation filenames:
dbm_lime-1.0.0-c7_dvd.iso
dbm_lime-1.0.0-c7.qcow2
dbm_lime-1.0.0-c7.vdi
dbm_lime-1.0.0-c7.vmdk (can be used to create an Amazon AMI)

After the system is booted for the first time, there may be an additional 5-minute delay before the
dbm-init systemd service completes initial configuration of the system.

Default User Information


The following default user credentials are configured for ssh / console and http status access. Note that
for images imported into a cloud environment like Amazon AWS EC2, ssh password access may have
been disabled during instance cloud-init. You may instead have to ssh in using the SSH key
provisioned during instance creation.
Username: dbm (dbm user has sudo permissions)
Password: ~@dbmclouds@~ (tilde-at-dbmclouds-at-tilde)
Username: root
Password: DNZHUO_IDEQTQ
In the case of ssh pubkey access, you would pass your instance key pair file in the command-line:
ssh -i ~/dbmec2.pem dbm@Instance_public_IP

Network Incoming / Outgoing Ports


Incoming ports: TCP (22, plus any proxy ports, typically 443)

SSH Options
If you want to enable ssh password logins that may have been disabled by cloud-init:
sudo vim /etc/ssh/sshd_config
PasswordAuthentication yes
sudo systemctl restart sshd
Conversely, to allow only ssh pubkey access, set ‘PasswordAuthentication no’.

You may also want to consider disabling root ssh access:


sudo vim /etc/ssh/sshd_config
#PermitRootLogin yes
PermitRootLogin no
sudo systemctl restart sshd

Once logged in via the console as the dbm user, edit ~/.ssh/authorized_keys (for example with vi) and
add any additional public ssh keys desired.
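For example, assuming the extra public key has been copied to /tmp/extra_key.pub (a placeholder path),
it can be appended and the file permissions kept strict with:
cat /tmp/extra_key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys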

ISO Install
The iso is a bootable unattended self-install image that will overwrite / install to the first disk found on a
system or virtual machine. At the completion of install, the system will automatically reboot and
initialize the software at first boot.
Below is the boot screen for the iso image. It will automatically proceed to install within 10 seconds
unless a key (such as an arrow key) is pressed. The iso is ejected from the mounted drive at the end of
install to prevent booting back to the installer.

Figure 1 - Install Screen

The default boot option should work well for most installs if a DHCP server is available on the network.

After first boot, you can always decide to change to a static IP by editing the ifcfg-eth0 file.
/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
TYPE=Ethernet
NM_CONTROLLED=no
IPADDR=192.168.122.235
NETMASK=255.255.255.0
GATEWAY="192.168.122.1"
DNS1="8.8.8.8"
DEFROUTE="yes"
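Because the example above sets NM_CONTROLLED=no, a restart of the legacy network service (or a
reboot) is typically required for the change to take effect; one way to do this on CentOS7.X:
sudo systemctl restart network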

Ask Static IP
The default install will use DHCP networking for the first ethernet device. If you need to use a different
device or assign the IP manually, then a static IP must be provisioned.

Figure 2 - Ask Static IP

Ask Static IP & Boot Drive


The default install will use DHCP networking for the first ethernet device (eth0) and the first disk (xvda,
vda, sda, hda, or nvme0n1). If you need to select static IP provisioning and/or a different boot / install
drive, arrow down to one of the other menu options and hit ENTER. If choosing one of the alternate
menu boot options drops you to a root shell, then the video device for the VM may have to be changed
to properly display the query screen. In the case of KVM, try Video QXL.

Figure 3 - Boot Drive / Options Screen

Endless Install Reboots
At the end of install, the kickstart installer will eject the virtual cdrom with the install iso to avoid endless
reboot / install cycles. Some versions of KVM may not eject the install iso on reboot. There are two
options: force off the VM when the CD Install menu shows on the console and manually eject the CD,
or provision the VM to not restart on reboot:
virsh edit dbmlime1
<on_reboot>destroy</on_reboot>
This setting allows you to manually eject the iso image from the virtual cdrom after install has rebooted
as the VM will be stopped. Once ejected, you can re-enable restart on reboot.
virsh edit dbmlime1
<on_reboot>restart</on_reboot>
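While the VM is stopped, the iso can also be ejected from the command line with virsh change-media.
This is only a sketch; hda is an assumed cdrom target, so verify the actual target first with
'virsh domblklist dbmlime1':
virsh change-media dbmlime1 hda --eject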

Debugging Install Issues


The installer uses the standard CentOS 7.8 kickstart installer with anaconda. Virtual terminals are
available via Ctrl-Alt-F1 through F6 to switch to log screens or a root shell on the console. The installer
log files can be found in /tmp and may help provide some clue as to what went wrong.
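For example, from a root shell (Ctrl-Alt-F2), the logs can be listed and inspected with commands along
these lines (exact log file names may vary by installer version):
ls /tmp/*.log
less /tmp/anaconda.log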
Check to make sure that your VM install disk is at least 20GB in size and that external internet access
is available.
Please contact DBM Support (email: support@dbmclouds.com) if you are unable to proceed.

QCOW2 / VDI / VMDK Install
The qcow2, vdi, and vmdk images can be imported to create virtual machines. The software will
initialize on first boot. If you have access to a Linux machine, you can embed your own ssh public key
prior to importing the image using virt-sysprep. In the example below, the public key to be embedded
into the dbm user’s authorized_keys is located at /tmp/id_rsa.pub.

virt-sysprep -a dbm_lime-1.0.0-c7.qcow2 --ssh-inject dbm:file:/tmp/id_rsa.pub

The images come preconfigured with eth0 set for DHCP addressing.
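As an illustration only, a KVM virtual machine could be created from the qcow2 image with virt-install;
the VM name, resource sizing, and image path below are placeholder assumptions to adapt:
virt-install --name dbmlime1 --memory 16384 --vcpus 8 \
--disk path=/var/lib/libvirt/images/dbm_lime-1.0.0-c7.qcow2 \
--import --os-variant centos7.0 --network network=default --graphics vnc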

Amazon EC2 AMI DBM Shared Private Image


To avoid the hassle of creating an AMI from the DBM VMDK image, some customers opt for the DBM
private image for a release to be shared with them. Contact DBM customer support
(support@dbmclouds.com) to arrange for the shared private image.
Installing the EC2 AMI image is as simple as selecting the AMI shared to you and launching an instance
from it. The ssh key you provide when launching the instance will also be added to the dbm user's
authorized_keys. Make sure that the security group assigned to the instance allows port 22 incoming
access for your subnet or IP as well as for the proxy port(s) you intend to use such as 443.
Once the VM is booted, ensure that DBM initialization has completed before attempting to launch a
proxy. Use 'sudo systemctl status dbm-init' to verify initialization status.

Creating An Amazon EC2 AMI From VMDK
Amazon info on importing images to create an AMI:
https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html#import-vm

Ensure that your IAM User has the proper roles and policies in place to perform the import. Please
reference:
https://docs.aws.amazon.com/vm-import/latest/userguide/vmie_prereqs.html

The vmdk image should include the latest nvme and ena drivers allowing it to be safely installed on
Amazon’s new Nitro based instance types.

You will need the AWS CLI; the following are the installation steps for a Linux system:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o \
"awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version

Create aws credentials with access/secret keys for use with ‘aws s3’ and ‘aws ec2’ commands:
aws configure

Test that listing buckets works:


aws s3 ls

Upload the vmdk to your import bucket, which must already exist (dbminstall in this example):
aws s3 cp ./dbm_lime-1.0.0-c7.vmdk s3://dbminstall/
You will then need to create three json files: trust-policy.json, role-policy.json, and containers.json.

trust-policy.json
Create / edit a new file with ‘vim trust-policy.json’:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "vmie.amazonaws.com" },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:Externalid": "vmimport"
                }
            }
        }
    ]
}

After saving the file, run the following command:


aws iam create-role --role-name vmimport --assume-role-policy-document \
"file://./trust-policy.json"

role-policy.json
The example file below assumes the vmdk image was copied to s3://dbminstall/. Create / edit a new
file with ‘vim role-policy.json’:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::dbminstall",
                "arn:aws:s3:::dbminstall/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject",
                "s3:GetBucketAcl"
            ],
            "Resource": [
                "arn:aws:s3:::export-bucket",
                "arn:aws:s3:::export-bucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:ModifySnapshotAttribute",
                "ec2:CopySnapshot",
                "ec2:RegisterImage",
                "ec2:Describe*"
            ],
            "Resource": "*"
        }
    ]
}
After saving the file, run the following command:
aws iam put-role-policy --role-name vmimport --policy-name vmimport \
--policy-document "file://./role-policy.json"

containers.json
The example file below assumes the vmdk image was copied to s3://dbminstall/. Create / edit a new
file with ‘vim containers.json’:
[
    {
        "Description": "DBM LIME 1.0.0-C7",
        "Format": "vmdk",
        "UserBucket": {
            "S3Bucket": "dbminstall",
            "S3Key": "dbm_lime-1.0.0-c7.vmdk"
        }
    }
]
After saving the file, run the following command:
aws ec2 import-image --description "DBM LIME 1.0.0-C7" --disk-containers \
"file://./containers.json"

Monitoring AMI Import Completion


The ‘aws ec2 import-image’ command returns with output similar to the following:
{
    "Description": "DBM LIME 1.0.0-C7",
    "ImportTaskId": "import-ami-0c3db66761eb3898d",
    "Progress": "1",
    "SnapshotDetails": [
        {
            "Description": "DBM LIME 1.0.0-C7",
            "DiskImageSize": 0.0,
            "Format": "VMDK",
            "UserBucket": {
                "S3Bucket": "dbminstall",
                "S3Key": "dbm_lime-1.0.0-c7.vmdk"
            }
        }
    ],
    "Status": "active",
    "StatusMessage": "pending"
}

Please take note of the "ImportTaskId" value returned in the output above. It will be needed as an input
parameter to check on the import status. The import operation typically takes between 30 and 60
minutes.
Check on the progress of the import, looking for "Status": "completed":
aws ec2 describe-import-image-tasks --import-task-ids \
import-ami-0c3db66761eb3898d

Example Output:
{
    "ImportImageTasks": [
        {
            "Architecture": "x86_64",
            "Description": "DBM LIME 1.0.0-C7",
            "ImageId": "ami-0141bdc44987ed77d",
            "ImportTaskId": "import-ami-0c3db66761eb3898d",
            "LicenseType": "BYOL",
            "Platform": "Linux",
            "SnapshotDetails": [
                {
                    "Description": "DBM LIME 1.0.0-C7",
                    "DeviceName": "/dev/sda1",
                    "DiskImageSize": 1377860096.0,
                    "Format": "VMDK",
                    "SnapshotId": "snap-01e2dfc442ba656ae",
                    "Status": "completed",
                    "UserBucket": {
                        "S3Bucket": "dbminstall",
                        "S3Key": "dbm_lime-1.0.0-c7.vmdk"
                    }
                }
            ],
            "Status": "completed",
            "Tags": []
        }
    ]
}
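Rather than re-running the command by hand, the status can be polled periodically; the 60-second
interval and --query filter below are just one option:
watch -n 60 "aws ec2 describe-import-image-tasks --import-task-ids import-ami-0c3db66761eb3898d --query 'ImportImageTasks[0].[Status,StatusMessage]'"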

Check the Amazon AWS EC2 console under Images→AMIs and you should see a new AMI listed under
"Owned by me". The "ImageId" from the "aws ec2 describe-import-image-tasks" output should match
the "AMI ID" column value on the console. Select the AMI image and click the "Launch" button to start
the launch wizard. Be sure to configure a security group that allows access for the ports described in
Network Incoming / Outgoing Ports. You will also need the ssh key pair created or assigned to the
instance for access, since login permissions are pubkey only.
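If you prefer the CLI over the console launch wizard, an instance can also be launched with
'aws ec2 run-instances'; the instance type, key name, security group, and subnet values below are
placeholders to adapt to your environment:
aws ec2 run-instances --image-id ami-0141bdc44987ed77d \
--instance-type m5.xlarge --key-name my-keypair \
--security-group-ids sg-0123456789abcdef0 --subnet-id subnet-0123456789abcdef0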

Console Access

Figure 4 - Login Console Screen

Default user: dbm


Default password: ~@dbmclouds@~

Verify that first time system initialization is complete:


sudo systemctl status dbm-init

User Management
Use the ‘passwd’ command to change the password after ssh or console login. The dbm user includes
full sudo access to the system. Use ‘sudo passwd’ to change root’s password.
WebUI passwords are tracked separately:
sudo htpasswd -5 /var/opt/dbm/.htpasswd dbm

Network Configuration
The system can be configured following standard CentOS7.X instructions. By default the first network
interface found will attempt to use DHCP to obtain an IP address unless you chose one of the Static IP
install menu options.
Network interface configuration files (ifcfg-*) can be found in:
/etc/sysconfig/network-scripts/
For a menu based user interface network configuration:
sudo nmtui

Figure 5 - NetworkManager TUI
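After applying changes, the resulting addresses and routes can be verified with standard tools, for
example:
ip addr show
ip route show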

Upgrades
DBM LIME software consists of one docker container as well as DBM-specific RPMs. By default, the
docker image is updated on a system reboot.
The CentOS7 yum package manager is used for rpm updates and can be performed using:
sudo yum update -y
To control which software is upgraded on a 'yum update', the individual repos can be enabled / disabled
using yum-config-manager. Only the DBM yum repo is enabled by default. If you want to enable all
the additional repos for updates:
sudo yum-config-manager --enable extras,base,epel,updates
If the kernel was updated, those changes would not take effect until a reboot. To reboot the system:
sudo reboot
The DBM docker containers check for updates on every reboot. To prevent them from upgrading, you
can touch a specific file that the DBM services look for on startup:
sudo touch /.no_dbm_pulls
Simply remove that file if you want to resume DBM docker container upgrades on reboot.
The DBM software docker container has one component:
● dbm-s3proxy (S3 proxy engine)
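One way to confirm which proxy image is currently present on the system (a standard docker check, not
DBM-specific):
docker images | grep dbm-s3proxy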

S3 Proxy
The main method to control DBM S3 proxy operations is via the dbm-s3proxy systemd service. It is an
instantiated service, referenced by port number. The proxy configuration files reside in
/etc/dbm/s3proxy.d/.

S3 Proxy Configuration
A DBM S3 proxy requires a configuration file that contains endpoints, buckets, and credentials.
A sample config file is located at: /etc/dbm/s3proxy.d/port443.env.sample and contains comments
about the variables involved.
Depending on system resources and client application load, more than one proxy can be run on a
server. Each proxy is run using a different port number. The most common proxy mode is to run a
proxy on port 443 (https) for the DBM LIME server.
Copy the sample and edit to customize for your configuration:
sudo cp /etc/dbm/s3proxy.d/port443.env.sample /etc/dbm/s3proxy.d/port443.env
sudo vim /etc/dbm/s3proxy.d/port443.env
The S3 proxy sits in the data path; application servers send their S3 requests to it. All requests are first
passed to the destination, endpoint1. If a GET or HEAD operation results in a 404 (Not Found), the
request is then sent to the source, endpoint2. If mirroring is enabled, then PUT / POST / DELETE
operations are also sent to endpoint2. Which commands are mirrored is controllable within the proxy
configuration file. For example, you might choose to only mirror DELETEs or to disable mirroring
altogether.
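For example, based on the MIRROR_CMDS variable shown later in the Proxy Examples section,
mirroring only DELETE operations would presumably be configured as follows; check the comments in
port443.env.sample for the authoritative syntax, including how to disable mirroring entirely:
MIRROR_CMDS=DELETE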
Mirrored subrequests are not checked for responses but must complete for the main request to
complete. A slow endpoint2 mirror will impact client application response times for PUT / POST /
DELETE operations.

Proxy Endpoints / Credentials


The proxy configuration contains the endpoints and credentials. Some object storage services provide
internal endpoints separate from public endpoints for faster internal cloud access. The internal endpoint
can offer a significant performance advantage if the DBM LIME engine can use it to avoid routing over
public networks. The endpoints are specified without the 'https://' prefix, which is assumed.
Credentials are specified without any surrounding quotation marks.

Mirroring Limitations
S3 operations that require multiple stateful commands are not supported for mirroring. For example, a
multipart write involves a POST operation to initiate the multipart upload, multiple PUT operations, and
then a final POST to signify completion. The initial POST is assigned an uploadID, which is then passed
into subsequent PUTs and the completion POST. Mirroring currently mirrors commands as-is to
endpoint2. The initial POST to the mirror endpoint2 would likely be assigned a different uploadID than
was assigned by endpoint1. Subsequent PUTs would be dropped since the uploadID passed in would be
invalid for endpoint2.

Proxy Examples
The following example is a stripped-down proxy config without the port443.env.sample comments.
Endpoint2 is an on-premises IBM COS server that the user wants migrated to AWS (endpoint1). All
writes and deletes will also be mirrored to the old server on endpoint2. It is common for BUCKET and
BUCKET2 to have the same value. For the AWS endpoint s3.amazonaws.com, a region of us-east-1 is
assumed. If the AWS bucket resided in us-west-1, then the region value and endpoint would need to
be adjusted accordingly: S3_ENDPOINT=s3.us-west-1.amazonaws.com and REGION=us-west-1.

DBM_CONFIG=cos2aws
WORKERS=auto
CONNECTIONS=1024
MIRROR_CMDS=DELETE|POST|PUT
# Endpoint1 - destination
AWS_KEY_ID=AKIAYUQA28YY9273JKAHH898
AWS_SECRET=3KJH2388HXZKJHL+12987HJAB91NXKJH0A98UHBBKAJHU
S3_ENDPOINT=s3.amazonaws.com
BUCKET=dbmtest
REGION=us-east-1
# Endpoint2 - mirror and fallback for GET / HEAD on endpoint 1 that 404's.
AWS_KEY_ID2=l39shkJH89udahkkjHQGY88gha
AWS_SECRET2=1jkhdAKUH9c9jhsuhHDUHa0jHSIHD9haALJH8h82SLD3
S3_ENDPOINT2=209.160.47.15
BUCKET2=dbmtest
REGION2=us-east-1

If you wanted to proxy all S3 commands for all buckets, the 'cos2aws' value would still be used for
DBM_CONFIG. In this case the BUCKET / BUCKET2 values are not specified. This allows the proxy
on a single port to handle all bucket commands for Endpoint1 with an Endpoint2 backend. The only
restriction is that any buckets accessed on Endpoint1 must also exist with the same name on
Endpoint2.
DBM_CONFIG=cos2aws
WORKERS=auto
CONNECTIONS=1024
MIRROR_CMDS=DELETE|POST|PUT
# Endpoint1 - destination
AWS_KEY_ID=AKIAYUQA28YY9273JKAHH898
AWS_SECRET=3KJH2388HXZKJHL+12987HJAB91NXKJH0A98UHBBKAJHU
S3_ENDPOINT=s3.amazonaws.com
REGION=us-east-1
# Endpoint2 - mirror and fallback for GET / HEAD on endpoint 1 that 404's.
AWS_KEY_ID2=l39shkJH89udahkkjHQGY88gha
AWS_SECRET2=1jkhdAKUH9c9jhsuhHDUHa0jHSIHD9haALJH8h82SLD3
S3_ENDPOINT2=209.160.47.15
REGION2=us-east-1

SSL Certificate and SERVER_NAME


The SERVER_NAME value defaults to s3.dbmclouds.tools, which can be overridden and is tied to the
default SSL certificate included on the system. The SERVER_NAME value is also used when parsing
virtual paths from clients. A virtual path is where the client specifies the bucket name in the hostname
of the request. For example, with mybucket.s3.dbmclouds.tools, the SERVER_NAME default would
be parsed out, leaving mybucket as the bucket name. The certificate tree is located in /etc/pki/nginx
and is mounted read-only into each dbm-s3proxy port proxy docker container. If you want to
generate your own self-signed certificate or replace it with a public certificate, the following files would
need to be replaced:
/etc/pki/nginx/dhparam.pem
/etc/pki/nginx/server.crt
/etc/pki/nginx/private/server.key

To generate a new dhparam.pem, do the following:


sudo openssl dhparam -out /etc/pki/nginx/dhparam.pem 4096

It is recommended to update /etc/pki/tls/openssl.cnf to add subjectAltName values for your VM's public
hostname or IP address before using openssl to generate the certificate. Default values can also be
filled into openssl.cnf for the questions prompted during openssl certificate generation.
sudo cp /etc/pki/tls/openssl.cnf /etc/pki/tls/openssl.cnf.orig
sudo vim /etc/pki/tls/openssl.cnf
Look for the 'v3_ca' section and add a 'subjectAltName' line with your custom values. Example:
[ v3_ca ]
subjectAltName=email:copy,DNS:s3.mycompany.mydomain,IP:192.168.122.101
To generate a new self-signed certificate with a 1-year expiration:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:4096 \
-keyout /etc/pki/nginx/private/server.key \
-out /etc/pki/nginx/server.crt
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:California
Locality Name (eg, city) []:Nowhere
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My, Inc.
Organizational Unit Name (eg, section) []:Web Security
Common Name (e.g. server FQDN or YOUR name) []:s3.mycompany.mydomain
Email Address []:support@mycompany.com
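To confirm that the subjectAltName values made it into the generated certificate, a check along these
lines can be used:
sudo openssl x509 -in /etc/pki/nginx/server.crt -noout -text | grep -A1 "Subject Alternative Name"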

In the port env configuration file, add:

SERVER_NAME=s3.mycompany.mydomain

S3CMD Utility
To inspect available buckets and objects, the open-source utility s3cmd is included in the system
install. It can be used separately to confirm bucket access credentials. You can also point it at the
proxy itself from the DBM LIME server terminal, using 127.0.0.1 and your proxy port for the "S3
Endpoint" and "DNS-style bucket+hostname:port template for accessing a bucket" questions. Note that
when pointing at the proxy, any commands which do not use the BUCKET path will be rejected. For
example, if BUCKET=dbmtest, then 's3cmd ls s3://testsrc1' will get a 403, but 's3cmd ls s3://dbmtest/'
would be OK.
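Alternatively, instead of re-running --configure, s3cmd can be pointed at the proxy for a single command
using host overrides. This is a sketch that assumes the proxy listens on 127.0.0.1:443 with the default
self-signed certificate (hence --no-check-certificate):
s3cmd --host=127.0.0.1:443 --host-bucket=127.0.0.1:443 --no-check-certificate ls s3://dbmtest/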
$ s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the
env variables.
Access Key: AKIAYUQA28YY9273JKAHH898
Secret Key: 3KJH2388HXZKJHL+12987HJAB91NXKJH0A98UHBBKAJHU
Default Region [US]:

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]:

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s"


vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]:

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]:

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
Access Key: AKIAYUQA28YY9273JKAHH898
Secret Key: 3KJH2388HXZKJHL+12987HJAB91NXKJH0A98UHBBKAJHU
Default Region: US
S3 Endpoint: s3.amazonaws.com
DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.s3.amazonaws.com
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: True
HTTP Proxy server name:
HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] Y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y
Configuration saved to '/home/dbm/.s3cfg'

Test that the configuration works:

$ s3cmd ls
2019-04-11 20:53 s3://testdst1
2019-04-12 20:38 s3://testdst2
2019-04-11 20:52 s3://testsrc1
2019-04-12 20:38 s3://testsrc2

$ s3cmd ls -r s3://testsrc1
2019-04-12 20:53 0 s3://testsrc1/LOGOS/
2019-04-12 20:53 516163 s3://testsrc1/LOGOS/Dell_EMC_logo.png
2019-04-12 20:53 237532 s3://testsrc1/LOGOS/EMC.png
2019-04-12 20:53 6235 s3://testsrc1/LOGOS/Google.png
2019-04-12 20:53 32109 s3://testsrc1/LOGOS/IBM_LOGO.JPG
2019-04-12 20:53 106607 s3://testsrc1/LOGOS/IBM_LOGO_BIG.JPG
2019-04-12 20:53 22130 s3://testsrc1/LOGOS/Microsoft.png
2019-04-12 20:53 10341 s3://testsrc1/LOGOS/alibaba-cloud-logo.png
2019-04-12 20:53 107962 s3://testsrc1/LOGOS/aws_logo.png
2019-04-12 20:53 6235 s3://testsrc1/LOGOS/google_cloud_logo.png
2019-04-12 20:53 5571 s3://testsrc1/LOGOS/oracle_logo.gif
2019-04-12 20:53 0 s3://testsrc1/distros/
2019-04-12 20:53 79712920 s3://testsrc1/distros/VirtualBox-5.2-5.2.18_124319_el7-1.x86_64.rpm
2019-04-12 20:53 53108736 s3://testsrc1/distros/dsl-4.11.rc1.iso

Operations
Proxy configurations are controlled by a /etc/dbm/s3proxy.d/portN.env file, where N is the port number
that the s3 proxy should listen on. Multiple proxies can be run using different port numbers. You could
even have one proxy port set up with mirroring and another without, but using the same endpoints,
buckets, and credentials.
The commands below assume that you have created an s3 proxy environment file
/etc/dbm/s3proxy.d/port443.env, although any port not in use could be used instead. Common alternate
ports are 8443 and 9443. Port 443 is the https port and allows application servers to connect to the s3
proxy endpoint without explicitly specifying a port. Make sure that any upstream firewalls allow the
configured proxy port through.

To enable a proxy on port 443 to always start on server boot:


sudo systemctl enable dbm-s3proxy@443

To start a proxy on port 443:


sudo systemctl start dbm-s3proxy@443

To disable a proxy on port 443:


sudo systemctl disable dbm-s3proxy@443

To stop a proxy on port 443:


sudo systemctl stop dbm-s3proxy@443

To restart a proxy after making a configuration change:


sudo systemctl restart dbm-s3proxy@443
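To check the current state of a proxy (standard systemd status output, including recent log lines):

sudo systemctl status dbm-s3proxy@443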

Logging and Debug

Docker
The s3 proxy runs as one docker container per port; use the 'docker ps' command to see what is running:
docker ps
CONTAINER ID   IMAGE                                      COMMAND                  CREATED          STATUS          PORTS                   NAMES
35b63877d4ef   registry.dbmclouds.app/dbm-s3proxy:1.x.y   "/usr/bin/docker-e..."   2 seconds ago    Up 1 second     0.0.0.0:8443->443/tcp   dbm-s3proxy-8443
1d856ae28a19   registry.dbmclouds.app/dbm-s3proxy:1.x.y   "/usr/bin/docker-e..."   55 minutes ago   Up 55 minutes   0.0.0.0:443->443/tcp    dbm-s3proxy-443
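To view a container's stdout/stderr, the standard docker logs command can be used with the container
name from the NAMES column, for example:
docker logs dbm-s3proxy-443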

Log Files
For each proxy, a directory is created with an access.log and error.log:
ls -l /var/log/nginx/port*
/var/log/nginx/port443:
total 4
-rw-r--r--. 1 root dbm 310 Oct 5 12:10 access.log
-rw-r--r--. 1 root dbm 0 Oct 5 12:06 error.log

/var/log/nginx/port8443:
total 0
-rw-r--r--. 1 root dbm 0 Oct 5 13:04 access.log
-rw-r--r--. 1 root dbm 0 Oct 5 13:04 error.log
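To watch requests arrive in real time, the access log for a given port can be followed, for example:
sudo tail -f /var/log/nginx/port443/access.log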

System Log File


The system log file is the CentOS standard /var/log/messages and must be accessed as root or using
sudo:
sudo less /var/log/messages

Counters
Two sets of counters are available: standard NGINX stub status, and DBM operation / mirror counters.
Basic authentication is used to control access to the counter URLs. You can also use the curl
command included on the DBM LIME server to access the counters from the terminal using 127.0.0.1 as
the IP.

NGINX Stub Status


The counters at https://<server-ip>/dbm_nginx_stub_status are described at:
https://nginx.org/en/docs/http/ngx_http_stub_status_module.html
Note that the stub status counters include the requests used to query the counters themselves.

curl -k --user "dbm:~@dbmclouds@~" https://127.0.0.1/dbm_nginx_stub_status


Active connections: 1
server accepts handled requests
9 9 12
Reading: 0 Writing: 1 Waiting: 0

DBM Status Counters


The counters at https://<server-ip>/dbm_status_counters are described below and are returned in JSON
format. You can access the counters on the DBM LIME terminal using curl and the 127.0.0.1 address, or
externally using the server IP and the optional port that is configured.
The HTTP request methods are counted for DELETE, GET, HEAD, POST, and PUT. If a command is
mirrored it is counted in the 'mirrored' counter. Commands that result in access to endpoint2 (the
backend) have the relevant 'b_' request method value incremented.
b_delete DELETE operations to endpoint2, only if mirror DELETE enabled
b_get GET operations to endpoint2
b_head HEAD operations to endpoint2
b_post POST operations to endpoint2, only if mirror POST enabled
b_put PUT operations to endpoint2, only if mirror PUT enabled
delete DELETE operations to endpoint1
get GET operations to endpoint1
head HEAD operations to endpoint1
post POST operations to endpoint1
put PUT operations to endpoint1
forbidden 403 forbidden requests
mirrored Requests sent to endpoint2 for mirroring
mp_dropped PUT / POST requests for multipart stateful requests that are dropped
requests incoming request count regardless of method
The counters are not persistent or preserved in files anywhere and will be reset on service restart,
server reboot, or an explicit DELETE operation to the /dbm_status_counters URL.
curl -k --user "dbm:~@dbmclouds@~" https://127.0.0.1/dbm_status_counters
{"delete":0,"b_put":0,"post":0,"forbidden":0,"put":0,"b_get":9,"b_delete":0,
"mirrored":0,"requests":12,"b_post":0,"b_head":0,"mp_dropped":0,"head":0,"ge
t":12}

Clear the dbm_status_counters:


curl -k -X DELETE --user "dbm:~@dbmclouds@~" \
https://127.0.0.1/dbm_status_counters

Appendix A. HTTP Status Codes
The table below lists some of the more common HTTP status codes and their meanings.
Code Meaning
200 The request was successful
201 The requested create action was successful
202 The requested create action was accepted and is likely to be fulfilled
204 The server successfully processed the request and is not returning any content.
301 The requested resource has moved permanently (redirect)
302 Supplanted by 303 and 307
303 See Other; Issue a GET to other URI to get new URI
304 Not Modified; unmodified since version spec’d in header
305 Use Proxy; Proxy URI returned
307 The requested resource has moved temporarily (redirect)
400 The request itself was improper (invalid syntax)
401 The request failed authentication or access control checks
402 Payment is required
403 The request is not permitted
404 The resource requested does not exist
405 The request method was invalid for that URI
406 The request failed during processing and after authentication
410 The component requested is no longer on the system
423 The resource requested was locked at the time of the request
426 A client or server software upgrade is required
428 Precondition Required
429 Too Many Requests
431 Request Header Fields Too Large
509 Bandwidth budget has been exceeded
