
MongoDB on Pure Storage

March 2016
Contents
Executive Summary
Introduction
Audience
Pure Storage Introduction
MongoDB Introduction
Solution Overview
Scalability and Consistent Performance
Instant Cloning of MongoDB Databases
Best Practices for MongoDB on Pure Storage
Test Environment
Summary
Appendix-A Network Bond Settings
References
About the Author



© 2016 Pure Storage, Inc. All rights reserved. Pure Storage, the "P" Logo, and Pure1 are trademarks or
registered trademarks of Pure Storage, Inc. in the U.S. and other countries. Mongo, MongoDB, and the
MongoDB leaf logo are registered trademarks of MongoDB, Inc. in the U.S. and other countries. The Pure
Storage product described in this documentation is distributed under a license agreement and may be
used only in accordance with the terms of the agreement. The license agreement restricts its use,
copying, distribution, decompilation, and reverse engineering. No part of this documentation may be
reproduced in any form by any means without prior written authorization from Pure Storage, Inc. and its
licensors, if any.

THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED
CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT ARE
DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.
PURE STORAGE SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN
CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE
INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.

Pure Storage, Inc. 650 Castro Street, Mountain View, CA 94041


http://www.purestorage.com



Executive Summary
This document provides a technical validation of and best practices for deploying MongoDB 3.2 on the
Pure Storage FlashArray. It also discusses the solution benefits and provides insights into the various
tests that were performed to showcase the superiority of the storage array in handling hu"mongo"us datasets.

This whitepaper covers the following benefits of deploying Pure Storage FlashArray in a MongoDB
environment:

•   Consistent and scalable performance
•   Simplified management
•   Seamless snapshots and cloning of MongoDB databases in seconds

Introduction
MongoDB is a high-performance, scalable, open source, schema-free, document-oriented database
designed for a broad array of modern applications and cloud environments. MongoDB is the industry's
fastest growing database, used by organizations of all kinds to power online, operational applications
where high throughput, low latency and high availability are paramount requirements.

Modern applications demand rich and dynamic data structures, easy scaling, better performance and
low TCO as customer and business requirements change rapidly. MongoDB's dynamic data structure,
ability to index and query data, and auto-sharding make it a strong tool that adapts well to change. It
tremendously reduces complexity compared to a traditional RDBMS. On top of this, the Internet of
Things (IoT) and Big Data place new requirements on the database, such as scalability, flexibility,
analytics and a unified view, which MongoDB meets by virtue of its design philosophy and implementation.
MongoDB's Nexus Architecture blends the best of relational and NoSQL technologies.

Pure Storage FlashArray is the perfect companion for MongoDB when the working set no longer fits in
memory and spills over to the storage subsystem, which must still deliver low latency and consistently
fast response times.

Audience
The target audience for this document includes storage administrators, MongoDB administrators, data
architects, engineers and partners who want to deploy Pure Storage FlashArray in a MongoDB
environment.



Pure Storage Introduction
Pure Storage is the leading independent all-flash enterprise array vendor, committed to enabling
companies of all sizes to transform their businesses with flash.

Built on 100% consumer-grade MLC flash, Pure Storage FlashArray delivers all-flash enterprise storage
that is 10X faster, more space and power efficient, more reliable, and infinitely simpler, and yet typically
costs less than traditional performance disk arrays.

Figure 1. FlashArray//m

The Pure Storage FlashArray is ideal for:

Accelerating Databases and Applications Speed transactions by 10x with consistent low latency, enable
online data analytics across wide datasets, and mix production, analytics, dev/test, and backup workloads
without fear.

Virtualizing and Consolidating Workloads Easily accommodate the most IO-hungry Tier 1 workloads,
increase consolidation rates (thereby reducing servers), simplify VI administration, and accelerate
common administrative tasks.

Delivering the Ultimate Virtual Desktop Experience Support demanding users with better performance
than physical desktops, scale without disruption from pilot to >1000’s of users, and experience all-flash
performance for under $100/desktop.

Protecting and Recovering Vital Data Assets Provide an always-on protection for business-critical data,
maintain performance even under failure conditions, and recover instantly with FlashRecover.

Pure Storage FlashArray sets the benchmark for all-flash enterprise storage arrays. It delivers:

Consistent Performance FlashArray delivers consistent <1ms average latency. Performance is optimized
for real-world application workloads that are dominated by I/O sizes of 32K or larger, rather than 4K/8K
hero benchmarks. Full performance is maintained even under failures and during updates.

Less Cost than Disk Inline de-duplication and compression deliver 5 – 10x space savings across a broad
set of I/O workloads including Databases, Virtual Machines and Virtual Desktop Infrastructure.



Mission-Critical Resiliency FlashArray delivers >99.999% proven availability, as measured across the Pure
Storage installed base, and does so with non-disruptive everything and without performance impact.

Disaster Recovery Built-In FlashArray offers native, fully-integrated, data reduction-optimized backup and
disaster recovery at no additional cost. Set up disaster recovery with policy-based automation within
minutes, and recover instantly from local, space-efficient snapshots or remote replicas.

Simplicity Built-In FlashArray offers game-changing management simplicity that makes storage
installation, configuration, provisioning and migration a snap. No more managing performance, RAID, tiers
or caching. Achieve optimal application performance without any tuning at any layer. Manage the
FlashArray the way you like it: web-based GUI, CLI, VMware vCenter, REST API, or OpenStack.

Table 1. Pure Storage FlashArray//m Series.

Start Small and Grow Online


FlashArray scales from smaller workloads to data center-wide consolidation. And because upgrading
performance and capacity on the FlashArray is always non-disruptive, you can start small and grow
without impacting mission-critical applications. Coupled with Forever Flash, a new business model for
storage acquisition and lifecycles, FlashArray provides a simple and economical approach to evolutionary
storage that extends the useful life of an array and does away with the incumbent storage vendor
practices of forklift upgrades and maintenance extortion.



MongoDB introduction
MongoDB, the leading NoSQL database software, excels in many use cases where relational databases
are not a good fit, such as applications with unstructured, semi-structured and polymorphic data, as well
as applications with large scalability requirements.

MongoDB is designed to combine the critical capabilities of relational databases with the innovations of
NoSQL technologies. Relational databases have been around for many years and offer key features that
are still heavily relied upon:

Expressive Query language. Users should be able to access and manage their data with powerful query,
projection, aggregation and update operators, to support both operational and analytical applications.

Secondary Indexes. Indexes still play a key role in providing efficient and fast access to data for both
reads and writes.

Strong consistency. Applications should be able to immediately read what has been written to the
database.

While the above features are still used by modern applications, there are other requirements that are not
addressed by relational databases and that have driven the development of NoSQL databases, which
provide:

Flexible Data Model. NoSQL databases flourished to address a key requirement of modern applications:
a flexible data model. All NoSQL databases offer a flexible data model, making it easy to store and group
data of any structure and allowing dynamic changes to the schema without any downtime.

Elastic Scalability. NoSQL databases were all built with a focus on scalability, so they all include some
form of sharding or partitioning, allowing the database to scale out on commodity hardware deployed
on-premises or in the cloud.

High Performance. NoSQL databases are designed to deliver extreme performance, measured in terms of
both throughput and latency, at any scale.

With MongoDB, organizations can address diverse application needs, hardware resources, and
deployment designs with a single database technology. Through the use of a flexible storage
architecture, MongoDB can be extended with new capabilities and configured for optimal use of specific
hardware architectures. MongoDB allows users to mix and match multiple storage engines within a single
deployment.

MongoDB 3.2 ships with four supported storage engines, all of which can coexist within a single
MongoDB replica set. This makes it easy to evaluate and migrate between them, and to optimize for
specific application requirements; a startup sketch follows the list of engines below.



•   The default WiredTiger storage engine. For many applications, WiredTiger's granular concurrency
control and native compression will provide the best all-around performance and storage efficiency
for the broadest range of applications.

•   The Encrypted storage engine, which protects highly sensitive data without the performance or
management overhead of separate filesystem encryption.

•   The In-Memory storage engine, which delivers extreme performance coupled with real-time
analytics for the most demanding, latency-sensitive applications.

•   The MMAPv1 engine, an improved version of the storage engine used in pre-3.x MongoDB
releases.
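
As a minimal illustration of selecting a storage engine at startup, the sketch below starts mongod with an
explicit engine. The dbpath and logpath values are assumptions for this example, and note that the
Encrypted and In-Memory engines require MongoDB Enterprise.

# Start mongod with the default WiredTiger engine (paths are illustrative)
mongod --storageEngine wiredTiger --dbpath /m01/data --logpath /m01/log/mongod.log --fork

# Start a separate mongod instance with MMAPv1 for comparison
mongod --storageEngine mmapv1 --dbpath /m02/data --logpath /m02/log/mongod.log --fork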

Solution Overview
The standard MongoDB deployment involves individual sets, or pods, of compute, memory and storage
resources, which appears cost effective. Such configurations are prominent at cloud providers like AWS
and work well at small scale. Companies not using IaaS prefer direct-attached storage (DAS) as a
cost-effective alternative. It is also not uncommon to see server PCIe flash cards, which give superior
performance but quickly run out of capacity and are operationally expensive to deploy and maintain.

The challenge with these pod models arises when companies look to scale. The need for consistent
performance becomes paramount, and neither DAS nor PCIe flash cards can provide the required level of
efficiency, capacity, availability and sustained performance.

Pure Storage FlashArray is the ideal solution to bridge the gap and support the high availability and low
latency requirements of MongoDB deployments. We now see many of our customers consolidating their
MongoDB databases onto Pure Storage FlashArray, not only to address the issues stated above but also
to take advantage of the features Pure Storage FlashArray offers: simplicity and ease of management,
snapshots and database cloning, industry-leading data reduction, high availability and resiliency, and
consistent, scalable performance.



Scalability and Consistent Performance
Pure Storage FlashArray was built from the ground up to tackle the challenges of flash storage, such as
write amplification, limited media endurance and garbage collection. It is the ideal storage system for
MongoDB databases, providing accelerated and consistent performance at low latency under mixed
workloads.

Scalability and performance tests were performed using mongoperf and YCSB tools.

Mongoperf
Mongoperf is a utility that checks disk I/O performance independently of MongoDB. It is very useful for
simulating the I/O operations of MongoDB: it times random disk I/O activity and reports the results. To
validate the I/O performance without the filesystem cache coming into play, we ran the tests with the
mmf (memory-mapped files) option set to false, which enables direct I/O access. Mongoperf accepts
various configuration fields to vary the workload. We used a block size (mongoperf.recSizeKB) of 8 KB
throughout the tests; the default block size is 4 KB.
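
As a minimal sketch, mongoperf reads its configuration as a JSON document on standard input; the file
size below is illustrative, while the other fields mirror the settings described above.

# Simulate mixed random reads and writes with direct I/O (mmf: false) and an 8 KB record size
echo "{ nThreads: 32, fileSizeMB: 10000, r: true, w: true, recSizeKB: 8, mmf: false }" | mongoperf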

YCSB
The Yahoo! Cloud Serving Benchmark (YCSB) has become the standard benchmarking tool for evaluating
and testing NoSQL database systems. The original YCSB benchmark was developed by the Yahoo!
Research division, which released it in 2010 with the stated goal of "facilitating performance comparisons
of the new generation of cloud data serving systems".

The YCSB test consists of loading the dataset into the MongoDB database and executing multiple
workloads with various read/write ratios.



Scalability Tests of MongoDB on Pure Storage
We performed the following tests using mongoperf to simulate MongoDB operations, and captured the
IOPS, bandwidth and latency metrics.

•   Node scalability tests with 1, 2 and 4 nodes
•   Thread scalability tests with 2, 4, 8, 16, 32, 64 and 128 threads on each node
•   Varying workloads
    o   100% write
    o   50% write, 50% read

All these tests were run with mmf (memory-mapped files) set to false to validate the direct I/O
configuration, the worst-case scenario in which I/O requests are not fulfilled from RAM and spill over to
the storage subsystem.

The Pure Storage LUNs were mounted on the hosts using the XFS filesystem; no LVM was used.

Test file creation

Mongoperf creates the test file every time it is run, using the file size provided. The following figure
illustrates the performance metrics from the Pure Storage FlashArray//m50 while the file was created with
the following mongoperf parameters on a single node.

{nThreads: 32, fileSizeMB: 1000000, mmf: false, r:false, w:true}

Figure 2. Test file creation

The test showed that the host was able to drive 1.7 GB/s of bandwidth from the Pure Storage array at an
average I/O size of 504 KB.



100% Writes Scalability
We simulated 100% MongoDB write operations using mongoperf to validate node and thread scalability.
The following graphs illustrate the scalability results of 100% writes across 1 to 4 nodes with varying
thread counts against the Pure Storage FlashArray. Graph 1 illustrates node scalability, where the results
are plotted against the number of threads per node across 1 to 4 nodes. For example, 32 threads on the
x-axis means each node was running 32 threads, which is equivalent to 64 total threads with 2 nodes and
128 total threads with 4 nodes. This is the standard way to measure node scalability.

[Graph: Node Scalability, 100% writes. Throughput (ops/sec) vs. threads per node (4 to 256) for 1, 2 and 4
nodes; peak throughput of 221,520 ops/sec with 4 nodes.]

Graph 1. Node scalability for all writes workload

The test results showed steady scalability across the node counts. In the 4-node scenario, the throughput
climbed steadily up to 128 threads and stayed flat or degraded beyond that.

[Graph: Node Scalability, Latency, 100% writes. Latency (ms) vs. threads per node (4 to 256) for 1, 2 and 4
nodes; all values under 1 ms, with a maximum of 0.91 ms.]

Graph 2. Latency for all-writes workload



As illustrated in Graph 2, the latency stayed under 1 ms throughout the tests. With 1 and 2 nodes, the
latency stayed below 0.23 ms across all tests. During the 4-node tests, the latency varied between 0.1 ms
and 0.91 ms.

Alternatively, the throughput (ops/sec) was plotted against the total thread count when running 1, 2 and 4
nodes. The total threads on the x-axis are the sum of all threads across the nodes for that run. For
example, 320 total threads is equivalent to 2 nodes running 160 threads each or 4 nodes running 80
threads each. This view is useful for finding the combination that yields the maximum throughput across
all nodes.

[Graph: Thread Scalability, 100% writes. Throughput (ops/sec) vs. total threads (16 to 512) for 1, 2 and 4
nodes.]

Graph 3. Thread scalability for all-writes workload

In the thread scalability tests, we achieved the highest throughput of 221,520 ops/sec with 4 nodes each
running 128 threads, or 512 threads in total. As mentioned earlier, the latency during this test was still
under 1 ms.



Mixed Workload (50% read, 50% write)
We simulated a mixed workload of MongoDB database operations with 50% reads and 50% writes.

[Graph: Node Scalability, 50% read / 50% write. Throughput (ops/sec) vs. threads per node (4 to 320) for
1, 2 and 4 nodes; peak throughput of 254,036 ops/sec with 4 nodes.]

Graph 4. Node scalability for mixed workload

In comparison to 100% writes, the mixed workload exhibited higher throughput across all runs, as half of
the writes were replaced with reads. Reads are far cheaper than writes on an all-flash array, and this is
reflected directly in the latency and throughput numbers.

Like the all-writes workload, the mixed workload also exhibited good scalability across 1, 2 and 4 nodes.
In the mixed workload, we achieved 254,036 MongoDB database operations per second with latency
consistently under 0.25 ms on 4 nodes. With a single node, performance increased steadily up to 128
threads and then started degrading. With 4 nodes, there were positive improvements up to 256 threads.

In contrast to the all-writes workload, the mixed workload displayed much better latency across all runs.
The maximum read latency was 0.11 ms and the maximum write latency was 0.25 ms, a result of the
architectural design of the Pure Storage FlashArray//m.



[Graph: Node Scalability, Latency, 50% read / 50% write. Read and write latency (ms) vs. threads per node
(4 to 320) for 1, 2 and 4 nodes.]

Graph 5. Latency during mixed workload

The thread scalability view does not paint the complete picture compared to node scalability, as the
maximum throughput of 254,036 ops/sec was accomplished with 256 threads per node across 4 nodes,
equivalent to 1,024 total threads, which was not part of the thread scalability test matrix. This graph is
simply a different representation of the scalability numbers in terms of total threads.

From the thread scalability tests, we can deduce that with more threads and more nodes we can achieve
higher throughput until either the compute or the I/O resources become the bottleneck. These tests give
a view of the performance that can be expected with varying thread and node counts, helping identify the
right settings for the desired workload.

[Graph: Thread Scalability, 50% read / 50% write. Throughput (ops/sec) vs. total threads (8 to 512) for 1, 2
and 4 nodes.]

Graph 6. Thread scalability of mixed workload



The dashboard below shows the consistent performance observed while mongoperf was run on 4 nodes
at 16 threads with the mixed workload.

Figure 3. Pure dashboard illustrating consistent performance

Graph 7 compares the two workloads, all-writes vs. mixed read/write, when run across 4 nodes. As
mentioned earlier, the read/write workload exhibits higher throughput at lower latency than the all-writes
workload because the write operations are reduced by 50%. Reads are comparatively cheaper than
writes on an all-flash array, which is evident from the results shown.



[Graph: Workload Comparison, 100% write vs. 50% read/write on 4 nodes. Throughput (ops/sec) and
latency (ms) vs. threads (8 to 256).]

Graph 7. Workload comparison

All the tests revealed that the highest throughput was achieved with the larger node counts, validating
the scalability that is critical for NoSQL databases like MongoDB. These tests also validated the Pure
Storage FlashArray//m series' architecture, which sustains low latency and high performance under mixed
workloads.

Test Summary

•   The latency was under 1 ms for all the tests on the FlashArray.
•   The read/write workload achieved higher throughput than the 100% write workload.
•   As the thread count increases, the host becomes the bottleneck rather than the storage.

Note

•   The test environment used the iSCSI protocol for connectivity. Higher performance can be obtained
with the FC protocol, which is also supported by the Pure Storage FlashArray.
•   Write performance on EXT4 was very poor; XFS is overall better for MongoDB.



YCSB tests
YCSB is an application-level benchmark: it runs its tests against the MongoDB database itself rather than
simulating MongoDB database operations as mongoperf does.

We used the WiredTiger storage engine in our tests with journaling enabled. We also compared the
performance of the YCSB workloads against the MMAPv1 storage engine.

A YCSB benchmark is a two-step process: the first step loads the data and the second step runs the
actual workload.

For the tests, we used 50 million documents and 5 million operations. Documents included 10 fields of
100 bytes plus a key (a 1 KB record). Records were selected using a Zipfian distribution. Results reflect
the optimal number of threads, which was determined by increasing the thread count until the 95th and
99th percentile latency values began to increase while the throughput showed no further improvement.
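
As a hedged illustration of the two-step process, the sketch below loads and runs YCSB's workloada
against a local mongod. The connection URL is an assumption; the record and operation counts mirror
the test description above, and YCSB's default document layout (10 fields of 100 bytes) matches the
1 KB records used.

# Step 1: load 50 million documents into the ycsb database (assumed mongod on localhost)
./bin/ycsb load mongodb -s -P workloads/workloada \
    -p mongodb.url="mongodb://localhost:27017/ycsb" \
    -p recordcount=50000000 -threads 16

# Step 2: run 5 million operations of the 50/50 read/update workload
./bin/ycsb run mongodb -s -P workloads/workloada \
    -p mongodb.url="mongodb://localhost:27017/ycsb" \
    -p recordcount=50000000 -p operationcount=5000000 -threads 16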

We performed the following three workloads with varying thread counts to find the combination that
achieves the highest throughput at the lowest latency.

•   Workload A: update-heavy workload, 50% write and 50% read
•   Workload B: read-heavy workload, 95% read and 5% write
•   Workload C: read-only workload, 100% read

We ran the tests with journaling turned on (the default behavior) and off (throughput-optimized, no write
acknowledgement).

Throughput

Journaled vs. no-journal tests were performed to compare the best possible throughput
(throughput-optimized with no journal, but at the risk of data loss in case of failure) against the balanced
settings (good throughput with minimal data loss in case of failure). Only the workloads that include
updates (A and B) were tested; Workload C was not included in this test.

The test results were surprising: the performance difference with Workload A (50% read, 50% write) was
7%, whereas with Workload B (95% read, 5% write) it was 1%. It is understandable that Workload B has
only 5% writes, but in the 50% write scenario (Workload A) a degradation of only 7% is not very
significant. In fact, most users will find the tradeoff worthwhile, favoring the default configuration over
settings that provide an additional 7% of throughput at the cost of data availability.



[Graph: YCSB throughput, journaled vs. no journal (higher is better). Workload A: 55,436 ops/sec without
journal vs. 51,306 ops/sec journaled; Workload B: 87,952 vs. 86,962 ops/sec.]

Graph 8. YCSB throughput tests

Latency

Beyond throughput, the latency of the operations is also very important. Since the average latency is not
necessarily the best metric, YCSB reports latency at the 95th and 99th percentiles, meaning the observed
latency is worse than 95% or 99% of all other latencies. As expected, the read latencies were lower than
the write latencies, and the read latencies stayed under 0.5 ms.

[Graph: YCSB read latency, 95th and 99th percentiles (lower is better), for Workloads A, B and C; all values
between 0.198 ms and 0.457 ms.]

Graph 9. YCSB read latency



[Graph: YCSB write latency, 95th and 99th percentiles, for Workloads A and B; all values between 0.327 ms
and 0.501 ms.]

Graph 10. YCSB write latency

The tests revealed that the write latencies also stayed around or below 0.5 ms for both workloads. This is
a clear advantage of the Pure Storage FlashArray, which handily absorbs the writes that MongoDB pushes
out periodically.

Based on our testing, we found the optimal thread setting for YCSB to be 16 threads, where latency
stayed consistently below 1 ms and throughput was highest. Beyond 16 threads, latency increased
gradually while throughput stayed flat. To increase throughput further through scaling, use sharding, as
sketched after Graph 11 below.

[Graph: YCSB thread scalability (journaled). Throughput (ops/sec) vs. thread count (2 to 64) for Workloads
A, B and C.]

Graph 11. Threads Scalability
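
As a hedged sketch of that scaling path, the commands below enable sharding for the YCSB database
and hash-shard its default usertable collection; they assume a mongos router on localhost and YCSB's
default database and collection names.

# Run against a mongos router (assumed at localhost:27017)
mongo --host localhost:27017 --eval '
    sh.enableSharding("ycsb");
    sh.shardCollection("ycsb.usertable", { _id: "hashed" });
'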



Instant cloning of MongoDB databases
Pure Storage's FlashRecover snapshots deliver superior space efficiency, high scalability and simple
volume snapshot management. They are thin provisioned, with no dedicated space allocated, and they
preserve the data reduction of the parent volume achieved through pattern removal, deduplication and
compression.

FlashRecover snapshots are useful for various use cases, such as:

•   Provisioning non-production environments like DEV and QA
•   Backing up the database/application before applying patches or changes
•   Recovering from human error, such as a user dropping collections or records from the database

Snapshots with Journaling

WiredTiger syncs the buffered log records to disk according to the following intervals or conditions:

•   Every 100 milliseconds.
•   MongoDB sets checkpoints to occur in WiredTiger on user data at an interval of 60 seconds or
when 2 GB of journal data has been written, whichever occurs first.
•   If the write operation includes a write concern of j: true, WiredTiger forces a sync of the
WiredTiger log files.
•   WiredTiger creates a new journal file approximately every 100 MB of data. When WiredTiger
creates a new journal file, WiredTiger syncs the previous journal file.

Note: Any in-between writes still sitting in the buffers can be lost if mongod crashes before the next flush.

To get a consistent copy of the database, journaling should be enabled. Without journaling, there is no
guarantee that the snapshot will be consistent or valid.

With journaling enabled, Pure Storage FlashRecover snapshots can be used to take crash-consistent,
point-in-time snapshots of the MongoDB database at the storage layer.
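
As a minimal sketch of that prerequisite (journaling is already on by default in MongoDB 3.2), the
commands below start mongod with journaling explicitly enabled and issue a journal-acknowledged write;
the paths, database and collection names are assumptions for illustration.

# Start mongod with journaling explicitly enabled (paths are illustrative)
mongod --dbpath /m01/data --journal --logpath /m01/log/mongod.log --fork

# Optional: request journal acknowledgement for a critical write using write concern j: true
mongo --eval 'db.getSiblingDB("appdb").orders.insert({ sku: "abc" }, { writeConcern: { w: 1, j: true } })'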

The snapshot can be taken through the Pure Storage GUI, the CLI or the REST-based APIs.

We will cover snapshotting and cloning the database through the CLI.

1.   Take a snapshot of the volume that hosts the MongoDB database and journal. If the journal is on a
different volume, it is recommended to create a Protection Group that contains both the datafile and
journal volumes.



purevol snap <volume name>

In the example below, we list the details of the volume and take a snapshot of it, which returns the
snapshot name required for Step 2.

pureuser@pure-m50-lab> purevol list fs_mdb1_data


Name Size Source Created Serial
fs_mdb1_data 1T - 2016-02-05 10:42:22 PST 5A9D680697F2455C0001103F
pureuser@pure-m50-lab> purevol snap fs_mdb1_data
Name Size Source Created Serial
fs_mdb1_data.4164 1T fs_mdb1_data 2016-02-16 17:43:52 PST 5A9D680697F2455C00011044

2.   Instantiate the snapshot by copying it to a volume that will be connected to the destination host.

purevol copy <snapshot> <new volume name>

In the following example, we instantiate the snapshot taken in Step 1 to a new destination volume named
fs_mdb1_dev.

pureuser@pure-m50-lab> purevol copy fs_mdb1_data.4164 fs_mdb1_dev


Name Size Source Created Serial
fs_mdb1_dev 1T fs_mdb1_data 2016-02-16 17:43:52 PST 5A9D680697F2455C00011045

3.   Attach the volume to the destination host using the purehost command.

purehost connect --vol <volume name> <Hostname>

In the example below, we connect the copied volume (fs_mdb1_dev) to the destination host mongodb2.

pureuser@pure-m50-lab> purehost connect --vol fs_mdb1_dev mongodb2


Name Vol LUN
mongodb2 fs_mdb1_dev 2

4.   Scan for the volume attached to the destination host using the following Linux command and mount
it. Note: If you are using aliases with the device multipathing configuration, update the
/etc/multipath.conf file with the new volume information and the alias; otherwise the device mapper
name will show the WWID. In the example below, we have updated /etc/multipath.conf with the alias
fs_mdb1_dev.
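
As a hedged illustration, such an alias entry in /etc/multipath.conf could look like the following; the WWID
shown is constructed from the common Pure device prefix 3624a9370 plus the volume serial reported by
purevol, and should be verified on your system before use.

multipaths {
    multipath {
        # Illustrative WWID: 3624a9370 + serial of fs_mdb1_dev shown by "purevol list"
        wwid  3624a93705a9d680697f2455c00011045
        alias fs_mdb1_dev
    }
}
# Reload the multipath configuration after editing, e.g.:
systemctl reload multipathd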

root # rescan-scsi-bus.sh

[root@mongodb2 ~]# rescan-scsi-bus.sh


..
..
8 new or changed device(s) found.
0 remapped or resized device(s) found.
0 device(s) removed.

[root@mongodb2 ~]# ls -ltr /dev/mapper/fs_mdb1*


lrwxrwxrwx 1 root root 7 Feb 16 18:11 /dev/mapper/fs_mdb1_dev -> ../dm-9
lrwxrwxrwx 1 root root 8 Feb 16 18:11 /dev/mapper/fs_mdb1_dev1 -> ../dm-10



[root@mongodb2 ~]# mkdir /m01
[root@mongodb2 ~]# mount -t xfs -o nobarrier,noatime /dev/mapper/fs_mdb1_dev1 /m01
[root@mongodb2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fs_mdb1_dev1 1.0T 105G 920G 11% /m01

5.   Start the database using the mongod command.

[root@mongodb2 ~]# mongod --dbpath /m01/data --logpath /m01/log/mongod.log --fork


about to fork child process, waiting until server is ready for connections.
forked process: 33801
child process started successfully, parent exiting



Best Practices for MongoDB on Pure Storage
Linux Settings
Filesystems

We recommend using the XFS filesystem for MongoDB databases (with both the MMAPv1 and WiredTiger
storage engines) over other filesystems. Based on our internal testing, we have seen better performance
with XFS than with EXT4.

Disable access-time updates through a mount option. Most file systems maintain metadata for the last
time a file was accessed. While this may be useful for some applications, in a database it means the file
system issues a write every time the database accesses a page, which negatively impacts performance.

Use the noatime mount option to disable access-time updates, as in the sketch below.
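
A minimal sketch, assuming an illustrative device name and mount point:

# Create an XFS filesystem on the Pure volume and mount it without access-time updates
mkfs.xfs /dev/mapper/fs_mdb1_data1
mkdir -p /data
mount -t xfs -o noatime,nobarrier /dev/mapper/fs_mdb1_data1 /data
# Add a matching noatime entry to /etc/fstab to make the mount persistent across reboots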

Read-Ahead settings

Set the readahead value to 32 (16 KB) or to the size of most documents, whichever is larger.

If the readahead size is much larger than the size of the data requested, a larger block will be read from
disk than necessary, which wastes both I/O and memory.

Use the blockdev --setra <value> command to set the readahead value, as shown below.
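
A minimal sketch, assuming the same illustrative device as above (blockdev counts in 512-byte sectors,
so 32 sectors equals 16 KB):

blockdev --setra 32 /dev/mapper/fs_mdb1_data1    # set readahead to 32 sectors (16 KB)
blockdev --getra /dev/mapper/fs_mdb1_data1       # verify the current readahead setting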

Disable Transparent Huge Pages

MongoDB workloads perform poorly with transparent huge pages (THP) because they tend to have
sparse rather than contiguous memory access patterns. It is a best practice to disable transparent huge
pages for MongoDB by running the following commands.

echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag


echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled

TRIM/UNMAP

When files are deleted in Unix, the operating system marks the blocks used by the files as free in the file
system index, but it does not inform the SSD. This means the SSD cannot reclaim that space until it is
informed of the action by the OS. TRIM is the command the OS sends to inform the SSD of all the blocks
that are free in the file system.

TRIM can be enabled at the file system level using the mount option named "discard". This informs the
SSD in real time of the blocks that were freed when a file is deleted, which allows the SSD to perform
defragmentation and deletion of those internal blocks. The "discard" option is supported on the BTRFS,
EXT3, EXT4, JFS and XFS filesystems.

The MMAPv1 storage engine of MongoDB uses memory-mapped files to map the data files into a region of
virtual memory. By using memory-mapped files, MongoDB can treat the contents of its data files as if they
were in memory. Due to the architectural implementation of MMAPv1, the files are allocated in chunks of
up to 2 GB in size. Hence, a multi-terabyte MongoDB database using the MMAPv1 storage engine can
have numerous memory-mapped files.

We recommend not using the discard mount option to achieve TRIM, but rather running the fstrim
command on demand or on a schedule. Running fstrim once a day or once a week is more than sufficient
to reclaim the freed-up space without incurring any performance overhead.

Invoke the fstrim command with the mount point as the argument.

[root@mongodb1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fs_m50_mdb1_mperf1 1.0T 529G 495G 52% /p02

[root@mongodb1 ~]# fstrim /p02


[root@mongodb1 ~]#
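
To run it on a schedule, a cron entry along the following lines could be used; the file path, schedule and
mount point are assumptions for illustration.

# /etc/cron.d/fstrim-mongodb: trim the MongoDB mount point every Sunday at 02:00
0 2 * * 0  root  /usr/sbin/fstrim /p02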

Database Cloning

To use Pure Storage FlashRecover snapshots to take snapshots of a MongoDB database, place the
database files and journal on the same volume. As performance on Pure Storage does not depend on the
volume/LUN count, it is perfectly fine to place both the data files and the journal on the same volume.

If multiple MongoDB databases are hosted on the same host and are candidates for database cloning,
place each database on its own volume/LUN.

Enable journaling for MongoDB databases. To get a consistent copy of the database, journaling must be
enabled; without journaling, there is no guarantee that the snapshot will be consistent or valid.



Test Environment
Server Configuration

One chassis with 4 identical Intel CPU-based SuperMicro servers (product name SYS-F618R2-RTPT+)
was used for the MongoDB scalability and consistent-performance testing. Red Hat Enterprise Linux 7.2
was installed on the local disks.

Figure 4. Test environment

Component          Description
Processor          2 x Intel Xeon E5-2670 v3 2.3 GHz (2 CPUs with 12 cores each)
Memory             128 GB @ 2.1 GHz (8 x 16 GB)
Connectivity       2 x 10G network ports; 2 x 1G network ports
Operating System   Red Hat Enterprise Linux 7.2 (Maipo)

Table 2. Server Configuration

FlashArray Configuration
The FlashArray//m50 configuration comprised two active/active controllers, and the base chassis included
20 TB of raw SSD storage for a total of 11.17 TB usable. 4 x 10 GbE network interfaces from each
controller were connected to dual Cisco 9K switches in a highly redundant configuration.

Note: No special configuration or settings were made on the array, and there are no performance knobs
to tune on the FlashArray.

Component      Description
FlashArray     //m50
Capacity       20 TB raw (base chassis); 11.17 TB usable
Connectivity   4 x 10 Gb/s redundant Ethernet ports; 1 Gb/s redundant Ethernet (management port)
Physical       3U; 5.12" x 18.94" x 29.72" FlashArray//m chassis
O.S. Version   Purity 4.5.6

Table 3. FlashArray Configuration

Connectivity
The servers were connected to the storage array over the iSCSI protocol through the Cisco 9K switches.
To improve bandwidth and provide redundancy to the servers, the two 10G interfaces were bonded
together using load-balancing (round-robin) mode. See Appendix-A for the bonding settings used on our
Linux hosts.



Summary
Pure Storage transforms MongoDB environments by reducing storage complexity while improving
performance, resiliency and efficiency. With the FlashArray's industry-leading data reduction, coupled
with features like snapshots and cloning and storage costs lowered by up to 50%, Pure Storage is the
most viable solution, offering:

•   Scalable and consistent performance at sub-millisecond latency
•   Instant database cloning that enables agile DevOps
•   High availability with active/active controllers and always-on encryption across the storage
•   Industry-leading data reduction



Appendix-A Network bond settings
The following are the network bond settings used in our environment, with the iSCSI protocol enabled for
connectivity between the host and the storage.

$cat /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NAME=bond0
IPV6INIT=no
IPADDR=192.168.1.71
NETMASK=255.255.255.0
BONDING_OPTS="miimon=100 mode=0"
NM_CONTROLLED=no

$cat /etc/sysconfig/network-scripts/ifcfg-ens7f0
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV6INIT=no
NAME=bond0-slave0
DEVICE=ens7f0
ONBOOT=yes
MASTER=bond0
SLAVE=yes
NM_CONTROLLED=no

$cat /etc/sysconfig/network-scripts/ifcfg-ens7f1

TYPE=Ethernet
BOOTPROTO=none
IPV6INIT=no
#NAME=ens7f1
NAME=bond0-slave1
DEVICE=ens7f1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
NM_CONTROLLED=no



References
The following documents and links were referenced in preparing this document.

1.   MongoDB 3.2 Manual
     https://docs.mongodb.org/manual/

2.   Yahoo! Cloud Serving Benchmark (YCSB)
     https://github.com/brianfrankcooper/YCSB/wiki

3.   Red Hat Support site
     https://access.redhat.com/support



About the Author
Somu Rajarathinam is a Solutions Architect at Pure Storage, responsible for defining ideal database
solution architectures for Pure products and customers, performing benchmarks, and preparing reference
architectures for database systems on Pure.

Somu has over 20 years of experience with Oracle databases and specializes in performance tuning,
dating back to his days at Oracle Corporation, where he was part of the Systems Performance Group
(SPG) and later the Oracle Applications Performance Group. During his career with Oracle Corporation,
Logitech, Inspirage and Autodesk, he wore multiple hats, ranging from providing database and
performance solutions to managing infrastructure, database and application support hosted in-house
and on cloud platforms.

Twitter: @purelydb




Pure Storage, Inc.


Twitter: @purestorage
www.purestorage.com

650 Castro Street, Suite #260


Mountain View, CA 94041

T: 650-290-6088
F: 650-625-9667

Sales: sales@purestorage.com
Support: support@purestorage.com
Media: pr@purestorage.com
General: info@purestorage.com

