
Overview

The Application Modernization Scorecard can be used to evaluate the suitability of MongoDB for both new applications and application modernizations.
It scores MongoDB and other databases against multiple criteria in each of the following categories of application requirements:

- Data modeling
- Query requirements
- Performance & scalability
- Availability & disaster recovery
- Operational management
- Deployment model & TCO

How to use the MongoDB Modernization Scorecard

1. The user creates a copy of the scorecard spreadsheet for each application being assessed.
2. The user fills in the application details:
- Application Name
- Description
- Application Owner
- Alternative DB
3. The user selects a WEIGHT for each of the criteria defined in the scorecard, according to its importance to the application.
4. The user fills out the comments as they apply to the application being assessed, for both databases.
5. A MongoDB Solutions Architect or MongoDB Professional Services Consultant enters the relevant RATING.
6. The user assesses MongoDB and the alternative being compared, with the help of a MongoDB Solutions Architect or MongoDB Professional Services Consultant.

Scoring
Scores are calculated for each database by multiplying the WEIGHT (importance) by the RATING (suitability) for each of the specified application requirements. The overall database score is the sum of the individual application requirement scores, indicating which of the databases is more suitable for the application being considered.

Metrics
WEIGHT
0: The requirement is not applicable for the application
1: The requirement is useful, but not critical for the success of the application
2: The requirement is an important capability for the application now, or in the future
3: The requirement is a critical capability for the application
RATING
0: The database does not satisfy the application requirement, or only does so with significant application re-engineering
1: The database can partially satisfy the application requirement, but only with reduced functionality or with some application re-engineering effort
2: The database fully satisfies the application requirement, but with some non-functional impact to the application (e.g. performance)
3: The database fully satisfies the application requirement
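The weight and rating scales above combine into the overall score described in the Scoring section. A minimal sketch of that arithmetic in Python (the criterion values are hypothetical, not taken from a real assessment):

```python
# Scorecard arithmetic: overall score = sum of WEIGHT x RATING per requirement.
WEIGHTS = {"Not Used": 0, "Useful": 1, "Needed": 2, "Critical": 3}

def total_score(assessment):
    """assessment: list of (weight_label, rating) pairs, rating in 0..3."""
    return sum(WEIGHTS[label] * rating for label, rating in assessment)

# Hypothetical three-criterion assessment for two databases.
mongodb_ratings     = [("Critical", 3), ("Needed", 2), ("Useful", 1)]
alternative_ratings = [("Critical", 1), ("Needed", 1), ("Useful", 2)]

print(total_score(mongodb_ratings))      # 3*3 + 2*2 + 1*1 = 14
print(total_score(alternative_ratings))  # 3*1 + 2*1 + 1*2 = 7
```

Note that a "Not Used" weight of 0 zeroes out the criterion regardless of rating, which is why non-applicable requirements never affect the comparison.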
Web scale data for modern applications
What are some of the things you wish you could do with your technology that you can’t do currently? What do you think it would take to get there?
Tell me about a time that you couldn’t evolve an existing app to keep up with the requirements. What prevented you from iterating?
Data Complexity & Variety (how complex is the info architecture and how many different complex shapes)
What limits are you hitting with your RDBMS? Where is the bottleneck if you reached one?
Workload (amount of reads and/or writes of what size data per second, varying how over a 24 hour period)
Bulk Data Import/Export performance requirements
Do you expect major growth in the future? In other words, what kind of scale requirements do we expect?

Programmable Infrastructure / Cognitive Automation
How would you describe the resilience level of your current systems of record and mainframe systems? Help me understand the types of operational issues you’ve had with them in the past 12 months. What impact did they have?
If you had the opportunity to define them, what key capabilities would you add to the way you ensure systems are available and the business continues to operate? Why would those be important?
What are your biggest risk factors in maintaining uptime and meeting your SLAs?
What are the impacts to your business of your current systems going down or failing to meet performance SLAs? Tell me about the last time that happened.
How would you rate your ability to respond to requests from the business? How quickly can your teams prototype new apps or iterate on existing apps?
Tell me about how development and ops work together to bring an app to production.
Do you have the need to support infrastructure in a private cloud as well as a public cloud for some applications? If so, how?
Do you have the need to support infrastructure across multiple public cloud service providers for some applications? If so, how?
What are the driving factors for the need to connect to public cloud from your private data center?

Security
If you had the opportunity to define them, what capabilities are you missing today that you would need to ensure the security of your environment and data? Why would those be important?
What is the potential impact of failing to have those capabilities in place?
What would the impact be of a data breach, whether it was of data in flight or data at rest?
What type(s) of security and/or data redaction strategies do you have in place for applications accessing the data in the mainframe? How is filtering handled (e.g. personal information)? What is the best way to handle it in your opinion?
What data exposure are you comfortable with internally? How important is encryption of data regardless of where it is? Why?
How do you currently manage role-based access to that data?
How important is it to connect to your private LDAP from within the cloud services?

Total Cost of Ownership

How many MIPS do you currently have provisioned for your mainframe in total?
What are the strategies to manage/control the increasing MIPS and related costs? How effective have these strategies been so far? What else have you already planned?
What is the impact of failing to implement these cost controls by <DATE>? How does achieving (or missing) that date impact you personally?
The high cost of supporting traditional relational databases can impact a company’s margin and limit its ability to invest in innovation. Describe how reduced operational costs could be used in your organization to improve profitability and/or invest in innovation.
What are the top 3 factors driving the cost to run your existing applications and data platforms today?
How long are your typical development cycles? What percentage of projects are delivered on time, on budget and on spec?

Legacy & Mainframe modernization
A mainframe’s business criticality makes it subject to very tight Change Management restrictions. Tell me about the data elements you would like to store but currently can’t keep because data models haven’t kept up with business requirements. How are you overcoming those issues?
Timely access to data for all teams is essential to explore new business opportunities. What types of data are stored on the mainframe today? How is that data used in support of the business? How easy is it for your teams to access data in the mainframe for experimental purposes and other use cases?
How easy is it to build new applications that make use of the mainframe data?
How is your organization looking to use social/geospatial/etc. data to gain additional customer insights on DB2? How rapidly are you able to build new applications to use this data?
Who are your top competitors, and what innovations are they releasing to market that cause you the greatest concern?
What percentage of your time is dedicated to innovation vs. “keep the lights on” projects?
What are the most innovative apps that you’re focused on today?
Cloud native databases: MongoDB Atlas vs Postgres
Freedom from platform lock-in — MongoDB Atlas: Yes; 70+ regions on Azure, AWS, and GCP. MongoDB can also be downloaded and run on self-managed infrastructure on-premise or in the cloud. — Postgres: Postgres version differs in each cloud service provider; Postgres on AWS Aurora is different from Postgres on RDS, which is different from Azure, and from GCP.

Auto-scale in response to application demand — MongoDB Atlas: Yes. — Postgres: No native sharding to scale out writes or databases; 3rd party solutions trade away core RDBMS features (ACID, Foreign Keys, JOINs).

Fully compatible with MongoDB API Yes NA

Ad hoc, conversational multi-document ACID transactions across partitions (shards) — MongoDB Atlas: Yes; snapshot isolation with all-or-nothing execution; distributed transactions across partitions. — Postgres: Full transaction support.
JSON data type support — MongoDB Atlas: BSON (Binary JSON); basic JSON plus longs, doubles, floats, decimals, dates, and times. — Postgres: Tables, rows and columns look nothing like objects in code; schema modifications impact app performance and availability.
Maximum JSON document size — Postgres: NA.
Maximum Query Execution time — MongoDB Atlas: Unlimited; 60 seconds for a transaction (tunable). — Postgres: Query performance in MongoDB is far higher; however, these are two different query engines.
JSON schema for data governance controls — MongoDB Atlas: Yes; schema controls enforced in the database. — Postgres: Requires proprietary SQL extensions; primitive JSON only, no schema governance, black box: no statistics for query optimization.
Integrated text search — MongoDB Atlas: Yes; native text indexes and Lucene indexes with Atlas Search. — Postgres: NA.

Integrated graph traversals from the MongoDB API — MongoDB Atlas: Yes. — Postgres: NA.
Materialized Views — MongoDB Atlas: Yes; on-demand materialized views. — Postgres: Yes.
UDFs — MongoDB Atlas: Yes; custom aggregation expressions. — Postgres: Yes.
Cloud deployment models
Multi-region cluster — MongoDB Atlas: Yes (Atlas Global Clusters); active-active, write everywhere with each region mastering its own partitions, eliminating data loss and maintaining strong data consistency. — No built-in failover; requires 3rd party clustering, with 30-180 second failover.
Sharding Flexibility — No; hash sharding only, no way to refine shard key; maximum shard size of 10GB drives rapid cost escalation even for smaller data sets. — No native sharding to scale out writes or databases; 3rd party solutions trade away core RDBMS features (ACID, Foreign Keys, JOINs).
Cloud provider services
Monitoring and optimization tools — MongoDB Atlas: Real-time performance tracking and historical monitoring across over 100 metrics; alerting included. — Postgres: Yes.
Continuous backup with on-demand restore — MongoDB Atlas: Yes. — Postgres: No.
Encryption of data in-motion, at rest, and in use at field-level — MongoDB Atlas: Yes. — Postgres: Partial; no field-level encryption for protecting data in use; plaintext can be read from memory.
Granular role-based access control — MongoDB Atlas: Yes. — Postgres: Yes.
Integrated Data Lake for long-running analytics queries — MongoDB Atlas: Yes (Atlas Data Lake); query via MQL and SQL, supports MongoDB data and a variety of external formats, including Avro, Parquet, etc. — Postgres: NA.
Embeddable database for mobile devices — MongoDB Atlas: Yes (MongoDB Realm). — Postgres: No.
Upgrade path between different releases — MongoDB Atlas: Access to, and upgrade between, all supported versions: v3.6, v4.0, v4.2, v4.4. — Postgres: Different versions, different tools from on-prem to cloud.
Access to MongoDB expertise — MongoDB Atlas: hundreds of engineers with multi-year MongoDB development, support, and consulting experience. — Postgres: Yes.
Single bill for CSP, including all CSP services and the database — MongoDB Atlas: Yes; MongoDB Atlas can be purchased via the Azure Marketplace. — Postgres: Yes.

Monthly Cost $4,343


CosmosDB DynamoDB

No
DynamoDB is based on a proprietary code base.
Developers are not able to access the underlying
No source code.

Yes
Autoscaled RU/s are 50% more expensive Partial support
Key-value queries only. For more complicated
queries, data must be copied into additional AWS
No: Incomplete technologies, which adds latency, cost, and
Imitation API. Fails 60+% of correctness tests. complexity.
Most closely resembles MongoDB in 2016

Eventually consistent by default; Consistent reads


No are double the cost. There is also no guarantee of
Stored procedures only, cannot be executed via consistency for reads that rely on certain types of
the MongoDB API indexes. Reads leveraging global tables are
No cross-partition transactions eventually consistent.

JSON-formatted documents with limited support


for data types. Indexing only on top level JSON
keys.
No support for dates and support for only 1
numeric type. Types must be preserved on the
Partial BSON support client, which adds app complexity and limits
Some data types cannot be used or modified database re-use.
2MB 400 KB record size limit.
Indexes are sized and provisioned separately
from underlying data. Each index is a materialized
view and an additional cost. Writes require
updates to table indexes, which also incur
DynamoDB write units.

DynamoDB offers Global Secondary Indexes or


Local Secondary Indexes: hash or hash-range
only. Index keys can only be String, Number,
30 seconds Binary data types
5 seconds for a single operation, ie a transaction
No
Schema controls must be written in the
application No native support for data governance.

No
Requires integration with separate search engine No
No
Requires separate Gremlin API, adds cost and
dev complexity. No support for the $graphLookup
stage. No
Indexes are materialized views that are an
additional cost. Hash or hash-range only; index
keys can only be String, Number, Binary data
No types

Not for the MongoDB emulated API Yes

Yes: Multi-Master
Conflict resolution can result in data loss,
eventual data consistency by default No

Yes
Shard by range, hash, zone (Atlas Global
Clusters). Refinable shard keys (4.4), shards can
be multi-TB No
Yes
Only 15 DynamoDB metrics are reported by AWS CloudWatch

On-demand and continuous backups available


but many configuration settings need to be
recreated on tables; adds pricing complexity

DynamoDB offers both on-demand and


continuous backups. However, many
configuration settings (IAM policies, cloudwatch
metrics, etc) need to be restored manually on the
tables.

No
Snapshots taken every 4 hours; restore via service ticket to Azure engineers (PiT restore announced at Build 2020 but not yet available)

Users pay for both backing up data and restoring data, at different rates depending on whether they’re using on-demand or continuous backup, which adds pricing complexity. No ability to query snapshots to restore granular data quickly.

Encryption at rest available for new tables; adds


cost and pricing complexity
Encryption at rest available only for new tables;
not available for DynamoDB streams, which are
used for change data capture and global
replication. Partition and sort keys are also not
encrypted.

While enabling encryption at rest is offered at no additional charge, customers are charged for all calls to AWS Key Management Service, increasing cost and complexity.

Yes: with field-level encryption, MongoDB engineers have no access to plaintext.
Read-only or full database access: No / No
Partial: Azure Synapse Link (still in preview) syncs data into a separate analytical store within Cosmos DB that can be queried via Azure Synapse Analytics. This duplicates data, increases cost, and requires the use of separate APIs from transactional workloads.
Partial: AWS Athena & RedShift have different SQL query engines based on Presto and Postgres.
No
Customer must export existing data, then load to new cluster, and ensure they create the appropriate indexes
No / No
Yes / Yes

Pricing scenario: 30,000 database operations per second, with a 50/50 key-value read/write pattern; 2KB document size; 1TB of storage; database cluster provisioned to the US West Azure region; 1 month of provisioned capacity = 730 hours
Monthly Cost: $10,105 / $9,585
Skylab - Agroclimatic Station - MongoDB Modernization Scorecard
Application Name Skylab - Agroclimatic Station
Description The AgroClimatic station is a system that collects and processes data related to weather and climate conditions for agricultural purposes
App Owner Name and Contact Info IonicSmart | duvan.mejia@ionic.io
Alternative DB Name Postgres

Application Requirement | Description | Weight

Data Model
Accommodate frequent application changes | Evolving application requirements demand frequent schema changes. | Critical 3
Enable faster feature development and iteration | Support developer ease-of-use and productivity with a data model aligned to the structure of objects in modern programming languages. | Needed 2
Store multi-structured data | The application needs to manage more than just structured, tabular data. The data structures are becoming less consistent, and increasingly each record needs to store different attributes. | Critical 3
Store JSON data | Ability to natively store, index, query and manipulate JSON data. | Critical 3
Store binary data | Ability to store binary data such as images, documents, movie files, etc. in the database, without relying on external filesystems. | Not Used 0
Ensure the application can, by default, read the most recent version of the data | Strong consistency. Ability for an application to read its own writes. | Critical 3
Transactional guarantees per object / record | Enforcement of ACID guarantees for database operations on data in a single document. | Critical 3
Transactional guarantees across multiple records | Enforcement of ACID guarantees for database operations that span multiple objects/documents. | Critical 3
Foreign Key controls | Applications require foreign key enforcement, to restrict data being added without valid foreign keys. Note: intermediate document data modeling is likely necessary to determine if multiple records can be represented by one document. Consult MongoDB architects for guidance if assistance is needed. | Critical 3
Data governance controls | Enforce structure of data stored in records, including presence of fields, data types, data formats, and permissible values. | Critical 3
Centralized data governance controls | Enforce a centralized schema with versioning that cannot be modified by developers. (Note: this approach is increasingly regarded as an anti-pattern in agile development methodologies.) | Critical 3
Use database as Operational Data Store | The application utilises the database to improve application response time (order of milliseconds) by accessing the data from a cached copy stored in the database. | Needed 2
Main data store | Permanent persistent data is stored in the database as the primary data storage solution. | Critical 3
Store time series data | The application stores duplicated information based on the time series related to the entry in the database, e.g. stock tickers or monitoring data over time. | Not Used 0
Use data in a catalog application | The application presents records to the user with attached tags and allows users to search on any combination of tags to narrow down their search results. | Not Used 0
Use data to build graph networks | The application uses the database to build graph structures with nodes which contain edges and properties in the stored data. | Needed 2
Data is used in hierarchical aggregation | Event streams from various servers are pushed into the database, and aggregated over a multitude of time spans. Pre-aggregated data will speed up query time when navigating application front ends. | Useful 1
Unify segmented data sets | Multiple applications and data stores have information about a specific data entry, but the business requires a single unified business view of the entity. | Needed 2
Frequent 1:1 relationships | Data is modelled in 1:1 relationships and is accessed together on a regular basis. | Useful 1
Frequent 1:Few relationships | Data is modelled in a 1:many relationship with a limited or small set of values. | Useful 1
Frequent 1:very many relationships | Data is modelled in a 1:many relationship with a large or unlimited number of related documents. | Useful 1
Query and combine data with multiple potential relationships | Data with Many:Many relationships. Requirement to query entities where there are multiple relationships between different attributes, or to perform ad-hoc queries that need to combine previously unrelated entities. | Useful 1

Totals 129

Query Requirements
Key-Value access | Data is always accessed using a unique identifier called a key. | Needed 2
Range access | Range identifiers allow access to multiple documents that satisfy the specified query. | Needed 2
Aggregation | Data can be grouped and manipulated to yield calculated properties or statistical calculations, allowing queries in real time. | Needed 2
Geospatial access | Application queries related to geospatial coordinates, with geometry specifiers like box, sphere and center. | Needed 2
Text search | Data can be accessed using free text search within the contents of the data. | Needed 2
Graph lookup | Access data according to a graph network derived from data nodes with defined edges and properties. | Needed 2
Restrict data responses | Allow the application to specify the data required, to limit the response size or restrict access to data. | Needed 2
Control access to sensitive data | Restrict access to subsets of data to applications as part of a larger data set. | Needed 2
Flexible access paths to the data | Fast access to data by any attribute, enabled by secondary indexes. | Useful 1
Developer ease-of-use for fast query development | Availability of client drivers that implement the methods and functions of native programming languages. | Critical 3
Query across hierarchical and connected data structures to build the relationship of entities | Recursive JOIN (equivalent to the Oracle CONNECT_BY condition). | Useful 1
Support for querying the archived data | Requirement for querying data in an S3 bucket / running federated queries on the archived data in S3. | Useful
Application relies on centralized logic that is stored and executed in the database | Stored procedures, triggers and User Defined Functions. If an existing application relies on database-side code, a rewrite of the database-side logic will be required when migrating to MongoDB. | Critical 3
Generate reports, dashboards and visualisations from the application's data | Expose the database as an ODBC data source for querying with SQL-based BI and analytics tools. | Critical 3

Totals 81

Performance & Scalability
Lowest latency database operations for best customer experience | Read and write data with consistently low response times, even at scale, with no requirement for external cache tools. | Critical 3
Affordably accommodate growing data volumes, expanding user base | Scale the database across commodity hardware on-prem or in the cloud as the application grows, while maintaining full database functionality and without imposing application changes. | Critical 3
Distribute data across multiple geographic areas for localised reads and writes | Geographically distribute the database across regions to distribute the data to designated locations for localised reads and writes. | Useful 1
Localise data for faster reads across global locations | Geographically distribute the database across regions to co-locate data close to users, with read access to the global data set through replication. | Useful 1
Caching | Caching latest/frequently accessed or inserted data. | Critical 0
Automatically distribute sharded data | Distributing data across a multitude of shards automatically, with no downtime to expand data storage. | Needed 2
Scale data storage past hardware limitations | Distributing data across multiple shards; horizontal scaling on commodity hardware at lower cost. | Critical 3

Totals 39

Availability & Disaster Recovery

Enforce data durability to protect against data loss | Safely persist data to storage that will survive hardware and software failures affecting multiple nodes, racks, and complete data centers. | Critical 3
Maintain service continuity (high availability) in multiple failure scenarios, in the event of hardware or software failures that can impact complete data centers, or during maintenance | Maintain replicas of the data across a cluster of nodes, within and across datacenters, with the ability to automatically failover in the event of an outage. Uses rolling restarts to maintain continuity during upgrades and maintenance. | Critical 3
Recover data in the event of local disaster or data corruption | Database backup with the ability to quickly recover state to a specific point in time. | Critical 3

Totals 27
Security
Encryption at rest | Encrypt data that is stored in the database at the point when it is written to disk, for regulatory purposes. | Critical 3
Encryption of communication | TLS/SSL encryption for communication with the database, securing data over the wire. | Critical 3
Client-Side Field Level Encryption | Drivers capable of encrypting data prior to transmitting it over the wire to the server. Only applications with access to the correct encryption keys can decrypt and read the protected data. Deleting an encryption key renders all data encrypted using that key permanently unreadable. | Needed 2
Secure authorization | Provide a secure method of authorizing a user for access to the database, e.g. SCRAM-SHA-1 or x509. | Needed 2
Access Controls | Enforce fine-grained access controls to ensure data security and to limit access to sensitive data, complying with regulatory requirements. | Needed 2
Audit logs for database operations | Maintain an audit log of all database operations for forensic analysis and regulatory purposes. | Needed 2
Enterprise access management | Allow the use of enterprise access management when interacting with the database, e.g. LDAP & Kerberos. | Needed 2

Totals 48

Platform
Number of applications built on this DB; does each application have its own dedicated DB / schema? | Critical 3
Number of developers building applications on this platform, and how they are distributed geographically. | Useful 1
Number of operational support staff supporting this platform, broken down by QA, production and any other environments, and how they are distributed geographically. | Needed 2
Product Support | Product support and expectations with respect to SLAs. | Needed 2
System agility | Application agility is the ability to maintain your systems in response to market realities. | Needed 2
Data as a Service | Are there any plans to expose the data via REST APIs? Is this partly to fulfil Data as a Service initiatives? | Critical 3
Single View of Customer | Enabling a single view of a customer (either being currently developed or desired for the future)? | Needed 2

Totals 45

Operational Management
Native service automation | Automation tools to deploy and manage database clusters. | Critical 3
Service & Hardware Monitoring | Tools to monitor database and hardware performance. | Critical 3
Automated Backup solutions | Automated database backup and restore functionality with point-in-time recovery. | Critical 3
Technology Stack Integration | Automation of integration with existing tech stack configuration management and orchestration tools, through API. | Critical 3
Graphical User Interface for Data Discovery | GUI-based tool to explore data in the database, analyze data structures and identify slow operations. | Not Used 0
Global oversight of complete IT infrastructure from a single management UI | Issues that risk affecting customer experience can be quickly identified and isolated to specific components – whether attributable to devices, hardware infrastructure, networks, APIs, application code, databases and more. Enabled by integration with APM platforms. | Critical 3

Totals 45
Deployment Model
Cloud deployable | Take advantage of cloud architecture for elastic, multi-region scale-out. Support for public and private cloud platforms. | Needed 2
Multi-Cloud deployment | Deploy a cluster across multiple cloud providers (e.g. AWS, Microsoft Azure, Google Cloud) using the same interface. | Useful 1
Consume the database as a service | Reduce operational overhead by connecting the application to a database hosted in the cloud, with operations run by the database provider. | Critical 3
Self hostable | Host the database in private data centers for control over data. | Critical 3
Workload Isolation | Run operational and analytical workloads in the same cluster. | Critical 3
Support for hybrid cloud environment | DB deployed in an environment that uses a mix of on-premises, private cloud and third-party public cloud services, with orchestration between the two platforms. | Critical 3

Totals 45

MongoDB Postgres
Rating Score Comments Rating Score
Data Model
3 9 1 3
3 6 1 2
3 9 1 3
3 9 1 3
3 0 1 0
3 9 1 3
3 9 1 3
3 9 1 3
3 9 1 3
3 9 1 3
3 9 1 3
3 6 1 2
3 9 1 3
3 0 1 0
3 0 1 0
3 6 1 2
3 3 1 1
3 6 1 2
3 3 1 1
3 3 1 1
3 3 1 1
3 3 1 1

129 43

Query Requirements
3 6 2 4
3 6 2 4
3 6 1 2
3 6 1 2
3 6 1 2
3 6 1 2
3 6 1 2
3 6 1 2
3 3 1 1
3 9 1 3
3 3 1 1
3 1
2 6 1 3
3 9 1 3

78 31

Performance & Scale


3 9 1 3
3 9 1 3
3 3 1 1
3 3 1 1
3 0 1 0
3 6 1 2
3 9 1 3

39 13

Availability & Disaster Recovery


3 9 1 3
3 9 1 3
3 9 1 3
27 9

Security
3 9 1 3
2 6 1 3
3 6 1 2
3 6 1 2
2 4 1 2
3 6 1 2
3 6 1 2

43 16

Platform
3 9 1 3
3 3 1 1
3 6 1 2
3 6 1 2
3 6 1 2
2 6 1 3
2 4 1 2

40 15

Operational Management
3 9 1 3
3 9 1 3
3 9 1 3
3 9 1 3
3 0 1 0
3 9 1 3

45 15

Deployment Model
3 6 1 2
3 3 1 1
3 9 1 3
2 6 1 3
3 9 1 3
3 9 1 3
3
42 15
Postgres DB2 DB3
Comments Rating Score Comments Rating
Data Model
2 6 3
1 2 1
1 3 2
0 0 3
0 0 3
1 3 3
0 0 0
0 0 0
1 3 1
1 3 1
0 0 0
2 4 3
2 6 2
3 0 3
1 0 2
1 2 0
1 1 3
2 4 3
3 3 2
1 1 3
1 1 3
2 2 3

44

Query Requirements
3 12 3
1 4 3
1 2 2
0 0 0
0 0 1
0 0 0
2 4 3
2 4 3
0 0 3
1 3 2
0 0 3
2 2
1 3 3
2 6 2

38

Performance & Scale
2 6 2
1 3 3
1 1 1
3 3 3
2 3
1 2 2
1 3 1

18

Availability & Disaster Recovery
2 6 2
2 6 1
1 3 0
15

0 0 3
1 3 2
0 0 2
3 6 3
3 6 1
2 4 1
0 0 3

19

2 6 3
1 1 1
1 2 3
2 4 3
1 2 2
1 3 3
2 4 2

22

Operational Management
0 0 3
2 6 3
1 3 2
1 3 1
2 0 2
2 6 2

18

Deployment Model
1 2 1
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
3
2
DB3 DB4 DB5
Score Comments Rating Score Comments Rating Score Comments
Data Model
18 1 18 1 18
2 1 2 1 2
6 1 6 2 12
0 1 0 1 0
0 2 0 2 0
9 3 27 3 81
0 3 0 3 0
0 3 0 3 0
3 3 9 3 27
3 3 9 3 27
0 3 0 3 0
12 2 24 2 48
12 2 24 2 48
0 1 0 1 0
0 3 0 3 0
0 2 0 2 0
3 2 6 2 12
12 3 36 0 0
6 1 6 1 6
3 2 6 2 12
3 2 6 2 12
6 1 6 1 6

98 185 311

Query Requirements
36 1 36 1 36
12 2 24 2 48
4 2 8 2 16
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
12 3 36 3 108
12 3 36 3 108
0 3 0 3 0
6 1 6 1 6
0 1 0 1 0
1 1
9 3 27 3 81
12 3 36 3 108

103 209 511

Performance & Scale


12 1 12 1 12
9 2 18 1 18
1 1 1 0 0
9 2 18 2 36
2 1
4 1 4 0 0
3 1 3 0 0

38 56 66

Availability & Disaster Recovery


12 1 12 0 0
6 2 12 1 12
0 2 0 1 0
18 24 12

Security
0 3 0 0 0
6 3 18 1 18
0 0 0 0 0
18 3 54 3 162
6 3 18 3 54
4 2 8 2 16
0 2 0 2 0

34 98 250

Platform
18 2 36 2 72
1 2 2 2 4
6 3 18 3 54
12 3 36 3 108
4 2 8 2 16
9 1 9 1 9
8 1 8 1 8

58 117 271

Operational Management
0 3 0 2 0
18 2 36 2 72
6 3 18 2 36
3 2 6 2 12
0 2 0 1 0
12 2 24 2 48

39 84 168

Deployment Model
2 3 6 1 6
0 1 0 0 0
0 3 0 0 0
0 3 0 3 0
0 1 0 1 0
0 2 0 2 0
4
2 6 6
DB6 DB7
Rating Score Comments Rating Score
Data Model
1 18 3 54
0 0 3 0
2 24 3 72
1 0 3 0
1 0 3 0
2 162 3 486
3 0 3 0
3 0 3 0
3 81 3 243
3 81 3 243
3 0 3 0
1 48 3 144
2 96 3 288
0 0 3 0
2 0 3 0
0 0 3 0
1 12 3 36
0 0 3 0
3 18 3 54
3 36 3 108
3 36 3 108
2 12 3 36

624 1872

Query Requirements
3 108 3 324
2 96 3 288
1 16 3 48
0 0 3 0
1 0 3 0
0 0 3 0
3 324 3 972
2 216 3 648
3 0 3 0
0 0 3 0
2 0 3 0
1 1
3 243 3 729
1 108 3 324

1111 3333

Performance & Scale


0 0 3 0
1 18 3 54
0 0 3 0
1 36 3 108
1
0 0 3 0
0 0 3 0

54 162

Availability & Disaster Recovery


1 0 3 0
0 0 3 0
1 0 3 0
0 0

Security
2 0 3 0
1 18 3 54
2 0 3 0
1 162 3 486
3 162 3 486
3 48 3 144
2 0 3 0

390 1170

Platform
1 72 3 216
2 8 3 24
3 162 3 486
3 324 3 972
0 0 3 0
0 0 3 0
1 8 3 24

574 1722

Operational Management
2 0 3 0
2 144 3 432
2 72 3 216
1 12 3 36
2 0 3 0
1 48 3 144

276 828

Deployment Model
1 6 3 18
1 0 3 0
1 0 3 0
3 0 3 0
1 0 3 0
2 0 3 0
6 18
DB8
Comments Rating Score Comments

3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3

Query Requirements
3 972
3 864
3 144
3 0
3 0
3 0
3 2916
3 1944
3 0
3 0
3 0
1
3 2187
3 972

9999

Performance & Scale


3 0
3 162
3 0
3 324

3 0
3 0

486

Availability & Disaster Recovery


3 0
3 0
3 0
0

Security
3 0
3 162
3 0
3 1458
3 1458
3 432
3 0

3510

Platform
3 648
3 72
3 1458
3 2916
3 0
3 0
3 72

5166

Operational Management
3 0
3 1296
3 648
3 108
3 0
3 432

2484

Deployment Model
3 54
3 0
3 0
3 0
3 0
3 0
5
54

Category | Maximum | MongoDB | PostgreSQL | MongoDB % | PostgreSQL %
Data modeling | 129 | 129 | 43 | 100.0% | 33.3%
Query requirements | 81 | 78 | 31 | 96.3% | 38.3%
Performance & scalability | 39 | 39 | 13 | 100.0% | 33.3%
Availability & disaster recovery | 27 | 27 | 9 | 100.0% | 33.3%
Security | 48 | 43 | 16 | 89.6% | 33.3%
Operational management | 45 | 45 | 15 | 100.0% | 33.3%
Deployment model | 45 | 42 | 15 | 93.3% | 33.3%
Total | 414 | 403 | 142 | 97.3% | 34.3%

[Radar chart: MongoDB vs PostgreSQL percentage scores by category]
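Each percentage in the summary is a database's category score divided by the category maximum, where the maximum assumes every weighted requirement is rated 3. A minimal sketch of that calculation, using figures from the Data modeling and Total rows:

```python
# Category percentage = 100 * score / maximum, rounded to one decimal place.
# The maximum is 3 x the sum of the category's weights (rating 3 everywhere).
def category_percentage(score, maximum):
    return round(100 * score / maximum, 1)

# Data modeling: maximum 129, MongoDB 129, PostgreSQL 43.
print(category_percentage(129, 129))  # 100.0
print(category_percentage(43, 129))   # 33.3

# Overall: maximum 414, MongoDB 403.
print(category_percentage(403, 414))  # 97.3
```

These reproduce the MongoDB and PostgreSQL columns above, which is a quick way to sanity-check a filled-in scorecard.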
WEIGHT RATING
Not Used 0 0
Useful 1 1
Needed 2 2
Critical 3 3
