Performance Testing Results

Lab Configuration

• SQL Server 2005
• HP ProLiant DL385 G1
• 2 32-bit processors (1.8 GHz)
• 4 GB memory
• 4 SCSI spindles at 10,000 RPM

Extensibility Patterns
• Extension Table
• Fixed Columns

Extension Table
• Shared table with a schema common to all tenants
• Shared extension table that contains one row for each custom field value
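As a sketch, the pattern could be modeled like this in T-SQL; the table and column names (Account, AccountExtension) are illustrative assumptions, not taken from the tests:

```sql
-- Shared table: schema common to all tenants (illustrative names)
CREATE TABLE Account (
    TenantId  INT NOT NULL,
    RecordId  INT NOT NULL,
    Name      NVARCHAR(100),
    Balance   MONEY
);

-- Shared extension table: one row per custom field value
CREATE TABLE AccountExtension (
    RecordId    INT NOT NULL,   -- points back to the shared row
    ExtensionId INT NOT NULL,   -- identifies the custom field
    Value       NVARCHAR(255)   -- field value, stored as text
);
```

Storing every custom value as text lets each tenant define arbitrary fields, at the cost of type fidelity and per-field joins on retrieval.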


Extension Table – View
(one join per table)

A view is created for each tenant, joining the shared fields with the tenant's custom fields.
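Assuming a shared Account table (TenantId, RecordId, Name, Balance) and an AccountExtension table (RecordId, ExtensionId, Value) — illustrative names — a tenant view with two custom fields could be sketched as:

```sql
-- Per-tenant view: shared fields plus one LEFT JOIN per custom field
CREATE VIEW entity_tenant1 AS
SELECT a.RecordId AS Id,
       a.Name,
       a.Balance,
       e1.Value AS CustomField1,
       e2.Value AS CustomField2
FROM Account AS a
LEFT JOIN AccountExtension AS e1
       ON e1.RecordId = a.RecordId AND e1.ExtensionId = 1
LEFT JOIN AccountExtension AS e2
       ON e2.RecordId = a.RecordId AND e2.ExtensionId = 2
WHERE a.TenantId = 1;
```

Each additional custom field adds another join, which is why adding fields is expected to cost more than adding tenants or rows.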

• Retrieve a random page of records from a random tenant view
SELECT * FROM (
    SELECT *, ROW_NUMBER() OVER (ORDER BY Id) AS RowNumber
    FROM entity_tenant(x)
) AS numbered
WHERE RowNumber >= (#y) AND RowNumber <= (#z)

Randomization notes: (#y) and (#z) are random values based on the number of records; Page Size = (#z) - (#y). The ROW_NUMBER() alias must be computed in a derived table before it can be referenced in the WHERE clause.

How Does it Scale?

9 Extension Fields

• With more tenants and rows in the database, as the number of concurrent users increases, the retrieval of randomized rows causes memory pressure and, consequently, I/O activity due to paging.
• Let's validate this behavior by looking at the next chart…

How Does it Scale?
Tx/sec, CPU use, and memory (with the side effect of I/O activity), at 25 concurrent users.

Chart callouts: no relevant I/O activity at first, then memory pressure and I/O activity as load grows.

Measuring the effect of adding fields
• Increasing the number of fields impacts overall throughput by an order of magnitude
• Increasing the number of tenants and rows per tenant does not have the same impact

(25 concurrent users)

• Retrieve rows filtered by a random field value from a random tenant view
SELECT * FROM entity_tenant(x) WHERE field(n) = 'value'

How Does it Scale?
GetOne (filtering) vs. GetAll (no filtering): the behavior remains similar.

• Filtering by a value in the extension table has little effect on the overall performance
• An increase in processor usage is expected due to the row filtering; this becomes evident in the 0-fields scenario

Extension Table – View
(using PIVOT)

• Transpose the results
• Create a view for each tenant
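Under the same illustrative schema assumed earlier in this rewrite (a shared Account table plus an AccountExtension table holding RecordId, ExtensionId, Value), the PIVOT variant of the per-tenant view could be sketched as:

```sql
-- Per-tenant view using PIVOT: extension rows are transposed into
-- columns in one pass instead of one join per custom field
CREATE VIEW entity_tenant1_pivot AS
SELECT a.RecordId AS Id,
       a.Name,
       a.Balance,
       p.[1] AS CustomField1,
       p.[2] AS CustomField2
FROM Account AS a
JOIN (
    SELECT RecordId, [1], [2]
    FROM (SELECT RecordId, ExtensionId, Value
          FROM AccountExtension) AS src
    PIVOT (MAX(Value) FOR ExtensionId IN ([1], [2])) AS pvt
) AS p
  ON p.RecordId = a.RecordId
WHERE a.TenantId = 1;
```

The column list in the IN clause must be known when the view is created, so each tenant's view still has to be regenerated when fields are added.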

Comparing to PIVOT
PIVOT, a new SQL Server 2005 feature, seems to be better compared to the current query; however, 20 fields is already an edge case. The slope of the PIVOT approach shows lower throughput than the regular approach, while CPU and I/O usage remain similar.

(Chart series: current query vs. PIVOT query)

To keep in mind
• Indexes help!
– Clustered index on (tenant id, record id)
– Clustered index on (record id, extension id)
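As a sketch on hypothetical Entity and EntityExtension tables (names are illustrative, and it is assumed neither table already has a clustered index):

```sql
-- Clustered index on (tenant id, record id) for the shared table
CREATE CLUSTERED INDEX IX_Entity_Tenant_Record
    ON Entity (TenantId, RecordId);

-- Clustered index on (record id, extension id) for the extension table
CREATE CLUSTERED INDEX IX_EntityExtension_Record_Ext
    ON EntityExtension (RecordId, ExtensionId);
```

Clustering the shared table on TenantId first keeps each tenant's rows physically contiguous, so the per-tenant views touch fewer pages.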

Extension Table


• Get the entity schema from the TenantMetaDataStore DB (retrieves the entity field definitions)
• Insert shared data into the TenantDataStore DB
• Insert custom data into the TenantDataStore DB
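The three steps above could be sketched as follows; the table and column names and the sample values are illustrative, and the metadata lookup is shown only as a comment:

```sql
-- 1. Read the entity field definitions from TenantMetaDataStore
--    (this drives which extension rows are written in step 3)

-- 2. Insert the shared data into TenantDataStore
INSERT INTO Account (TenantId, RecordId, Name, Balance)
VALUES (1, 42, N'Contoso', 100.00);

-- 3. Insert the custom data, one extension row per custom field
INSERT INTO AccountExtension (RecordId, ExtensionId, Value)
VALUES (42, 1, N'Gold');
INSERT INTO AccountExtension (RecordId, ExtensionId, Value)
VALUES (42, 2, N'West');
```

Each insert of an entity thus costs one metadata read plus 1 + N writes, where N is the number of extension fields.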

The effect on Tx/sec and CPU usage

• Metadata (3,000 tenants and 10 fields or more) is causing memory pressure and, consequently, I/O activity
• As we add concurrent users, throughput increases in a healthy way
• Next steps: analyze how to improve metadata retrieval

(Chart callouts, at 5 extension fields: disk activity is relevant for inserts, shown for the data file and the log file)

• At 500,000 rows (100 tenants)
– Throughput increases with the number of concurrent users. Resources remain in acceptable levels.

• At 15,000,000 rows (3000 tenants)
– Minimal gain in throughput due to memory pressure and, consequently, intensive I/O activity (page file).
• Next Steps: Analyze how to improve metadata retrieval

• Analysis hints:
– “Writes” use sequential I/O against the log file, while “Reads” involve random I/O against the data file

Fixed Columns

A preset number of custom fields is added to every table you wish to allow tenants to extend.
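A sketch of the layout, with illustrative names; three generic custom columns stand in for the preset number:

```sql
-- Fixed columns: shared table with a preset set of generic custom
-- columns that each tenant maps to its own fields
CREATE TABLE AccountFixed (
    TenantId     INT NOT NULL,
    RecordId     INT NOT NULL,
    Name         NVARCHAR(100),
    Balance      MONEY,
    CustomField1 NVARCHAR(255) NULL,
    CustomField2 NVARCHAR(255) NULL,
    CustomField3 NVARCHAR(255) NULL
);
```

There is no extension table and no joins on retrieval, which is why this approach behaves like the 0-extension-fields case in the earlier charts.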

Testing Approach
• Testing performance for the fixed columns approach works against a single, fixed-schema table
• This is analogous to the extension table approach but without extension fields (no extension field table)

Fixed Columns

• The fixed columns approach scales better than the extension table, even with a large number of tenants
• Response time is not affected by concurrency

• Fixed Columns approach (analogous to 0 extension fields) scales better than extension table
– Extension table is CPU intensive when adding fields (98% vs. 35%)

• When adding concurrent users, we gain throughput while response time is not affected
• Memory usage remains the same

Bottom Line
• Developing multi-tenant architectures requires stressing the database to detect the glass ceiling: the number of customers you can handle in a single instance. This involves:
– Developing unit and load tests for each scenario
– Generating a massive set of simulated data
– Deploying the tests in a test rig

Introducing the Multi-tenant Database Performance Test Guide
