
What Features Are Disabled Between Compatibility Levels In SQL Server?


Posted on March 16, 2016 by Eric Cobb

Have you ever wondered what features may or may not be available when your database is
running in a different compatibility level than your SQL Server version? I recently came across a
situation where a database was on SQL Server 2014 (compatibility level 120), but the database
compatibility level was set to SQL Server 2012 (110). This led me to wonder: what features of
SQL Server 2014 are disabled in SQL Server 2012 compatibility mode? Certainly the new
cardinality estimator is unavailable, but what about in-memory OLTP? Or backup encryption?

Thankfully, Microsoft has all of that documented here: https://msdn.microsoft.com/en-us/library/bb510680.aspx. You can scroll down the page and see the differences between each of
the compatibility levels, all the way from SQL Server 2005 (compatibility level 90) to SQL
Server 2016 (compatibility level 130).

As it turns out, many of the new features of SQL Server 2014 are still available when your
database compatibility level is set to 110. So, even though you can’t take advantage of the new
cardinality estimator, you do still have access to cool features like in-memory OLTP and backup
encryption!
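
If you want to check where a database currently sits and move it, the compatibility level is visible in sys.databases and can be changed with ALTER DATABASE. A minimal sketch (the database name MyDatabase is just a placeholder):

-- Check the current compatibility level of every database on the instance
SELECT name, compatibility_level
FROM sys.databases;

-- Move a database up to the SQL Server 2014 level (120); 110 = 2012, 130 = 2016
ALTER DATABASE MyDatabase
SET COMPATIBILITY_LEVEL = 120;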

https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-database-transact-sql-compatibility-level?redirectedfrom=MSDN&view=sql-server-ver15

https://support.microsoft.com/en-us/help/3212023/performance-degradation-when-you-upgrade-from-database-compatibility-l

https://www.sqlservercentral.com/articles/sql-server-performance-issue-after-an-upgrade

https://stackoverflow.com/questions/4099453/how-do-i-find-out-what-license-has-been-applied-to-my-sql-server-installation

https://www.sqlshack.com/dba-guide-sql-server-performance-troubleshooting-part-1-problems-performance-metrics/

SQL Server performance issue after an upgrade


Kanishka Basak, 2019-12-16

Recently, I came across a problem where the client reported severe performance degradation of
their OLAP system. Most of the reports that were running were either timing out or returning
data only after a long time. The problem started right after the client had undergone an upgrade
that included the following:

1. A software change to a higher version of the product that my company develops.
2. Migration of the OLAP server to a new server with additional memory and CPU.
3. A version change of SQL Server from 2014 to 2016 (SP2-CU3).

Here are the specifications of the old and new servers:

Old Server

• OS: 2012 R2 Std
• System Model: ProLiant DL360 Gen9
• CPU: 28 (CPU E5-2690 v4 @ 2.60GHz)
• RAM: 768 GB

New Server

• OS: 2016 Std
• System Model: ProLiant DL360 Gen10
• CPU: 36 (Gold 6154 CPU @ 3.00GHz)
• RAM: 1024 GB

Investigating the Issue

I started the investigation by looking through some of the useful DMVs, like
sys.dm_exec_requests, sys.sysprocesses, etc. My initial review revealed pressure on tempdb:
almost all of the processes that were running slowly were waiting on a PAGELATCH wait type.
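
As a rough illustration (not the exact query that was run), a check along these lines surfaces the requests stuck on PAGELATCH waits and the page they are waiting for:

-- Requests currently waiting on PAGELATCH; a wait_resource starting with "2:"
-- points at tempdb (database_id 2), typically its allocation pages
SELECT r.session_id,
       r.wait_type,
       r.wait_time,
       r.wait_resource,
       DB_NAME(r.database_id) AS database_name
FROM sys.dm_exec_requests AS r
WHERE r.wait_type LIKE 'PAGELATCH%'
ORDER BY r.wait_time DESC;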

In fact, beyond that wait type, the fact that tempdb was unable to keep up with the transaction
pressure was also evident from the transaction volume hitting tempdb. See the images
below (Fig 1 and Fig 2).
Fig 1 - Database transactions

At peak, the transaction rate on tempdb was close to 4k/sec.

Fig 2 - tempdb allocation vs. the tempdb allocation, at that time, of some of the processes that
were running.
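
If you want to reproduce that kind of measurement yourself, one rough option is to sample the cumulative Transactions/sec counter for tempdb twice and compute the rate; a sketch (the 10-second window is arbitrary):

-- Sample tempdb's cumulative transaction counter twice, 10 seconds apart
DECLARE @t1 BIGINT, @t2 BIGINT;

SELECT @t1 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Transactions/sec'
  AND instance_name = 'tempdb';

WAITFOR DELAY '00:00:10';

SELECT @t2 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Transactions/sec'
  AND instance_name = 'tempdb';

SELECT (@t2 - @t1) / 10 AS tempdb_transactions_per_sec;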

I started checking whether the tempdb files were set up correctly in relation to the CPUs, and
found that there were only 8 tempdb data files on a 36-CPU server. This was one of the causes of
the slowness, and I immediately requested the OP DBAs to increase this to 18, following a
standard that my organization follows: we create data files equal to ½ of the total CPUs and add
more as needed thereafter.
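
A quick way to compare the tempdb data-file count against the CPUs SQL Server sees (a sketch, not the exact check that was run):

-- How many tempdb data files exist
SELECT COUNT(*) AS tempdb_data_files
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';

-- How many logical CPUs the instance sees
SELECT cpu_count
FROM sys.dm_os_sys_info;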
Though system performance improved, there were still complaints of slowness, as users had to
wait a fair amount of time to have their requests processed. Everything seemed normal, with
no major blocking, memory crunch, CPU spike, or disk latency. However, a closer look at the
CPU showed it ranging from only 40% to 50% (Fig 3), even during peak activity, whereas
under normal conditions an average utilization of 80%-85% is typical for this workload.

Fig 3 - CPU utilization

Looking at Task Manager, it was found that only around 50% of the CPUs were getting
threads (Fig 4).

Fig 4 - Task Manager view of CPUs

Some follow-up discussions with the hardware team revealed that SQL Server was unable to send
threads across all CPUs due to a non-core-based licensing policy. The problem was that every
SQL Server 2016 install had been done using the older Server+CAL license media, and that
edition is limited to using 20 cores on the host machine.
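
One way to spot this condition from T-SQL is to look at the edition string and at how many schedulers SQL Server can see but not use; a sketch:

-- The edition string shows whether this is a Server+CAL or a Core-based Licensing install
SELECT SERVERPROPERTY('Edition') AS edition,
       SERVERPROPERTY('ProductVersion') AS product_version;

-- Schedulers the engine cannot use show up as VISIBLE OFFLINE
SELECT [status], COUNT(*) AS scheduler_count
FROM sys.dm_os_schedulers
WHERE [status] IN ('VISIBLE ONLINE', 'VISIBLE OFFLINE')
GROUP BY [status];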

To fix this going forward, we needed to use core-based licensing for all new installs. To fix the
problem on existing servers, a downtime window was approved by the client, as the change
required a SQL Server restart.
An example of the change that was implemented:

Pre change:

Description: Microsoft SQL Server 2016
ProductName: SQL Server 2016
Type: RTM
Version: 13
SPLevel: 0
Installation Edition: Enterprise: Server+CAL

Post change:

Description: Microsoft SQL Server 2016
ProductName: SQL Server 2016
Type: RTM
Version: 13
SPLevel: 0
Installation Edition: Enterprise: Core-based Licensing

The above change resulted in threads being spread across multiple processors, and system
performance improved massively.

This finding also led to a correction of the overall CPU licensing policy and of the tempdb setup
performed during installation, which served as a proactive step toward heading off similar client
issues that could have come up at later dates.

Additionally, two of the client's stored procedures were identified as 'heavy hitters' in terms of
CPU and IO consumption, and efforts were put in to replace them with optimized versions that
had a better coding approach. The older versions of these stored procedures performed a very
expensive operation in a single transaction; this was broken down into smaller batches, and
CTEs were replaced with temp tables.
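
As a rough illustration of the reworked pattern (all object names here are hypothetical, not the client's actual code): an expensive intermediate result can be materialized once into a temp table instead of being re-evaluated as a CTE, and a large single-transaction purge can be broken into small batches:

-- Materialize the expensive intermediate result once and index it
SELECT CustomerId, SUM(Amount) AS TotalAmount
INTO #CustomerTotals
FROM dbo.Orders                       -- hypothetical table
GROUP BY CustomerId;

CREATE CLUSTERED INDEX IX_CustomerTotals ON #CustomerTotals (CustomerId);

-- Purge old rows in small batches instead of one huge transaction
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000)
    FROM dbo.OrderHistory             -- hypothetical table
    WHERE OrderDate < DATEADD(YEAR, -2, GETDATE());

    SET @rows = @@ROWCOUNT;
END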
https://docs.microsoft.com/en-us/sql/relational-databases/performance/cardinality-estimation-sql-server?view=sql-server-ver15

https://social.msdn.microsoft.com/Forums/sqlserver/en-US/6cbb342f-1fcc-4a41-929d-b46f564e7031/query-running-slow-in-2016-had-been-running-fast-in-2012?forum=transactsql

https://www.sqlconsulting.com/archives/big-performance-problems-with-the-cardinality-estimator/


https://support.microsoft.com/en-us/help/4522127/fix-poor-query-performance-due-to-low-cardinality-estimation-in-sql-se

https://www.perftuning.com/blog/sql-server-2014-cardinality-estimator/

https://www.brentozar.com/archive/2018/09/should-you-use-the-new-compatibility-modes-and-cardinality-estimator/

Should You Use the New Compatibility Modes and Cardinality Estimator?

Brent Ozar, September 12, 2018


For years, when you right-clicked on a database and clicked Properties, the “Compatibility Level”
dropdown was like that light switch in the hallway: you would flip it back and forth, and you
didn’t really understand what it was doing. Lights didn’t go on and off. So after flipping it back
and forth a few times, you developed a personal philosophy on how to handle that light switch –
either “always put it on the current version,” or maybe “leave it on whatever it is.”

Starting in SQL Server 2014, it matters.


When you flip the switch to “SQL Server 2014,” SQL Server uses a new Cardinality Estimator –
a different way of estimating how many rows are going to come back from our query’s
operations.

For example, when I run this query, I’ll get different estimates based on which compatibility
level I choose:

SELECT COUNT(*)
  FROM dbo.Users
  WHERE DisplayName LIKE 'Jon Skeet'
    AND Reputation = 1;

When I set my compatibility level at 110, the SQL Server 2012 one, I get an estimated 239 rows:
Compatibility level 2012, the old CE

Whereas compatibility level 120, SQL Server 2014, guesses 1.5 rows:

Compatibility 2014, the newer CE

In this case, SQL Server 2014’s estimate is way better – and this can have huge implications on
more complex queries. The more accurate its estimates can be, the better query plans it can build
– choosing seeks vs scans, which indexes to use, which tables to process first, how much
memory to allocate, how many cores to use, you name it.

You read that, and you make a bad plan.


You read that the new Cardinality Estimator does a better job of estimating, so you put it to the
test. You take your worst 10-20 queries, and you test them against the new CE. They go faster,
and you think, “Awesome, we’ll go with the new compatibility level as soon as we go live!”

So you switch the compat level…and your server falls over.

It goes to 100% CPU usage, and people scream in the hallways, cursing your name. See, the
problem is that you only tested the bad queries: you didn’t test your good queries to see if they
would get worse.

Instead, here’s how to tackle an upgrade.

• Go live with the compat level you’re using today
• Wait out the blame game (because anytime you change anything in the infrastructure, people will blame your changes for something that was already broken)
• Wait for the complaints to stabilize, like a week or two or three
• On a weekend, when no one is looking, flip the database into the newest compat level
• If CPU goes straight to 100%, flip it back, and go about your business
• Otherwise, wait an hour, and then run sp_BlitzCache. Capture the plans for your most resource-intensive queries.
• Flip the compat level back to the previous one (a sketch of these steps follows this list)

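A sketch of that weekend flip (the database name is a placeholder, and sp_BlitzCache is the free First Responder Kit procedure, assumed to already be installed):

-- Flip the database to the newest compat level (120 = SQL Server 2014 shown here)
ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 120;

-- An hour later, capture the most resource-intensive plans
EXEC dbo.sp_BlitzCache @SortOrder = 'cpu', @Top = 10;
EXEC dbo.sp_BlitzCache @SortOrder = 'reads', @Top = 10;

-- Flip the compat level back to the previous one
ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 110;
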
On Monday morning, when you’re sober and ready, you compare those 10 resource-intensive
plans to the plans they’re getting in production today, with the older compat level. You research
the differences, understand whether they would kill you during peak loads, and start prepping for
how you can make those queries go faster under the new CE.

You read Joe Sack’s white paper about the new CE, you watch Dave Ballantyne’s sessions about
it, and you figure out what query or index changes will give you the most bang for the buck.
Maybe you even resort to using hints in your queries to get the CE you want. You open support
cases with Microsoft for instances where you believe the new CE is making a bad decision, and
it’s worth the $500 to you to get a better query plan built into the optimizer itself.
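
As an example of the hint route: on SQL Server 2016 SP1 and later you can keep the database on the new compatibility level but pin an individual query to the old CE (on 2014, OPTION (QUERYTRACEON 9481) does roughly the same thing). A sketch against the earlier Users query:

SELECT COUNT(*)
  FROM dbo.Users
  WHERE DisplayName LIKE 'Jon Skeet'
    AND Reputation = 1
OPTION (USE HINT('FORCE_LEGACY_CARDINALITY_ESTIMATION'));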

Or maybe…

Just maybe…

You come to the realization that the old CE is working well enough for you as it is, that your
developers are overworked already, and that you can just live with the old compatibility level
today. After all, the old compatibility level is still in the SQL Server you’re using. Yes, at some
point in the future you’re going to have to move to a newer compatibility level, but here’s the
great part: Microsoft is releasing fixes all the time, adding better query plans in each cumulative
update.

For some shops, the new CE’s improvements to their worst queries are worth the performance
tuning efforts to fix their formerly-bad queries. It’s totally up to you how you want to handle the
tradeoff – but sometimes, you have to pay for the new CE in the form of performance tuning
queries that used to be fast.
