
Bedford Seminars

BedfordSeminars.com

Spring 2020 CAS Exam 5 Study Manual

James Bedford, FCAS, MAAA


Jim@BedfordSeminars.com
Table of Contents

Introduction ................................................................................................................................. 5
Studying Tips ............................................................................................................................... 7
Reserving Material
Chapter 1 – Overview ............................................................................................................. 11
Chapter 2 – Claims Process .................................................................................................. 15
Introduction to Part 2 ............................................................................................................. 19
Chapter 3 – Data ....................................................................................................................... 21
Chapter 4 – Meeting with Management .......................................................................... 27
Chapter 5 – Development Triangle ................................................................................... 29
Chapter 6 – Diagnostics ......................................................................................................... 35
Introduction to Part 3 ............................................................................................................. 43
Chapter 7 – Development Technique ............................................................................... 45
Chapter 8 – Expected Claims Technique ......................................................................... 55
Chapter 9 – Bornhuetter-Ferguson ................................................................................... 59
Chapters 7-9 Wrap Up ............................................................................................................ 63
Chapter 10 – Cape Cod ........................................................................................................... 67
Chapter 11 – Frequency Severity ....................................................................................... 71
Chapter 12 – Case Outstanding Development .............................................................. 81
Chapter 13 – Berquist-Sherman ......................................................................................... 85
Chapter 14 – Recoveries ........................................................................................................ 95
Chapter 15 – Evaluation of Techniques ........................................................................... 99
Introduction to Part 4 .......................................................................................................... 101
Chapter 16 – Estimating ALAE ......................................................................................... 103
Chapter 17 – Estimating ULAE ......................................................................................... 105
Ratemaking Material
Ratemaking Notation ........................................................................................................... 115
Chapter 1 – Introduction .................................................................................................... 117
Chapter 3 – Ratemaking Data ........................................................................................... 123
Chapter 4 – Exposures ........................................................................................................ 129
Chapter 5 – Premium ........................................................................................................... 137
Chapter 6 – Loss and LAE .................................................................................................. 153
Chapter 7 – Other Expenses and Profit......................................................................... 169
Chapter 8 - Overall Indication .......................................................................................... 177
Chapter 9 - Traditional Risk Classification .................................................................. 181
Chapter 10 - Multivariate Classification ....................................................................... 193
Chapter 11 - Special Classification ................................................................................. 201
Chapter 12 – Credibility...................................................................................................... 215
Chapter 13 - Other Considerations ................................................................................ 224
Chapter 14 – Implementation .......................................................................................... 229
Chapter 15 - Commercial Lines Rating Mechanisms ............................................... 241
Chapter 16 - Claims-Made Ratemaking ........................................................................ 251
Introduction
Overview
Like many students, I found that much of the exam syllabus materials and available study guides/seminars left
a lot to be desired. Explanations often lacked intuition, resulting in increased study time, memorization, and
sub-par understanding. After completing my Fellowship, I decided to build the exam study material I wish I had
when studying for exams.

Rather than simply regurgitating the “what” or “how” of the material, I teach the student “why”. As a current
practitioner I have a deep understanding of the material and extensive experience explaining complex topics
in an easy-to-understand and intuitive way. To successfully pass exams at a comfortable rate in today’s
environment you need to fully understand the material on a deep and intuitive level.

To enhance my manual, I have taken what I have learned from reading the research about how we best learn
and retain information, including optimal study techniques for maximum understanding and retention, along
with my own experiences, and incorporated it into the design of the study manual, practice problems, and
review (e.g., spaced repetition, retrieval practice, interleaving, among others, as well as study habits to avoid).
In addition to the design and order, the study manual includes tips for effective studying and exam taking—
along with many multi-stage diagrams that visually show how concepts tie together, assisting in the
understanding of the bigger picture.

Original Practice Problems


The original practice problems are designed to maximize understanding and learning; they are not necessarily
the same length, difficulty, or format as actual exam questions. They are meant to be done closed book, as they
are designed to practice your knowledge retrieval, which will increase your retention of the knowledge. Look
to the CAS exam questions to gauge your expectations for exam questions.

Excel Exhibits
I have provided working Excel exhibits containing all end of chapter exhibits and appendices from the text.
These can be reviewed after reading the text or manual. Having working formulas in Excel makes following the
exhibits much easier than reading exhibits in the text. Please note that there are often small differences
between these exhibits and the values in the textbook due to rounding.

Condensed/Summary Notes
These notes contain chapter overviews summarized down to one page or less. They are good for intermediate
review of the main topics and concepts of each chapter.

Advanced Problem Set


The advanced problem sets are often more involved than you would expect to see on the actual exam. These
problems have been designed to maximize understanding and learning, and are meant to be completed after
working through the entire manual and original practice problems. Also, my answers to these questions are
often more in depth than I would recommend for an answer on the actual exam.

Flashcards
Flashcards are provided and I highly recommend you use them in your review. Optimal use of flashcards is
covered in the study tips section.

Past CAS Exam Problems


CAS questions from the current exam format (2011-present) have been included at the end of each chapter,
with the exception of the past three exam sittings. These problems have been excluded from the end of chapter
problems so you can take them as practice exams toward the end of studying. I think the insights in the
examiner’s reports are extremely valuable, so I have used these reports as the solutions to the problems
(with revisions and additions as needed).

Exam problems very often cover several chapters at the same time— in these situations I have assigned the
problem to the last chapter covered in the problem so you have all of the information you need to solve it. In
addition, this helps with spaced repetition (covered in study tips). I erred on the side of including too many
problems in Chapter 15 of the reserving material as a catch all for compare and contrast type problems— do
not let this cause you to underestimate the importance of the earlier chapters since they are critical to answer
these questions.

Sample Handwritten Solutions


While the examiner’s reports provide valuable commentary on solutions and common mistakes, the solutions
provided are often much longer and more detailed than required to receive full credit. As such, I have included
handwritten solutions to illustrate my recommended calculation detail and length of answer under exam time
constraints. For that reason, I tried to write them quickly as though under exam conditions— poor handwriting
and scratched out changes included. On that topic, don't worry too much about your handwriting— if it is even
mildly legible, they will figure it out (I have seen people with atrocious handwriting pass exams without issue).

On pen/paper exams students often struggle with time since many write way too much, or way too
nicely. I use bullet points a lot of the time— you aren't going to pick up any points for forming a proper
sentence or paragraph, as long as you demonstrate your knowledge. I used pen, and would suggest you do the
same as it keeps you from wasting time erasing— simply cross out anything you do not want graded. You will
also notice I use a lot of abbreviations which are fine to use on the exam (within reason). The samples are
solutions for the reserving problems on three earlier exams. Keep in mind the CAS often accepts more than
one answer, so my solutions are not necessarily the only correct solution (see the examiner’s reports).

Practice Exams
I have created one original practice exam. You should take this, along with the four most recent exams that I
held out of the end of chapter problems, under exam conditions closer to the exam date once you have finished
your second pass through the material (approximately a month out from the exam date). My original practice
exam is on the long side so don’t get too discouraged if you struggle with time on it.

Printing
I have inserted blank pages throughout the manual to ensure the first page of each section prints on the front
of a page when printed double sided. As such, I suggest you print double sided (if you choose to print), as it will
make the physical manual more manageable, and avoid printing blank pages. In addition, many of my examples
and exhibits include color coding, so I very highly recommend you print in color. The practice problem sets are
extremely extensive and I do not recommend printing them. I have included bookmarks within the PDFs to
make electronic navigation easier.

About the Author


I am a practicing Actuary with a leading global consulting firm. Through my experience in consulting, and
working for primary insurers before that, I have gained extensive knowledge in ratemaking and reserving. I am
a Fellow of the Casualty Actuarial Society, and Member of the American Academy of Actuaries.

Errata
Please send errata to Jim@BedfordSeminars.com. Known errata can be found at bedfordseminars.com/errata

Studying Tips:

Research on learning shows three of the most effective methods for learning and retaining information are
spaced repetition, retrieval practice, and interleaving. If you are able to incorporate these concepts into your
studying, you will undoubtedly make enormous gains in efficiency and retention of the material.

1) Spaced repetition:

How many times have you studied something, only to move on to later material, then have a hard time
remembering the initial topic once you have returned several weeks or months later? This is a common
problem when learning something new and is an especially large problem for CAS exams where the amount
of material you need to know is enormous.

Spaced repetition is the concept of increasing the intervals of time between review of material to take
advantage of the spacing effect (learning is greater when spread out over time, as opposed to spending the
same amount of time in condensed sessions). Doing some problems on a topic initially, then returning to
do some more, say a week later, followed by a month later will drastically increase your learning and
retention at the end as compared to doing all of the problems on the topic in bulk shortly after learning the
material and not returning to the topic for a long period.

Flashcards are also very helpful in this regard. I personally preferred physical flashcards, but there are
many websites and apps, such as Anki, that are built on the concept of spaced repetition. I would start using
flashcards early, and add in flashcards for additional chapters as I progressed. I would keep the flashcards
in groups, for example splitting the ratemaking and reserving material each into thirds; as time passed I
would increase the time between reviewing the earlier cards as I focused more on the recent cards. This
had the effect of cycling through the material while incorporating spaced repetition into the review (i.e.,
the lag between learning and initial review was short, but became longer for subsequent reviews). This
allowed me to keep everything fresh in my mind as I learned more material.

As I studied, I would make sure to always cycle back to do problems/flashcards on prior material
while learning and progressing through the new material. It was like cycling through spaced repetition on
a topic while adding more material as the need for repetition on the old material declined. I have tried to
incorporate this concept into the layout of my manual and practice problems, but it is something you should
keep in mind as you decide on how to layout your studying and flashcard review.

2) Retrieval Practice:

How often have you been stuck on a practice problem, glanced at the answer, thought to yourself
“oh yeah, I knew that” and moved on? If you’re anything like me (at least early in my exam journey) you
know that process all too well—unfortunately it is terrible for learning and retention. You learn much
better when you wrestle with a new problem before being shown the solution. Looking at the answer too
soon only fuels the illusion of knowing – it is easy to feel as though you really understand something after
you have just read it. Being able to retrieve the information unassisted is the true test of knowledge and
reinforces your understanding.

You obviously aren’t going to be able to glance at the answer on the real exam, so you are only cheating
yourself by doing so too early while practicing. Of course, you won’t know all the answers while studying,
and learning how to solve those you don’t know is valuable—the point is to make sure you put forth your
best effort before looking at the answer. When using flashcards, don’t cheat yourself by looking at the back
too soon.


Struggling to retrieve information helps in learning, so it is very important to work as hard as you can to
solve a problem on your own before looking at the answer—by giving up too early you are only fooling
yourself, and it will mask the areas where you lack complete understanding. Difficulties in learning and
retrieval only make learning stronger—when learning is easy, it is often superficial and soon forgotten.

3) Interleaving:

Interleaving is the practice of reviewing/practicing related skills in parallel, rather than separately in bulk.
Interleaving your review and practice greatly helps expand both your knowledge of a given topic and your
understanding of how topics relate to each other. This is particularly important for this exam, as so much of the
material is closely related to other topics, even between the ratemaking and reserving sections. Interleaving will also
help you with knowledge retrieval come exam day. Studies have shown students who interleave their
review and practice problems achieve significantly greater results than those students who review and
practice the material in separate blocks.

Interleaving and spaced repetition work well together—you can cycle through initial material in longer
intervals, at the same time you are incorporating new material. I strongly suggest you do some of the
practice problems for a given chapter initially, then return to do more after doing the same for the next
chapter, then return later to finish them. Spacing them out (in increasing intervals) and interleaving them
with new topics will greatly increase your retention of the material.

It is also important for you to interleave topics in your flashcard review as well, as opposed to reviewing
cards in bulk by chapter.

Below are some additional tips for studying from the literature on the topic (many are new, some are ancillary
to the primary three):

1) Break up your study sessions: it is better to study in shorter sessions than longer ones (assuming you study the
same amount in total). For example, if you want to study four hours on a Saturday, your time will be much
better spent studying in four one-hour blocks than a single four-hour block (the exception being practice
exams as you want to take those under exam conditions).

2) Vary your study environment: your mind can associate information with the environment in which you study it, and
you won’t be taking the exam in the same place you are studying, so it is best to vary your studying
environment from time to time.

3) Get enough sleep: getting enough sleep is critical to retaining new information as your mind organizes itself
during sleep.

4) Incorporate elaboration: use your own words to explain concepts and make connections to prior
knowledge.

5) Use free recall (“learning paragraphs”): write what you can remember about what you just read (i.e., “brain
dumps”).

6) Use mnemonic devices to assist in remembering lists or arbitrary information.

7) Don’t waste time “sliding words through your mind”; it may feel like you are studying and will help rack up
the hours, but it will not prove beneficial to retaining or understanding the material.

8) Write your own questions: I found it extremely beneficial when learning new material to try to think of
possible exam questions that could be asked, or what I would ask if writing questions for the exam. Along
those same lines, when studying always ask yourself “why” and do your best to answer that question.


9) Be able to explain the concept to your boss or coworker: if you understand the material at a level you would
be comfortable explaining to someone else, then you should be in pretty good shape for the exam.

10) Avoid massed practice: doing all of the problems on a topic in bulk shortly after learning it is not nearly as
effective as spreading those problems out over time, and interleaving them with other topics. Massed
practice is a very common practice among actuarial students, and it is very bad for retaining information.
Doing end of chapter problems in smaller blocks, mixed with other chapters as well, will give you a large
advantage in learning the material.

11) Re-reading and highlighting have minimal, if any, benefits on learning. Both lead to the illusion of knowing
(illusion of fluency). Your time is much better spent self-quizzing, forcing yourself to retrieve the
information from memory whenever possible.

If you would like more information I would highly recommend the following books: “Make It Stick: The Science
of Successful Learning” and “How We Learn: The Surprising Truth About When, Where, and Why It Happens”.

Exam Taking Tips:

1) Use the 15-minute reading period wisely: make a plan for order of attack. Once you have finished
reading don’t sit around waiting, instead start thinking about what you are going to write, or how you
are going to solve the first problem you are going to attempt. I always preferred to start with problem
#1 and progress numerically, skipping problems and returning later if I got stuck. I don’t know if there
is an optimal plan of attack (e.g., easy problems first, hard problems first, etc.), but if there was a
problem I knew was going to take an especially long time or be especially difficult I would always save
it for last. You can also use the reading time to identify problems that have an especially high or low
points-per-difficulty ratio (or points per length/time), as it is extremely important to pick up all of the easy points.

2) Be aware of the time. It is easy to get lost “in the zone” and lose track of time. Time can seem to move
especially quickly in the exam setting. It is important to look up at the clock from time to time to ensure
you are moving at an appropriate pace.

3) Squeeze out points from all problems, even if you cannot solve them. Write formulas, or write how you
would solve the problem in general steps if nothing else (useful if you are running out of time). If stuck
on an early piece of the problem that is used later, write that you are assuming an answer and move
on using that assumption (you will only lose points for the first part of the problem).

4) Use bullets and concise answers whenever possible. You are not going to gain points for length or
quality of sentences. The important point is demonstrating your understanding of the concept.

5) In general, write out formulas—if in doubt do so (there are exceptions). If you are using the same
formula or calculation many times in a problem (e.g., down a row, or within a triangle), show the
formula or calculation once then simply show the result for the rest (i.e., you don’t need to write out
the same calculation long hand several times).

6) Don’t contradict yourself— meaning don’t hedge your bets if you are not sure of an answer (this can be a
common problem in written answers when you are not completely sure of your answer).

7) Do your best to stay calm—doing many practice exams under exam conditions, and in various
environments, will help with this.

8) Don’t waste time if you get stuck. Time is your most valuable asset during the exam, so don’t waste too
much of it trying to pick up a point or two on a difficult problem when you could instead use that time
to pick up the easier points elsewhere. Don’t be afraid to move on and come back later if you are stuck.


9) Know where the bathroom is. I would almost always take one bathroom break during the exam, but I
would also use that time to clear my mind, or think through a question I was struggling with. Taking a
short break can be beneficial.

10) Do your best to focus on the question at hand and not worry about the one you skipped or struggled
with—the answer is more likely to pop into your mind when you are thinking about something else, or once
you have stopped thinking about it and returned to it later.

11) Don’t waste your time erasing, simply cross out anything you don’t want counted. For this reason, I
used pen on the exam— so I wasn’t tempted to erase. Nothing you cross out will be graded. Pen also
shows up better when copied (the CAS will scan your answer sheets to send to graders) and writes
faster.

If this is your first upper level CAS exam, there is a good sticky post at the top of the Exam 5 forum on the
Actuarial Outpost covering what to expect and what you should plan to bring on exam day.


Estimating Unpaid Claims Using Basic Techniques

Chapter 1 – Overview
Introduction

Insurance is fundamentally different than most products because insurers do not know the true cost of the
product they sell until many years after it is sold (or in some cases decades). Insurance is a promise to pay the
policyholder, or injured party on behalf of the policyholder, in the event a covered event occurs. Despite the
ultimate value of these claims not being known for many years, insurance companies must report their
financial results, including unpaid claim and claims expense liability estimates. It is the actuary’s job to estimate
these unpaid claims and expenses, and that is the focus of this text. In addition to producing accurate financial
reports, it is important for insurance companies to accurately estimate unpaid claims and expenses so they can
hold adequate reserves to pay claimants who purchased their insurance.

Additionally, the accurate estimation of unpaid claims is important to:

1) Internal managers: accurate estimates of unpaid claims are crucial for proper pricing, underwriting,
strategic, and financial decision making. Ultimate loss estimates are used in
ratemaking, so understated ultimate losses would lead to insufficient rate levels (and vice-versa).
Inaccurate rates can cause significant issues for insurers— insolvency if rates are insufficient
(especially if growing) or loss of market share if rates are redundant. Inaccurate estimates can also
affect decisions on where to increase/decrease business written, reinsurance purchasing decisions,
and capital management.
2) Investors: financial statements are relied upon by investors in making their investment decisions.
3) Insurance Regulators: rely on financial statements in making regulatory decisions. If reserves are
understated, masking the true financial position, the regulator may be too late in reacting to a
distressed insurer.

Additionally, state laws require insurers to maintain reserves equal to the estimate of unpaid claims and claim
adjustment expenses for all claims incurred to date, whether reported or not.

Beginning in 1990 the National Association of Insurance Commissioners (NAIC) required most P&C insurers to
obtain a Statement of Actuarial Opinion regarding the reasonableness of carried loss and loss adjustment
expense reserves shown in the statutory annual statement.

We will use paid and reported claim data to date to predict future paid and reported claims. This text will cover
many techniques to accomplish this— the methods will vary from very basic to somewhat complex. The
methods will use historic data, with varying adjustments (and in some cases no adjustments), to predict future
claims.

The CAS Statement of Principles Regarding Property and Casualty Unpaid Claims Estimates provides the
following three principles for the evaluation, review, and estimation of property and casualty related unpaid
claims.

Principle 1: An unpaid claims estimate for a defined group of claims is reasonable if it is derived from
reasonable assumptions and appropriate methods or models and the reasonableness of the estimate has been
validated by appropriate indicators or tests, all evaluated consistent with the review date and valuation date
in the context of the intended measure.

Principle 2: An unpaid claims estimate for a defined group of claims relative to a particular accounting date is
inherently uncertain. This uncertainty stems from a dependence of the amount of future claims payments on
facts and circumstances that are unknown when the estimate is made. The predictability of future claims
payments may be subject to additional limitations, such as the availability and usefulness of data for making an
unpaid claim estimate. The uncertainty inherent in the unpaid claims estimate implies that a range of estimates
can be reasonable.


Principle 3: The actual amounts that will be paid for the defined group of claims likely will differ from the
estimated future payments implied by a reasonable unpaid claims estimate. The actual future payments can be
known with certainty only when all payments for such claims have been made.

Key Terminology

The unpaid claim estimate can be broken down into five components:
1) Case outstanding (on known claims): this reserve is set by the claims department, third party
adjustors, or independent adjustors for known and reported claims only (the future expected
payments for claims already reported; e.g., a claim has been reported, we have made a partial payment
of $1,500 but expect to pay out an additional $3,500 in the future—the $3,500 is the case reserve).
2) IBNER: a provision for future development on known (case) reserves (incurred but not enough
reported; e.g., the claims adjuster expects $3,500 in future payments, but the actuary expects an
additional $250 on top of the case reserve).
3) Reopen: a provision for expected costs due to the reopening of claims (e.g., closed claims will have case
reserves of $0, but there may be an amount greater than $0 we expect to pay out in the future due to
claims re-opening).
4) IBNYR: provision for claims incurred but not yet reported (this is the narrow definition of IBNR,
or "pure IBNR"). IBNYR comes about due the lag between a covered event occurring, and it being
reported to the company— in some lines of business this time lag can be very long. Some examples
include:
i. A worker is injured on the job but does not report the injury immediately, or the injury
becomes worse over time before the worker decides to report it.
ii. A storm hits a home, but the homeowner first takes an inventory of the damage and tends to
immediate needs before filing the claim.
iii. A surgeon makes a mistake during surgery, but the error and its effects are not discovered for
years, thus not reported until many years after the covered event (the surgeon’s error).
5) In transit: a provision for claims reported, but not yet recorded within the database/system (e.g., the
claimant has reported the claim, but the claim has not been processed and included in the data used
by the actuary).

Actuaries often refer to the sum of items 2-5 as the broad definition of IBNR. The broad definition of IBNR is
what we will use going forward— it is the difference between the estimated ultimate claims and the reported
claims to date. The five components of unpaid claims, along with the paid claims, make up ultimate claims, and
are shown in the following diagram:
[Diagram: Ultimate = Paid + Case + IBNER + Reopened + IBNYR + In transit, where
Paid + Case = Reported, Case + (Broad) IBNR = Unpaid, and
(Broad) IBNR = IBNER + Reopened + IBNYR + In transit.]

The above examples examine individual claims, but in general we will examine groups of claims, so while a
given claim may have no case reserve or IBNR, the group of claims may. The following chapters will examine
aggregating individual claims into claim triangles containing information from a cohort of claims for analysis.


It is important to get comfortable with the various pieces of ultimate claims and be able to move quickly
between them:
Reported Claims = Paid + Case

Unpaid Claims = Case + IBNR (Broad)

Ultimate Claims = Reported Claims + IBNR (Broad) = Paid Claims + Unpaid Claims

The difference between Unpaid Claims and IBNR is that Case Reserves are included in Unpaid Claims.

Additionally, using the broad definition of IBNR, we can visualize the pieces of ultimate claims as:

[Diagram: a bar representing Ultimate split into Paid | Case | IBNR, with Paid + Case bracketed as Reported
and Case + IBNR bracketed as Unpaid.]
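
To make these relationships concrete, below is a minimal Python sketch (the dollar figures are invented for illustration and are not from the text) that builds each piece from the others:

```python
# Minimal sketch of the claim amount relationships (illustrative figures only).

paid = 60_000        # cumulative paid claims to date
case = 25_000        # case outstanding on known claims
ibnr_broad = 15_000  # broad IBNR: IBNER + reopened + IBNYR + in transit

reported = paid + case            # Reported = Paid + Case
unpaid = case + ibnr_broad        # Unpaid = Case + IBNR (broad)
ultimate = reported + ibnr_broad  # Ultimate = Reported + IBNR (broad)

# Ultimate can equivalently be built as Paid + Unpaid.
assert ultimate == paid + unpaid

print(f"Reported: {reported:,}")  # 85,000
print(f"Unpaid:   {unpaid:,}")    # 40,000
print(f"Ultimate: {ultimate:,}")  # 100,000
```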

“Incurred losses” is terminology commonly used by actuaries to mean reported losses (paid + case), but this
can cause confusion because in most financial reporting “incurred losses” includes IBNR (IBNR is, after all,
incurred but not reported), while in everyday actuarial language incurred losses are often not meant to include IBNR.
Some actuaries use “case incurred” or “incurred on reported” to be more specific that they are referring to paid
+ case, excluding IBNR. To avoid confusion, the text uses the term “reported” rather than “incurred” to refer to
paid claims + case reserves.

The text uses “claims” rather than “losses” to describe the dollar amount of loss. This is because “claims” is used
more frequently in the actuarial standards of practice in the U.S. and Canada and in financial reporting. This
can be confusing because in practice most companies and actuaries refer to dollar amounts as “losses” rather
than “claims”. Just remember, this text (and manual) use “claims” to represent the dollar amount of loss, and
“claim counts” to represent the number of claims.

Claims related expenses (total adjustment expenses) are the sum of allocated loss adjustment expense (ALAE)
and unallocated loss adjustment expense (ULAE). ALAE includes expenses that can be allocated to individual
claims, such as legal fees paid to outside counsel hired to defend a specific claim. ULAE includes expenses that
cannot be allocated to individual claims, such as the salaries of claims department personnel.

ASOP 43: Property Casualty Unpaid Claims Estimates

This actuarial standard of practice (ASOP) provides guidance to actuaries when performing professional
services relating to the estimation of loss and loss adjustment expense for unpaid claims for P&C coverages.

Purpose or Use of the Unpaid Claim Estimate— The actuary should identify the intended purpose or use of the
unpaid claim estimate. Potential purposes or uses of unpaid claim estimates include, but are not limited to,
establishing liability estimates for external financial reporting, internal management reporting, and various
special purpose uses such as appraisal work and scenario analyses.

The actuary should identify the intended measure of the unpaid claim estimate. Examples of various types of
measures for the unpaid claim estimate include, but are not limited to, high estimate, low estimate, median,
mean, mode, actuarial central estimate, mean plus risk margin, actuarial central estimate plus risk margin, or
specified percentile. The actuarial central estimate represents an expected value over the range of reasonably
possible outcomes. Such range of reasonably possible outcomes may not include all conceivable outcomes, as,
for example, it would not include conceivable extreme events where the contribution of such events to an
expected value is not reliably estimable. An actuarial central estimate may or may not be the result of the use
of a probability distribution or a statistical analysis. This description is intended to clarify the concept rather
than assign a precise statistical measure, as commonly used actuarial methods typically do not result in a
statistical mean. The terms “best estimate” and “actuarial estimate” are not sufficient identification of the
intended measure, as they describe the source or quality of the estimate but not the objective of the estimate.

The actuary should also identify whether the unpaid claim estimate is to be gross or net of specified
recoverables and the claims to be covered by the unpaid claim estimate (for example, type of loss, line of
business, year, and state).

Constraints on the Unpaid Claim Estimate Analysis— Sometimes constraints exist in the performance of an
actuarial analysis, such as those due to limited data, staff, time or other resources. Where, in the actuary’s
professional judgment, the actuary believes that such constraints create a significant risk that a more in-depth
analysis would produce a materially different result, the actuary should notify the principal of that risk and
communicate the constraints on the analysis to the principal.

Materiality— The actuary may choose to disregard items that, in the actuary’s professional judgment, are not
material to the unpaid claim estimate given the intended purpose and use.

Unpaid Claim Estimate Analysis—The actuary should consider factors associated with the unpaid claim
estimate analysis that, in the actuary’s professional judgment, are material and are reasonably foreseeable to
the actuary at the time of estimation. The actuary is not expected to become an expert in every aspect of
potential unpaid claims.

Methods and Models—The actuary should consider methods or models for estimating unpaid claims that, in
the actuary’s professional judgment, are appropriate. The actuary should consider the use of multiple methods
or models appropriate to the purpose, nature and scope of the assignment and the characteristics of the claims
unless, in the actuary’s professional judgment, reliance upon a single method or model is reasonable given the
circumstances. If for any material component of the unpaid claim estimate the actuary does not use multiple
methods or models, the actuary should disclose and discuss the rationale for this decision in the actuarial
communication.

Assumptions— The actuary should consider the reasonableness of the assumptions underlying each method
or model used. The actuary should consider the sensitivity of the unpaid claim estimates to reasonable
alternative assumptions. When the actuary determines that the use of reasonable alternative assumptions
would have a material effect on the unpaid claim estimates, the actuary should notify the principal and attempt
to discuss the anticipated effect of this sensitivity on the analysis with the principal. When the principal is
interested in the value of an unpaid claim estimate under a particular set of assumptions different from the
actuary’s assumptions, the actuary may provide the principal with the results based on such assumptions,
subject to appropriate disclosure.

Uncertainty— The actuary should consider the uncertainty associated with the unpaid claim estimate analysis.
The actuary should consider the purpose and use of the unpaid claim estimate in deciding whether or not to
measure this uncertainty.

Reasonableness—The actuary should assess the reasonableness of the unpaid claim estimate, using
appropriate indicators or tests that, in the actuary’s professional judgment, provide a validation that the unpaid
claim estimate is reasonable.

Presentation—The actuary may present the unpaid claim estimate in a variety of ways, such as a point estimate,
a range of estimates, a point estimate with a margin for adverse deviation, or a probability distribution of the
unpaid claim amount. The actuary should consider the intended purpose or use of the unpaid claim estimate
when deciding how to present the unpaid claim estimate.

In the case when the unpaid claim estimate is an update of a previous estimate, the actuary should disclose
changes in assumptions, procedures, methods or models that the actuary believes to have a material impact on
the unpaid claim estimate and the reasons for such changes to the extent known by the actuary. This standard
does not require the actuary to measure or quantify the impact of such changes.


Estimating Unpaid Claims Using Basic Techniques

Chapter 2 – The Claims Process


Introduction

It is important for the actuary to understand the entire claims process as we rely on numerous historical
valuations of claims to estimate unpaid claims. Case reserves are set by claims professionals, and make up less
than 50% of the unpaid claims as a whole in the U.S. insurance industry, but that percentage varies enormously
by line of business. Actuaries are responsible for estimating the IBNR (the broad definition from Chapter 1)
which makes up over 50% of the unpaid losses in the U.S. insurance industry.

Claims professionals (a.k.a. claims examiners or claims adjustors) can be either internal or external employees.
Large insurers often employ internal claims professionals, while small insurers and self-insurers generally hire
outside professionals called TPAs (third party adjustors). Additionally, an independent adjustor (IA) may be
hired to handle individual large claims, groups of claims, specific types of claims, claims in certain regions, or
when there is a large volume of claims (such as after a hurricane). Actuaries will evaluate paid claims and case
reserves (as set by the claims professionals) at numerous consecutive valuations (e.g., month end, quarter end,
or year-end) to estimate unpaid claims. Remember, paid claims + case reserves= reported claims.

The Life Cycle of a Claim

The life cycle of a claim begins when it is first reported. The adjustor’s first decision is whether or not the claim
is covered by the policy. They will determine this by reviewing the specifics of the claim and the policy. Items
reviewed to determine coverage include the effective dates of the policy, date of claim occurrence, terms and
conditions of the policy, policy exclusions, endorsements, limits and deductibles, reporting requirements, loss
mitigation requirements, and assignment of fault. If the adjustor concludes a liability exists, they will establish
an initial case reserve. Case reserves can be set by formula (e.g., U.S. workers compensation tabular value) or
judgmentally, based on the information known at the time of review (under such a scenario, case
reserve estimates will become more accurate as time passes and more information is known and available to
the adjuster).

It is important to know there are several different methods used by insurers to set case reserves, and the types
of claims expenses included in the case reserve. Insurers may set case reserves based on their best estimate of
the ultimate cost, the maximum value (policy limits), or on the advice of legal counsel (either the most likely
outcome [the mode] or the expected value of possible outcomes). The expenses included in case reserves may
also vary— case reserves may include claim amounts only, claims plus ALAE, claims and ALAE separately, and
may include (net of) or exclude (gross of) salvage and subrogation. Case reserves may be set net of reinsurance
(the company’s retained piece) or gross of reinsurance (the ground-up value). As you can see here, and as we
will stress later, it is incredibly important to understand what is included in the data you are using (both on the
job and during the exam).

Over the life of a claim, its case outstanding and paid amount will vary—partial claim payments can be made,
case outstanding can be adjusted for payments made or new information discovered. Payments for claim
defense litigation may be made, and the claim can close (and potentially re-open). It is not uncommon for claims
to be open for several years, or even decades in lines of business such as workers’ compensation.

The following claim example (from the text, but expanded here) illustrates that estimates of claim values can
change over time, and there may be many different types of payments associated with a claim.


Policy Eff. Date: December 1, 2007
Policy End Date:  November 30, 2008
Accident Date:    November 15, 2008

                                                                          (1)           (2)           (3)           (4)           (5)            (6)
                                                                      Incremental   Incremental   Incremental    Cumulative      Case       Reported Value
Transaction Date    Transaction Description                             Payment      Case O/S      Reported     Paid to Date  Outstanding  of Claim to Date
February 20, 2009   Case O/S of $15,000 established for claims              $0        $15,000       $15,000           $0        $15,000        $15,000
                    only when claim reported
April 1, 2009       Partial payment of $1,500 (w/ corresponding         $1,500        ($1,500)           $0       $1,500        $13,500        $15,000
                    reduction in case)
May 1, 2009         Expense payment of $500                                $500            $0          $500       $2,000        $13,500        $15,500
September 1, 2009   Case O/S increased to $30,000                           $0        $16,500       $16,500       $2,000        $30,000        $32,000
March 1, 2010       Claim settled with additional $24,000 payment      $24,000       ($30,000)      ($6,000)     $26,000             $0        $26,000
January 25, 2011    Claim reopens with Case O/S of $10,000 for              $0        $20,000       $20,000      $26,000        $20,000        $46,000
                    claim and $10,000 for defense costs
April 15, 2011      Partial payment of $5,000 for defense               $5,000        ($5,000)           $0      $31,000        $15,000        $46,000
                    litigation (w/ corresponding reduction in case)
September 1, 2011   Final claim payment for an additional $12,000      $12,000       ($10,000)       $2,000      $43,000         $5,000        $48,000
                    ($2,000 more than in case)
March 1, 2012       Final defense cost payment for an additional        $6,000        ($5,000)       $1,000      $49,000             $0        $49,000
                    $6,000

Column Notes:
(3) = (1) + (2)
(4) = (4) Prior + (1), or Sum of (1) to date
(5) = (5) Prior + (2), or Sum of (2) to date
(6) = (6) Prior + (3), or Sum of (3) to date, or (4) + (5)

Additional Dates:
Report Date:  February 20, 2009
Closing Date: March 1, 2010 (initial); March 1, 2012 (final)
Reopen Date:  January 25, 2011

The claim occurs on November 15, 2008, but is not reported until February 20, 2009 at which time an initial
case reserve is set. The claim occurred during the policy period, so it is covered by the policy, despite not
being reported until after the policy period is over. There are various other transactions that happen over the
life of the claim. Each is covered in the table above, with the incremental and cumulative effects on paid
claims, reported claims, and case outstanding shown. Please review each, and reference the column notes to
ensure you understand the evolution of the claim amounts over its life.

Often, we will want to know the value of claims at points of interest, for example the end of the year. Below
we summarize the paid claims, case outstanding, and reported claims from the example claim in the table above
at the end of each year.

Evaluated        Paid        Case          Reported
as of:           Claims      Outstanding   Claims
12/31/2008           $0            $0            $0
12/31/2009       $2,000       $30,000       $32,000
12/31/2010      $26,000            $0       $26,000
12/31/2011      $43,000        $5,000       $48,000
12/31/2012      $49,000            $0       $49,000

These values correspond to the cumulative values at the end of each year (which are also equal to the
cumulative values at the last transaction date of each year).

Incremental paid claims during a period are the sum of payments in the period, or equivalently the
cumulative paid claims at the end of the period minus cumulative the paid claims at the beginning of the
period.

Similarly, incremental changes in case O/S during a period are the sum of changes in case O/S in the period,
or the case O/S at the end of the period minus the case O/S at the beginning of the period.


Incremental reported claims within a period can be calculated in the following ways:

1) Reported claims at end of period – reported claims at beginning of period.


2) Incremental paid + case O/S at the end of period - case O/S at the beginning of period.
3) Incremental paid + incremental case O/S.
4) Sum of incremental reported claims during the period.

To illustrate these points, we calculate incremental payments, changes in case O/S, and incremental reported
claims during each calendar year (CY).

Calendar     Incremental     Incremental     Incremental
Year:        Payment         Case O/S        Reported
2009              $2,000         $30,000         $32,000
2010             $24,000        ($30,000)        ($6,000)
2011             $17,000          $5,000         $22,000
2012              $6,000         ($5,000)         $1,000
Total            $49,000              $0         $49,000

For example, the $2,000 incremental payment in CY 2009 is the sum of the $1,500 payment on April 1, 2009
and the $500 payment made on May 1, 2009. The CY 2010 incremental change in case outstanding is negative
$30,000 because the only transaction in 2010 was a payment of $24,000 closing out the claim for $6,000 less
than carried in case reserves ($30,000), resulting in the case reserve dropping from $30,000 to $0. The
$22,000 CY 2011 incremental reported claims can be calculated as the reported claims as of
12/31/2011 ($48,000) minus the reported claims as of 12/31/2010 ($26,000).
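
As a check on the arithmetic above, the following Python sketch rebuilds the year-end snapshots and calendar year incrementals directly from the individual transactions (the dates and amounts come from the example claim; the function and variable names are my own):

```python
from datetime import date

# (transaction date, incremental payment, incremental change in case O/S)
# taken from the example claim above
transactions = [
    (date(2009, 2, 20),      0,  15_000),
    (date(2009, 4, 1),   1_500,  -1_500),
    (date(2009, 5, 1),     500,       0),
    (date(2009, 9, 1),       0,  16_500),
    (date(2010, 3, 1),  24_000, -30_000),
    (date(2011, 1, 25),      0,  20_000),
    (date(2011, 4, 15),  5_000,  -5_000),
    (date(2011, 9, 1),  12_000, -10_000),
    (date(2012, 3, 1),   6_000,  -5_000),
]

def cumulative_as_of(eval_date):
    """Cumulative paid, case O/S, and reported claims as of an evaluation date."""
    paid = sum(pay for d, pay, _ in transactions if d <= eval_date)
    case = sum(chg for d, _, chg in transactions if d <= eval_date)
    return paid, case, paid + case  # reported = paid + case

# Year-end snapshots (reproduces the evaluation table above).
for year in range(2008, 2013):
    paid, case, rep = cumulative_as_of(date(year, 12, 31))
    print(f"12/31/{year}: paid={paid:>7,}  case O/S={case:>7,}  reported={rep:>7,}")

# Calendar year incrementals: differences between consecutive year-end snapshots.
for year in range(2009, 2013):
    p0, c0, r0 = cumulative_as_of(date(year - 1, 12, 31))
    p1, c1, r1 = cumulative_as_of(date(year, 12, 31))
    print(f"CY {year}: incr. paid={p1-p0:>7,}  incr. case O/S={c1-c0:>8,}  incr. reported={r1-r0:>7,}")
```

The printed values match the two tables above and illustrate that the incremental reported amount in a period equals the incremental payments plus the incremental change in case outstanding.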


Estimating Unpaid Claims Using Basic Techniques

Introduction to Part 2 - Information Gathering


Four phase approach to estimating unpaid claims:

1) Exploring the data to identify its key characteristics and possible anomalies. Balancing data to other
verified sources should be undertaken at this point.
2) Applying appropriate techniques for estimating unpaid claims.
3) Evaluating the conflicting results of the various methods used, with an attempt to reconcile or explain
the different outcomes. At this point, the projected ultimate amounts are evaluated in contexts outside
their original frame of analysis.
4) Monitoring projections of claim development over subsequent calendar periods. Deviations of actual
development from projected development of counts or amounts are one of the most useful diagnostic
tools in evaluating the accuracy of unpaid claim estimates.

The process of collecting and understanding the data, and other relevant information, is so critical that the
authors devote four chapters to the topic. The importance of this topic is often overlooked by students, but it is
one of the most important parts: with time, the mechanics of the loss estimation methods will become second
nature, while properly understanding the process and knowing where to make appropriate adjustments is where
the actuary provides value—and why the loss estimation process cannot be automated.

This section will be much more list-heavy than later sections. Lists will likely not be tested verbatim on the
exam, but understanding the lists will allow you to quickly formulate your answers to higher-level Bloom’s
questions.


Estimating Unpaid Claims Using Basic Techniques

Chapter 3 – Understanding the Types of Data Used in The Estimation of Unpaid Claims
Introduction

It is critical the actuary fully understands what is included in the available data, how data is grouped, the
sources of data, and various other data topics covered in this chapter.

Sources of Data

External data is used in the following situations:

1) Small company: data sets not credible.


2) Systems limitations limit availability of data.
3) Entering a new line of business or region.

External information can be very valuable as a benchmark, especially for the following:

1) Tail development factors.


2) Trend rates.
3) Expected claims ratios.

It is important for actuaries to understand the potential problems with using external data. External data
should be used to supplement internal data, or be used if internal data is unavailable, but in general internal
data is far preferred to external data. The actuary must carefully evaluate the relevance and value of external
data. External data may be misleading or irrelevant due to differences in:

1) Insurance products.
2) Case outstanding and settlement practices.
3) Insurer’s operations or coding.
4) Geographic areas.
5) Mix of business and product types.

Balancing the Homogeneity and Credibility of Data

Claims from different lines of business can behave very differently. Even within a line of business, sub-coverages
may behave very differently (e.g., personal auto bodily injury claims may be high severity and long-tailed while
property damage may be low severity and paid quickly). Likewise, claims from umbrella or excess insurance
policies will behave much differently than primary (first dollar) insurance.

When estimating unpaid claims the actuary should divide experience into groups that exhibit similar claims
characteristics (e.g., claim development patterns, settlement patterns, severity distributions). When deciding
on data groupings for an unpaid claims analysis the actuary should consider the following characteristics:

1) Volume of claim counts in the resulting groups.


2) Length of time to report the claim once an insured event has occurred (i.e., reporting patterns).
3) Ability to develop an appropriate case outstanding estimate from earliest report through the life of
the claim (i.e., accuracy of case reserves).
4) Length of time to settle the claim once it is reported (i.e., settlement, or payment patterns).
5) Likelihood of a claim reopening once it is settled.
6) Average settlement value (i.e., severity).

The goal is to divide the data into sufficiently homogenous (similar) groupings without compromising the
credibility of the data. There is a balance to be struck here; increasing homogeneity decreases credibility
(because groups will be smaller) and vice versa. If groupings become too homogenous there may be insufficient
data in each group and a detailed analysis may not produce a more accurate estimate of unpaid claims than an
analysis on more aggregated (less homogenous) data. The actuary must also consider the efficiency, additional
time, and resources required to analyze more refined groupings (since there will be more groups to analyze)—
the benefits may not justify the time and resources needed.

If the relative volume of business between two or more segments within a group is constant, less homogeneity is
needed. It is more appropriate to have less refined groupings if volumes between different groupings are
constant than if relative volumes are changing. Changing mix can have significant impacts on the unpaid claim
estimate, as we will discuss further in Part 3.

Types of Data Used by Actuaries

Claims and Claim Count Data

Some of the more common types of data actuaries rely on to estimate unpaid claims include:

1) Incremental paid claims.


2) Cumulative paid claims.
3) Paid claims on closed claims.
4) Paid claims on open claims.
5) Case outstanding.
6) Reported claims (i.e., sum of cumulative paid claims plus case outstanding).
7) Incremental reported claims.
8) Reported claim counts.
9) Claim counts on closed with payment.
10) Claim counts on closed with no payment.
11) Open claim counts.
12) Reopened claim counts.

Claim Related Expenses

The actuary needs to know if ALAE is included with claims in their data or tracked separately. ALAE and ULAE
do not have uniform definitions across insurers, and occasionally even within an insurer. ALAE is allocated loss
adjustment expense, while ULAE is unallocated, but what is included in each can, and does, vary by company.
In financial reporting, there are hard definitions for expense allocation—LAE is categorized into either Defense
and Cost Containment (DCC) or Adjusting and Other (A&O), which have uniform accounting standards for what
is included in each. Roughly speaking, ALAE and DCC are similar, while ULAE and A&O are similar, and are
sometimes used interchangeably.

Currencies

Ensure your triangles are on the same currency basis. If they are not, convert all to the currency of interest.

Large Claims

The presence of unusually large claims can distort some of the methods used for estimating unpaid claims. The
actuary may decide to exclude the large claims when performing the initial analysis, then add them back in
(sometimes with an additional IBNR provision) at the end.

Recoveries

One type of recovery is the deductible on a third-party insurance policy. In the event of a claim, payments to a
third party are made in full and the insurer will seek reimbursement of the deductible from the insured. Salvage
and subrogation are also very common recoveries. It is important for the actuary to know if the claims data is
net (after considering) or gross (before considering) of recoveries. The actuary also needs to know if recoveries
are tracked separately, and if expected recoveries are considered when setting case reserves.


Reinsurance

To properly estimate unpaid claims the actuary must understand the reinsurance program(s) in place. Changes
in reinsurance terms can affect the ability of old development patterns to predict future development patterns
(i.e., claims may develop more or less than before after a change to the reinsurance program has been made).
Actuaries often analyze claims on both a net (after considering reinsurance recoveries) and gross (before
consideration of reinsurance) basis to determine if the implied net and ceded unpaid claim estimates are
reasonable given the reinsurance contracts in place.

The actuary must also understand how ALAE is treated under excess of loss reinsurance policies. Three
treatments of ALAE under reinsurance policies are: ALAE included with claims in the limit, ALAE not covered,
ALAE covered pro rata with loss.

An example will clear up the differences. Let’s examine a reinsurance policy with a $10M attachment point
(amounts excess $10M are covered). If ALAE is included with claims, the claim and ALAE amounts are summed
then applied to the reinsurance policy limits. In the example below claims plus ALAE are $20M, so the $10M
excess the $10M attachment point is covered while the $10M below the attachment point is not. Reinsurance
policies can also exclude ALAE. In the second example the $5M claim amount that is excess the attachment
point is covered while the $10M below and $5M ALAE are not covered. If ALAE is covered pro rata with loss,
ALAE is covered in the same proportion as claims. In our example $5M in claims (of $15M ground up) is
covered, so 5/15 of the $5M ALAE (or $1.67M) is also covered.

ALAE Treatment        Reinsurance Attachment    Claims    ALAE    Covered by Reinsurance    Not Covered by Reinsurance
Included w/ claims    $10M                      $15M      $5M     $10M                      $10M
Excluded              $10M                      $15M      $5M     $5M                       $15M
Pro rata              $10M                      $15M      $5M     $6.67M                    $13.33M
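
The same arithmetic can be written as a small function. This is just a sketch of the logic above (the function name is my own, and for simplicity it ignores any upper limit on the reinsurance layer, which the example does not specify):

    def ceded_amount(claims, alae, attachment, treatment):
        # Amount recoverable from an excess of loss reinsurer under each ALAE treatment
        if treatment == "included with claims":
            # Claims and ALAE are summed before applying the attachment point
            return max(claims + alae - attachment, 0)
        if treatment == "excluded":
            # Only claims apply to the layer; ALAE is not recoverable
            return max(claims - attachment, 0)
        if treatment == "pro rata":
            # ALAE is recovered in the same proportion as the covered claims
            covered_claims = max(claims - attachment, 0)
            alae_share = alae * covered_claims / claims if claims > 0 else 0.0
            return covered_claims + alae_share

    # Reproducing the example above ($M): $10M attachment, $15M claims, $5M ALAE
    for t in ["included with claims", "excluded", "pro rata"]:
        print(t, round(ceded_amount(15, 5, 10, t), 2))   # 10, 5, 6.67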

Exposure Data

Several techniques require an exposure base. Earned premium is the most common exposure base used. If
possible, the earned premium should be adjusted to the current rate level (on-leveled). Other exposure bases
used by insurers and reinsurers include written premium, policies in force, number of vehicles insured (for
auto insurance), and payroll (for workers’ compensation).

Self-insured organizations generally do not charge premiums, so we must use an alternative exposure base
which is readily observable/available and closely relates to the potential for claims. Common examples
include payroll for workers’ compensation, number of vehicles for auto liability, property value for property
insurance, number of employees for crime insurance, and sales for general liability.

Terminology and Understanding the Data

It is crucial for the actuary to fully understand the data available and what is included, since terminology may
be different from company to company, by TPA, or even between departments within the same company.

Incurred claims is a prime example of a term that means different things to different people, and in different
situations. Many actuaries use incurred to mean paid claims + case. In financial reporting, incurred means
ultimate claims, or paid + case + IBNR; after all IBNR is incurred but not reported, so it is by definition incurred.
To avoid confusion, the text uses the term reported claims to mean paid + case (though “case incurred” or
“incurred on reported” are also correct terminology).

The text uses the term unpaid claims rather than reserves. It is very important for the actuary to understand
the basis of the unpaid claims estimate—are we estimating unpaid claims net or gross of reinsurance, net or
gross of recoveries? The actuary also needs to know if case reserves for claims and ALAE are set combined, or
separately.


When evaluating claim counts the actuary must understand how claims are counted. If a single accident results
in multiple claims (either multiple claimants, or payments under multiple coverages) will they be counted as a
single claim or multiple claims in the data? This may vary by company.

Data Verification

An analysis can only be as good as the data used— if the data is bad the analysis is going to be wrong regardless.
Actuaries are not required to audit the data they use, but there are several basic requirements they must follow
to ensure the data they use is reliable. Does the data tie out with published financial statements? Is it consistent
with data from prior analyses? Does it appear reasonable, or are there any questionable values? If the answer
to any of those questions causes concern, the actuary should look into the issues further and seek clarification
from those providing the data. It is also very important for the actuary to understand the definitions of each
data element, as well as what is included in each. As we have seen, terminology can vary from person to person,
so it is crucial to ensure we understand the basis of our data.

Key Dates

1) Policy effective dates: beginning and ending dates of the policy—the timeframe insured events are
covered.
2) Accident date: the date of covered accident or occurrence.
3) Report date: the date the claim was reported to the insurer. If there is a lag between the claim being
reported and entered into the system, the date it is entered is called the record date. If informed of an
event that may result in a claim prior to a claim actually being filed, this would be the notification date.
4) Accounting date: the date through which liabilities are estimated, for all claims occurring on or before
that date, regardless of whether they have been reported or not. For example, we may want to estimate unpaid claims as
of 12/31/2016 (the accounting date); we would need to estimate unpaid claims for accidents
occurring on or before 12/31/16 even if not yet reported or known.
5) Valuation date: the date through which known claim data is included, regardless of when the actuary
performs the analysis. The valuation date is often equal to the accounting date, but it can also be before
or after the accounting date.

Data Aggregation

Claims are often aggregated by calendar year, accident year, policy (or underwriting) year, or report year.

Calendar Year (CY)

Calendar year data is transactional—data is grouped by year the transaction occurred. Calendar year claim
payments include all claims paid in a given year regardless of when the accident occurred, policy was written,
or claim reported. In estimating unpaid claims, actuaries generally use calendar year exposures (e.g., calendar
year earned premium). It is not common to group claims by calendar year. CY earned premium can be
calculated as:

CY Written Premium + (Beginning Unearned Premium Reserve - Ending Unearned Premium Reserve)
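
For example, with purely hypothetical figures (not from the text): CY written premium of 1,000, a beginning unearned premium reserve of 400, and an ending unearned premium reserve of 450 produce CY earned premium of 1,000 + (400 - 450) = 950.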

The advantages of CY data are that it is readily available and is not subject to further development (since it is
based only on transactions during the year).

The disadvantage of CY data is that we cannot use it to estimate development of claims, thus CY claims are
rarely used (CY exposures are commonly used).

Accident Year (AY)

Accident year is by far the most commonly used aggregation for claims data. Claims are grouped by the year of
occurrence, regardless of when the claim is reported, policy was written, or payments are made. For self-
insurers, accident years do not necessarily run from 1/1/XX to 12/31/XX; they may run 7/1/XX to 6/30/XX+1 (or another one-year period). Insurers may also compile claims data into finer cohorts, such as accident months, accident quarters, or accident semesters (6 months).

Actuaries commonly use CY exposure data with AY claims data. As covered in the ratemaking text, CY exposures
approximately match AY claims data (PY exposure and claims data will match exactly but have other
weaknesses). For self-insurers, CY exposures do represent an exact match with AY losses, since there is only a
single “policy” written at the beginning of the period (rather than throughout the period like insurers), so
exposures during that period will exactly align with claims occurring during that period.

The advantages of AY aggregation are it is easy to achieve and understand, it represents a shorter timeframe
than policy years (or underwriting years) so we can estimate ultimate claims sooner, and there are numerous
industry benchmarks available on an AY basis.

Disadvantages of AY aggregation are the slight mismatch with CY exposures, the potential for mixing of claims
from policies underwritten or priced at differing levels (an AY will be a mixture of more than one policy year).
AY aggregation can also mask changes in retentions for self-insureds; again, because it will be a mixture of
claims from more than one policy period (e.g., AY18 will be a mixture of PY17 and PY18, which may have
different retentions).

Policy Year (PY) or Underwriting Year

Claims can be grouped by when the policy was written (PY if a primary policy, U/W year if a reinsurance policy)
regardless of when claims occur, are reported, or paid.

The advantage of PY aggregation is PY claims directly match PY exposures. For self-insurers, where only one
policy applies, PY claims are the same as AY claims (since all claims occur during the policy period, assuming
one year policies and the same beginning date). PY aggregation can also be useful if there have been significant
changes in policy characteristics or shifts in types of business written from one year to the next (since AY
aggregation will mix claims from different PYs).

The disadvantage of PY aggregation is the extended timeframe— claims from a policy year for an insurer can
run 24 months (claims can occur on day one of the first policy written, and on day 365 of the last policy written
during the year). This results in a longer time until ultimate claims can reliably be estimated using PY claims
data. It can also be hard to understand or isolate the effects of a large event (e.g., catastrophe). If a catastrophic
event occurs in April, it will affect policies written in the prior year and current year (i.e., the effect will be
spread over the current and prior PY).

Report Year (RY)

Claims can be aggregated by the year claims are reported, regardless of when the claims occurred, are paid, or
when the policy was written. RY aggregation is commonly used with claims-made policies such as medical
malpractice, products liability, or errors and omissions. Claims-made coverages cover claims reported during
the policy period, and thus are a natural match with RY aggregation.

The advantage of RY aggregation is the number of claims is fixed by the end of the first year, resulting in more
stable data. Since the number of claims is fixed, any claim development will be development on case reserves
(IBNER) rather than newly reported claims (pure IBNR).

The disadvantage of RY aggregation is it only measures claims development on known claims (case reserves)
but not pure IBNR (unreported) claims.


Estimating Unpaid Claims Using Basic Techniques

Chapter 4 – Meeting with Management


Introduction

To accurately estimate unpaid claims and expenses it is very important for the actuary to understand the
insurer’s business, including the internal, external, economic, social, legal, and regulatory environment. The
actuary must be able to correctly interpret patterns and anomalies observed in the data, which could be caused
by many different reasons, including but not limited to changes in the classes of business written, reinsurance
contracts, management philosophy, claims processing, legal and social, and the economy (e.g., inflation). It is
very important that the actuary meets with the management of various departments (especially claims and
underwriting) to understand the business and any changes that have occurred.

Sample Questions

The text provides a great (but very extensive) list of sample questions to ask management in various
departments of the insurer. It is important to review the list, but it isn’t necessary to memorize it for the exam—
understanding the general themes I will cover here should be sufficient for the exam.

When meeting with claims department management it is important to understand any changes in claim
settlement speed, report speed, changes in case adequacy (or how case reserves are determined), shifts in
large/small claims, or changes in the rigor of claims defense.

When meeting with underwriting department management it is important to understand any shift in the book
of business, any large risks underwritten, and how policies are priced.

When meeting with the data processing or accounting department management, discuss any changes in claim
coding, what data is available and what is included in data definitions, how much historical data is available,
and what processes are in place to ensure data is accurate.

In discussions with ratemaking actuaries you should understand any changes in operations and shifts in the book
of business, find out whether any external data is available or used in ratemaking, and ask for rate filing
information and the claims ratios used in pricing.

If you are a consultant meeting with in-house reserving actuaries you should ask for prior reserve study
reports to review, ask about areas of disagreement between prior studies, and for any additional background
they can provide.

Ask managers of the reinsurance department about the details of the reinsurance programs, especially any
changes in the program or retentions.

When meeting with senior management ask for a company overview, about types of business written, types of
marketing and distribution channels (e.g., direct writer, independent agents, captive agents), and any
organization changes.


Estimating Unpaid Claims Using Basic Techniques

Chapter 5 – The Development Triangle


Introduction

A development triangle shows, in tabular form, how the values of cohorts of claims (e.g., AYs, FYs, RYs etc.)
change with time. The following triangle shows the value of each accident year cohort (across the rows) at year
end valuations. For example, as of 12/31/2015 the AY 2013 paid claims are 340, AY 2014 are 320, and AY
2015 are 230.

Accident Table 1 - Paid Claims as of:


Year 12/31/13 12/31/14 12/31/15 12/31/16
2013 200 300 340 350
2014 220 320 345
2015 230 330
2016 245

The above form is common among accountants, and is useful for their purposes, but is of limited use for
actuaries. Actuaries are instead much more interested in how claims develop over common timeframes (e.g., from
year one to two, two to three, etc.). This is accomplished by grouping the triangles by ages, rather than
valuations, across the columns. The age is generally measured in months from the start of the cohort to the
valuation date. For example, accident year 2015 starts on 1/1/2015, so the age as of the 12/31/2016 valuation
is 24 months. The following triangle shows the age of each value in the first table.

Accident Table 2 - Ages as of:


Year 12/31/13 12/31/14 12/31/15 12/31/16
2013 12 24 36 48
2014 12 24 36
2015 12 24
2016 12
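
As a small illustration (my own sketch; the function name is made up), the age in months is simply the distance from the start of the annual cohort to the valuation date:

    def age_in_months(cohort_start_year, valuation_year, valuation_month=12):
        # Months from the start of an annual cohort (1/1 of its first year) to the
        # valuation date; e.g., AY 2015 is 24 months old at the 12/31/2016 valuation
        return 12 * (valuation_year - cohort_start_year) + valuation_month

    print(age_in_months(2015, 2016))   # 24
    print(age_in_months(2013, 2016))   # 48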

Now we can convert the first triangle into the form most familiar and useful to actuaries by mapping the
values in the first table to their ages shown in the second table.

Accident Table 3 - Paid Claims at age:


Year 12 24 36 48
2013 200 300 340 350
2014 220 320 345
2015 230 330
2016 245

Development triangles in this form are used by actuaries to identify and analyze patterns in historical data.
Development patterns are inputs into many of the techniques used to estimate unpaid claims. Development
triangles are used by actuaries to analyze many different values, including reported claims, paid claims, claim
adjustment expenses, and claim count data.

The three important dimensions in a development triangle are the rows, the columns, and the diagonals. Each
row in the triangle represents one accident year (in this example; they could also represent each report year,
policy year, etc.). Each column represents the age (or maturity) of development. Each diagonal represents the
values at each successive valuation date, which can be seen when revisiting Tables 1 and 3.


Accident Table 1 - Paid Claims as of: Accident Table 3 - Paid Claims at age:
Year 12/31/13 12/31/14 12/31/15 12/31/16 Year 12 24 36 48
2013 200 300 340 350 2013 200 300 340 350
2014 220 320 345 2014 220 320 345
2015 230 330 2015 230 330
2016 245 2016 245

The latest diagonal (orange) represents the values at the latest valuation (12/31/16), while the next interior
diagonal (green) represents the second latest valuation (12/31/15), and so on. The sum of the latest diagonal
minus the sum of the second latest diagonal represents the amount paid during that given timeframe
(calendar year in our example) since each valuation represents a cumulative value.

Building Development Triangles

It is critical for the actuary to fully understand the data available as information systems used by insurers
vary. Assuming we are given the information in the following table (from the text) we can build several
development triangles. There are 15 claims given, including accident date, report date, incremental payments,
and case outstanding balances evaluated at the end of 2005-2008. We will begin by building accident year
development triangles—to do this we have to group the claims by year of accident. The next step is to sum the
incremental payments and case balances over the claims in each accident year (gray rows). The age of each
valuation is the amount of time from the beginning of the accident year to the valuation date. For example, the
$300 paid from AY 2005 in 2007 is 36 months old (12/31/2007 minus 1/1/2005). Similarly, the $230 paid
from AY 2006 in 2008 is also at a maturity of 36 months. I have colored each of these two values blue in the
following table, and in the incremental paid triangle that follows.

We perform a similar calculation to build the case outstanding triangle below. Each green value is at 24 months.
I have color coded several values to assist in understanding the movement from the table to the development
triangles—please work through all of the values to ensure you are comfortable with where each value in the
triangles came from in the table.

We cumulate the incremental paid triangle into a cumulative paid claims triangle by summing up the
incremental payments through each age. The cumulative triangle is more commonly used in estimating unpaid
claims. Reported claims are equal to the cumulative paid claims plus the case outstanding balance at each
valuation. Therefore, the reported claims triangle is the sum of the cumulative paid claims triangle and case
outstanding triangle. From the information contained in the table below we are able to create four claims
triangles.
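
A minimal sketch of these mechanics (my own illustration; rows are AYs 2005-2008 and columns are ages 12, 24, 36, and 48, using the accident year sums from the table and triangles that follow):

    # Incremental paid and case outstanding by accident year
    incremental_paid = [[600, 620, 300, 300],
                        [460, 460, 230],
                        [660, 660],
                        [700]]
    case_outstanding = [[900, 1200, 1200, 1200],
                        [690, 920, 920],
                        [990, 1320],
                        [1040]]

    # Cumulative paid = running sum of incremental payments across each AY row
    cumulative_paid = [[sum(row[:j + 1]) for j in range(len(row))]
                       for row in incremental_paid]

    # Reported claims = cumulative paid plus case outstanding at each valuation
    reported = [[paid + case for paid, case in zip(p_row, c_row)]
                for p_row, c_row in zip(cumulative_paid, case_outstanding)]

    print(cumulative_paid[0])   # [600, 1220, 1520, 1820]
    print(reported[0])          # [1500, 2420, 2720, 3020]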


2005 Transactions 2006 Transactions 2007 Transactions 2008 Transactions


Claim Accident Report Total Ending Total Ending Total Ending Total Ending
ID Date Date Payment Case O/S Payment Case O/S Payment Case O/S Payment Case O/S
1 Jan-05-05 Feb-01-05 400 200 220 0 0 0 0 0
2 May-04-05 May-15-05 200 300 200 0 0 0 0 0
3 Aug-20-05 Dec-15-05 0 400 200 200 300 0 0 0
4 Oct-28-05 May-15-06 0 1,000 0 1,200 300 1,200
AY 2005 Σ= 600 900 620 1,200 300 1,200 300 1,200
5 Mar-03-06 Jul-01-06 260 190 190 0 0 0
6 Sep-18-06 Oct-02-06 200 500 0 500 230 270
7 Dec-01-06 Feb-15-07 270 420 0 650
AY 2006 Σ= 460 690 460 920 230 920
8 Mar-01-07 Apr-01-07 200 200 200 0
9 Jun-15-07 Sep-09-07 460 390 0 390
10 Sep-30-07 Oct-20-07 0 400 400 400
11 Dec-12-07 Mar-10-08 60 530
AY 2007 Σ= 660 990 660 1,320
12 Apr-12-08 Jun-18-08 400 200
13 May-28-08 Jul-23-08 300 300
14 Nov-12-08 Dec-05-08 0 540
15 Oct-15-08 Feb-02-09
AY 2008 Σ= 700 1,040

Accident Incremental Paid Claims Accident Case Outstanding


Year 12 24 36 48 Year 12 24 36 48
2005 600 620 300 300 2005 900 1,200 1,200 1,200
2006 460 460 230 2006 690 920 920
2007 660 660 2007 990 1,320
2008 700 2008 1,040

Accident Cumulative Paid Claims Accident Reported Claims


Year 12 24 36 48 Year 12 24 36 48
2005 600 1,220 1,520 1,820 2005 1,500 2,420 2,720 3,020
2006 460 920 1,150 2006 1,150 1,840 2,070
2007 660 1,320 2007 1,650 2,640
2008 700 2008 1,740

Note that the cumulative paid 24 month valuation will include the paid claims through 12 months, plus the
additional amounts paid between 12-24 months, which will include additional payments on claims reported in
the first year, and payments made on new claims reported in the second year.

Using the exact same claim information as the prior example, we can construct report year triangles. Rather
than grouping claims by year of accident, report year triangles are grouped by the year the claim is reported.
We calculate the maturity (age) the same way we did when using accident years—the difference between the
start of the period of the group and the valuation date. After we re-group the claims into report years rather
than accident years, the mechanics of building the triangles are identical. I have again included color coding to
assist in understanding the mapping.


2005 Transactions 2006 Transactions 2007 Transactions 2008 Transactions


Claim Accident Report Total Ending Total Ending Total Ending Total Ending
ID Date Date Payment Case O/S Payment Case O/S Payment Case O/S Payment Case O/S
1 Jan-05-05 Feb-01-05 400 200 220 0 0 0 0 0
2 May-04-05 May-15-05 200 300 200 0 0 0 0 0
3 Aug-20-05 Dec-15-05 0 400 200 200 300 0 0 0
Σ= RY 2005 600 900 620 200 300 0 0 0
4 Oct-28-05 May-15-06 0 1,000 0 1,200 300 1,200
5 Mar-03-06 Jul-01-06 260 190 190 0 0 0
6 Sep-18-06 Oct-02-06 200 500 0 500 230 270
Σ= RY 2006 460 1,690 190 1,700 530 1,470
7 Dec-01-06 Feb-15-07 270 420 0 650
8 Mar-01-07 Apr-01-07 200 200 200 0
9 Jun-15-07 Sep-09-07 460 390 0 390
10 Sep-30-07 Oct-20-07 0 400 400 400
Σ= RY 2007 930 1,410 600 1,440
11 Dec-12-07 Mar-10-08 60 530
12 Apr-12-08 Jun-18-08 400 200
13 May-28-08 Jul-23-08 300 300
14 Nov-12-08 Dec-05-08 0 540
Σ= RY 2008 760 1,570
15 Oct-15-08 Feb-02-09 Valuations only provided through 12/31/08, so we wouldn't actually see this claim in our database.
Σ= RY 2009 0 0

Report Incremental Paid Claims Report Case Outstanding


Year 12 24 36 48 Year 12 24 36 48
2005 600 620 300 0 2005 900 200 0 0
2006 460 190 530 2006 1,690 1,700 1,470
2007 930 600 2007 1,410 1,440
2008 760 2008 1,570

Report Cumulative Paid Claims Report Reported Claims


Year 12 24 36 48 Year 12 24 36 48
2005 600 1,220 1,520 1,520 2005 1,500 1,420 1,520 1,520
2006 460 650 1,180 2006 2,150 2,350 2,650
2007 930 1,530 2007 2,340 2,970
2008 760 2008 2,330

Returning to the accident year example, we can also construct paid and reported claim count triangles in
addition to the claims triangles we have already build. Rather than summing the payments and case balances,
we count the number of paid and reported claims (here we define a reported claim as having paid + case > 0).
Looking at the 2005 accident year we see there are two paid claims during 2005, so the paid claim count triangle
has a value of two for AY 2005 at 12 months. There is one additional paid claim from AY 2005 in 2006, so the
paid claim count triangle has a value of three in AY 2005 at 24 months. There are no additional paid claims
from AY 2005 in 2007, so the value at 36 months is also three. Finally, we see there is a fourth AY 2005 claim paid in
2008, resulting in a value of four in the paid claim count triangle in AY 2005 at 48 months. I have highlighted
the time of first payment for each AY 2005 claim in red below to assist in understanding the build of the paid claim
count triangle.

Similarly, we can build the reported claim count triangle by counting the number of claims reported at each
time. Looking at AY 2007 as an example we can see there are three reported claims during 2007 (age 12),
despite there only being two paid claims at that time. In AY 2007 there is an additional claim reported in 2008
(age 24). These values are highlighted in blue below.


2005 Transactions 2006 Transactions 2007 Transactions 2008 Transactions


Claim Accident Report Total Ending Total Ending Total Ending Total Ending
ID Date Date Payment Case O/S Payment Case O/S Payment Case O/S Payment Case O/S
1 Jan-05-05 Feb-01-05 400 200 220 0 0 0 0 0
2 May-04-05 May-15-05 200 300 200 0 0 0 0 0
3 Aug-20-05 Dec-15-05 0 400 200 200 300 0 0 0
4 Oct-28-05 May-15-06 0 1,000 0 1,200 300 1,200
AY 2005 Σ= 600 900 620 1,200 300 1,200 300 1,200
5 Mar-03-06 Jul-01-06 260 190 190 0 0 0
6 Sep-18-06 Oct-02-06 200 500 0 500 230 270
7 Dec-01-06 Feb-15-07 270 420 0 650
AY 2006 Σ= 460 690 460 920 230 920
8 Mar-01-07 Apr-01-07 200 200 200 0
9 Jun-15-07 Sep-09-07 460 390 0 390
10 Sep-30-07 Oct-20-07 0 400 400 400
11 Dec-12-07 Mar-10-08 60 530
AY 2007 Σ= 660 990 660 1,320
12 Apr-12-08 Jun-18-08 400 200
13 May-28-08 Jul-23-08 300 300
14 Nov-12-08 Dec-05-08 0 540
15 Oct-15-08 Feb-02-09
AY 2008 Σ= 700 1,040

Accident Paid Claim Counts Accident Reported Claim Counts


Year 12 24 36 48 Year 12 24 36 48
2005 2 3 3 4 2005 3 4 4 4
2006 2 3 3 2006 2 3 3
2007 2 4 2007 3 4
2008 2 2008 3

Accident year is by far the most common grouping actuaries use, but the following groupings can also be used
as they are useful in certain situations:

1) Report Year: group claims by the year the claim is first reported. Commonly used in the analysis of
claims made policies such as medical malpractice or errors and omissions liability.
2) Underwriting Year: group claims by year the reinsurance policy becomes effective. Commonly used by
reinsurers.
3) Treaty Year: group claims by year covered by reinsurance treaty (or contract). Commonly used by
reinsurers.
4) Policy Year: group claims by year the policy is written.
5) Fiscal Year: similar to accident years, but the year aligns with the company’s fiscal year— 7/1 to 6/30
for example. Commonly used by self-insurers who report financials on a fiscal year basis.

For self-insurers the policy year, fiscal year, and accident year are commonly the same. This is because self-
insurers issue policies on a single date (rather than throughout the year as insurers do) and this date often
corresponds to the first day in the fiscal year, and in these cases the accident year is defined over the same
timeframe as the fiscal year (accident years do not necessarily have to run 1/1 to 12/31). In such a situation
the policy year, fiscal year, and accident year would all be the same timeframe.

Triangles can be grouped into time intervals other than annual. Quarterly or semi-annual timeframes are also
commonly used by actuaries. When deciding on a time interval to use, you must consider both the credibility
and stability of the data in each group. Seasonality would also need to be considered if using timeframes other
than annual. The number of rows does not necessarily need to equal the number of columns in a development
triangle, though it is common that they do (e.g., we could have AYs evaluated semi-annually resulting in twice
as many columns as rows).


The minimum number of periods required in the triangle varies by line of business, state, and type of data.
Ideally, we would like to have development periods available until we expect no further development. For long
tailed lines such as workers’ compensation this might require many periods. For short tailed lines, such as auto
physical damage this might be very few periods. The number of periods can also depend on the laws of the
regions where business is written (which can cause development to be shorter or longer). The type of data in
the triangle also matters since claims are paid slower than they are reported, so a paid claims triangle would
require more periods than the reported claims triangle to reach completion.

Unless otherwise noted the terms reported claims, paid claims, reported claim counts, and closed claim count
triangles will refer to the cumulative triangles rather than incremental triangles. In situations where we use
incremental triangles, they will be labeled as such.


Estimating Unpaid Claims Using Basic Techniques

Chapter 6 – The Development Triangle as a Diagnostic Tool


Introduction

XYZ is a hypothetical personal auto insurance company that is examined throughout the remaining chapters.
This chapter uses them as an example for running various diagnostics on the data to understand how changes
in operations and the external environment influence claims data. In practice we will generally undertake the
diagnostics testing after having discussions with management and other departments (as discussed in Chapter
4). When performing diagnostics testing we will be looking to confirm any changes in the data we have learned
of from management, and keeping an eye out for any other changes or trends not discussed. If we do not
observe the changes that management has reported implementing, we should discuss this with them.
Sometimes the changes have just not worked their way through completely and will show up in the data with
time. Other times the changes may not be working as intended, or they may be meeting resistance from
employees within the organization who are hesitant to change.

Based on our discussion with XYZ’s management we are aware of the following internal changes:

1) They hired a new SVP of Claims whose main priority is to carry adequate case reserves, resulting in
much greater case strength than in prior years (we are not told the hire date, but we will see later the
changes began emerging three years ago).
2) They have implemented a new information system in the past three years, significantly speeding up
claims reporting and settlement speed.

We also learned of the following external changes to their personal automobile market:

1) Major tort reforms resulting in caps on awards.


2) New pricing restrictions, including required rate level changes, for all insurers operating in the region.

Due to these changes XYZ’s management has decided to reduce their market presence. When performing
diagnostics on the data we hope to observe the effects of these changes. We will take the knowledge gained
from discussion with management and through data diagnostics to determine which techniques will result in
the most accurate estimate of unpaid claims given the company’s situation.

Detailed Example

We begin by examining earned premium and rate change history. We note increases in rate levels and earned
premiums in 2002-2005, followed by significant drops in earned premiums and rate levels in 2007-2008. This
is consistent with the pricing restrictions discussed with management, and their decision to reduce their
market presence. Column (6) is not shown in the text, but can be calculated given our ratemaking knowledge—
it will be used in future diagnostics.


Table 1 - Summary of Earned Premium and Rate Changes


Earned Average Cumulative Annual Rate Level
Calendar Premiums Rate Level Average Exposure Adj. Factor
Year ($000) Change Rate Level Change to 2008 Level
(1) (2) (3) (4) (5) (6)
2002 61,183 0.0% 0.914
2003 69,175 5.0% 5.0% 7.7% 0.870
2004 99,322 7.5% 12.9% 33.6% 0.810
2005 138,151 15.0% 29.8% 21.0% 0.704
2006 107,578 10.0% 42.8% -29.2% 0.640
2007 62,438 -20.0% 14.2% -27.5% 0.800
2008 47,797 -20.0% -8.6% -4.3% 1.000
(4) = [1 + (4) Prior] x [1 + (3)] - 1.0
(5) = [(2) / (2) Prior] / [1 + (3)] - 1.0
(6) = [1 + (4) 2008] / [1 + (4)]
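
As a quick check of the 2004 row: (4) = 1.050 x 1.075 - 1 = 12.9%, (5) = (99,322 / 69,175) / 1.075 - 1 = 33.6%, and (6) = 0.914 / 1.129 = 0.810.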

Reported and paid claims are the two most common data triangles used by actuaries. As such it is important to
perform a visual inspection of each. When reviewing a claim triangle, we are generally most interested in
differences at the same age of development (down the columns)—we would expect average claims (Tables 12
and 13) to grow by inflation with minimal disruption, in a stable environment. Looking at the triangle of claims
ratios is also very common. Below we look at the reported and paid claims triangles, as well as reported claims
divided by earned premium, and reported claims divided by on-level earned premium.

Table 2 - Reported Claim Development Triangle


Accident Reported Claims ($000) as of (months)
Year 12 24 36 48 60 72 84
2002 12,811 20,370 26,656 37,667 44,414 48,701 48,169
2003 9,651 16,995 30,354 40,594 44,231 44,373
2004 16,995 40,180 58,866 71,707 70,288
2005 28,674 47,432 70,340 70,655
2006 27,066 46,783 48,804
2007 19,477 31,732
2008 18,632

Table 3 - Paid Claim Development Triangle


Accident Paid Claims ($000) as of (months)
Year 12 24 36 48 60 72 84
2002 2,318 7,932 13,822 22,095 31,945 40,629 44,437
2003 1,743 6,240 12,683 22,892 34,505 39,320
2004 2,221 9,898 25,950 43,439 52,811
2005 3,043 12,219 27,073 40,026
2006 3,531 11,778 22,819
2007 3,529 11,865
2008 3,409


Table 4 - Ratio of Reported Claims to Earned Premium


Accident Ratio of Reported Claims to Earned Premium as of (months)
Year 12 24 36 48 60 72 84
2002 0.209 0.333 0.436 0.616 0.726 0.796 0.787
2003 0.140 0.246 0.439 0.587 0.639 0.641
2004 0.171 0.405 0.593 0.722 0.708
2005 0.208 0.343 0.509 0.511
2006 0.252 0.435 0.454
2007 0.312 0.508
2008 0.390
Table 4 = Table 2 / Table 1, Column (2)

Table 5 - Ratio of Reported Claims to On-Level Earned Premium


Accident Ratio of Reported Claims to On-Level Earned Premium as of (months)
Year 12 24 36 48 60 72 84
2002 0.229 0.364 0.477 0.674 0.794 0.871 0.862
2003 0.160 0.282 0.504 0.674 0.735 0.737
2004 0.211 0.500 0.732 0.892 0.874
2005 0.295 0.488 0.723 0.726
2006 0.393 0.679 0.709
2007 0.390 0.635
2008 0.390
Table 5 = Table 2 / [Table 1, Column (2) x Table 1, Column (6)]
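
For example, the AY 2002 value at 12 months is 12,811 / (61,183 x 0.914) = 0.229.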

Reviewing these first four diagnostics triangles we make several observations and raise several questions:

1) AY 2003 reported claims are very low through 24 months (yellow). From table 1 we know exposure
grew 7.7% in 2003, so seeing a significant drop in reported claims is notable. Paid claims in the same
timeframe are down as well, so the drop in reported is at least partially due to the drop in paid, but
could also be driven by case reserves (as we examine later).
2) Reported claims in AYs 2004-2006, particularly after the 12 month valuation (green), are up
significantly. Exposures are up, but not to the extent of reported claims. We don’t observe the same
significant jump in the paid claims triangle, so the jump must be due to case reserves, which is
consistent with management’s assertion that case adequacy has improved.
3) In AYs 2007 and 2008 we observe a sharp drop in reported claims at 12 and 24 months (blue). After
reviewing the claims ratios in those years, we note the drop in nominal reported claims volume is due
to a comparable drop in exposures and premiums (because we do not see a similar drop in claims
ratios).

The actuary can examine the consistency of paid claims to reported claims by looking at a triangle of paid to
reported claims. Such a triangle is useful in looking for changes in case outstanding adequacy or settlement
patterns. If case reserves are becoming more adequate the ratio would go down because case is contained in
reported claims in the denominator. If settlement speed is increasing, we would expect the ratio to increase
at earlier ages since more will be paid earlier and paid claims are in the numerator. Since this diagnostic is a
ratio, a change in either could be causing the differences, and no change in the ratio does not necessarily imply
there are no changes in either as there could be offsetting changes in each.


Table 6 - Ratio of Paid Claims-to-Reported Claims


Accident Ratio of Paid Claims-to-Reported Claims as of (months)
Year 12 24 36 48 60 72 84
2002 0.181 0.389 0.519 0.587 0.719 0.834 0.923
2003 0.181 0.367 0.418 0.564 0.780 0.886
2004 0.131 0.246 0.441 0.606 0.751
2005 0.106 0.258 0.385 0.566
2006 0.130 0.252 0.468
2007 0.181 0.374
2008 0.183
Table 6 = Table 3 / Table 2

Again, we are most interested in changes down columns—differences by AY at the same maturity, which
indicates a change in development patterns. At 12 months of development we observe the ratio decreases from
AY 2004-2006, but returns to its previous levels in AY 2007-2008. Based on our discussions with management,
we expect to see an increase in closure (payment) speed, but also expect to see an increase in case adequacy.
These two changes may be offsetting each other as we do not see any definitive trends in this triangle.

We can examine the triangle of on-level paid claims ratios. An increase in on-level paid claims ratios at earlier
ages could indicate that results are deteriorating, or alternatively that claims are being paid quicker.

Table 7 - Ratio of Cumulative Paid Claims to On-Level Earned Premium


Accident Ratio of Cumulative Paid Claims to On-Level Earned Premium as of (months)
Year 12 24 36 48 60 72 84
2002 0.041 0.142 0.247 0.395 0.571 0.727 0.795
2003 0.029 0.104 0.211 0.380 0.573 0.653
2004 0.028 0.123 0.323 0.540 0.657
2005 0.031 0.126 0.278 0.412
2006 0.051 0.171 0.331
2007 0.071 0.238
2008 0.071
Table 7 = Table 3 / [Table 1, Column (2) x Table 1, Column (6)]

We do observe an increase in on-level paid claims ratios in the most recent accident years through 24
months, so we will examine additional diagnostics trying to determine if this increase is due to deteriorating
results, or an increase in payment speed. We will need triangles of reported and closed claim counts to
accomplish this.

Table 8 - Reported Claim Count Development Triangle


Accident Reported Claim Counts as of (months)
Year 12 24 36 48 60 72 84
2002 1,342 1,514 1,548 1,557 1,549 1,552 1,554
2003 1,373 1,616 1,630 1,626 1,629 1,629
2004 1,932 2,168 2,234 2,249 2,258
2005 2,067 2,293 2,367 2,390
2006 1,473 1,645 1,657
2007 1,192 1,264
2008 1,036


Table 9 - Closed Claim Count Development Triangle


Accident Closed Claim Counts as of (months)
Year 12 24 36 48 60 72 84
2002 203 607 841 1,089 1,327 1,464 1,523
2003 181 614 941 1,263 1,507 1,568
2004 235 848 1,442 1,852 2,029
2005 295 1,119 1,664 1,946
2006 307 906 1,201
2007 329 791
2008 276

It is important for the actuary to understand how claims are counted by the insurer (or TPA). Are claims with
no payments counted? Are re-opened claims coded as a new claim? In this example XYZ does not include claims
closed with no payment (CNP) in the closed claim counts. They define reported claims as the sum of closed with
pay counts and open claims with positive case reserves, thus reported claims exclude CNP counts as well. It is
also important that the definition of claim counts is consistent throughout the triangle— a change in definition
can have significant effects on the results of the diagnostics analyses. If using external benchmarks, the actuary
needs to be aware of any differences in the definition of claim counts between the company and the benchmark.

We observe that reported counts are highest in AYs 2004-2005, which is generally consistent with the reported
claim amounts in Table 2.

The ratio of closed counts to reported counts is a very useful diagnostic which can indicate a change in
the settlement rate of claims. Blips in this ratio can be caused by one time internal or external events (e.g.,
systems outages or catastrophes)— we are more interested in persistent changes or trends.

Table 10 - Ratio of Closed-to-Reported Claim Counts


Accident Ratio of Closed-to-Reported Claim Counts as of (months)
Year 12 24 36 48 60 72 84
2002 0.151 0.401 0.543 0.699 0.857 0.943 0.980
2003 0.132 0.380 0.577 0.777 0.925 0.963
2004 0.122 0.391 0.645 0.823 0.899
2005 0.143 0.488 0.703 0.814
2006 0.208 0.551 0.725
2007 0.276 0.626
2008 0.266
Table 10 = Table 9 / Table 8

We observe an increase in the ratio of closed to reported claim counts over the most recent diagonals. This
observation is consistent with our discussions that indicated their new system results in quicker claim
settlement (across all accident years, not just new accident years). We must examine the effects of this shift in
settlement pattern, especially its effect on the types of claims that are settled. It is easier for insurers to increase
the settlement speed of small claims than large claims (because large claims more often include third parties
whose timing is outside the insurers control). If the increase in settlement speed is due to settling small claims
quicker we would expect to see a decrease in the size of claims settled in earlier ages (or at minimum, an
increase smaller than expected due to inflation). We will examine this next.


Table 12 - Average Reported Claim Development Triangle


Accident Average Reported Claims as of (months)
Year 12 24 36 48 60 72 84
2002 9,546 13,454 17,220 24,192 28,673 31,380 30,997
2003 7,029 10,517 18,622 24,966 27,152 27,239
2004 8,797 18,533 26,350 31,884 31,128
2005 13,872 20,686 29,717 29,563
2006 18,375 28,440 29,453
2007 16,340 25,104
2008 17,985
Table 12 = Table 2 x 1,000 / Table 8 = Reported Claims / Reported Counts

When reviewing a triangle of average values, we expect claims to grow by inflation— faster or slower growth
could indicate other changes are occurring in the data. The growth in average reported claims above is higher
than we would expect from inflation alone, which may be due to higher claim payments or higher case reserves
(or potentially from changes in policy limits purchased, mix of business, types of policyholders insured— think
loss trend from the ratemaking text).

Table 13 - Average Paid Claim Development Triangle


Accident Average Paid Claims as of (months)
Year 12 24 36 48 60 72 84
2002 11,419 13,068 16,435 20,289 24,073 27,752 29,177
2003 9,630 10,163 13,478 18,125 22,896 25,077
2004 9,451 11,672 17,996 23,455 26,028
2005 10,315 10,920 16,270 20,568
2006 11,502 13,000 19,000
2007 10,726 15,000
2008 12,351
Table 13 = Table 3 x 1,000 / Table 9 = Paid Claims / Closed Counts

The trends in average paid claims are much lower than the trends in average reported claims, which tells us
the difference must be mostly due to increases in average case reserves (because the difference between
reported and paid claims is only case reserves). We also observe the average paid claims on the latest diagonal
for the latest three years are the highest value in each column. We should ask the claims department if there
has been a change in the type of claim being settled at these ages (i.e., larger claims earlier than before).

We should note the numerator of the average paid claims includes all paid claims, including partial payments
on claims that are still open, while the denominator only includes closed claims. This results in a small
mismatch, which overstates the average paid claims because full and partial payments are included in the
numerator, but only full (closed) counts are included in the denominator.


Table 14 - Average Case Outstanding Development Triangle


Accident Average Case Outstanding as of (months)
Year 12 24 36 48 60 72 84
2002 9,212 13,713 18,153 33,274 56,167 91,727 120,387
2003 6,634 10,734 25,647 48,766 79,721 82,836
2004 8,706 22,941 41,561 71,204 76,319
2005 14,464 29,994 61,546 68,984
2006 20,184 47,368 56,985
2007 18,480 42,002
2008 20,030
Table 14 = (Table 2 - Table 3) x 1,000 / (Table 8 - Table 9)
= (Reported Claims - Paid Claims) / (Reported Counts - Closed Counts)
= Case Reserves / Open Counts
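
For example, the AY 2002 value at 12 months is (12,811 - 2,318) x 1,000 / (1,342 - 203) = 10,493,000 / 1,139 ≈ 9,212.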

The average case outstanding triangle, as shown above, is one of the most important diagnostic tools for testing
changes in case outstanding. Increases above the rate of inflation signal case strengthening, while decreasing
trends, or trends below inflation indicate decreasing case strength (all else equal).

In the example above we observe a very high trend, and more notably a significant jump in the latest three
calendar years (latest three diagonals, in yellow). This observation is consistent with XYZ’s management’s claim
that case adequacy has increased in recent years. However, this result is not necessarily definitive in isolation.
If smaller claims are simply settling faster, the average case outstanding would increase simply because large
claims make up a larger proportion of the open claims than before, despite case adequacy not changing. In this
example, we observed an increase in average paid claims, which indicates there has not been a shift in smaller
claims settling earlier, so we can be fairly confident case adequacy has increased as XYZ’s management believes.

Conclusion

XYZ has clearly experienced many changes to their book of business in recent years. Running these diagnostics
helps us understand these changes better, and will allow us to understand how these changes affect the accuracy
of each methodology covered in the following chapters. This XYZ example will continue to be used through the
remainder of the text.


Estimating Unpaid Claims Using Basic Techniques

Introduction to Part 3 – Basic Techniques for Estimating Unpaid Claims


Introduction

As discussed earlier, ultimate claim costs consist of paid claims, case reserves, and IBNR (case reserves and
IBNR being “unpaid” claims). The breakdown between each will vary greatly by line of business and age of
maturity.

For short tailed lines, such as automobile physical damage and property insurance, paid claims and case
reserves will make up a high proportion of ultimate claims at early maturities. IBNR makes up a much higher
proportion of ultimate claims for long tailed lines, such as medical malpractice, workers’ compensation, or
general liability. The proportion of IBNR to ultimate claims will decrease as the segment being analyzed
matures (e.g., a long-tailed line of business at 12 months of maturity may be primarily IBNR—at 72 months of
maturity IBNR may still make up a large portion of ultimate, but it will certainly be less than at 12 months).

Actuarial Judgment

In their classic paper (covered in Chapter 13) Berquist and Sherman make the following statement regarding
actuarial judgment:

“While specific guidelines for reserve adequacy testing may be established and specific examples of
an actuarial approach to the testing of loss reserves may be offered for particular situations, loss
reserving cannot be reduced to a purely mechanical process or to a “cookbook” of rules and methods.
The utilization and interpretation of insurance statistics requires an intimate knowledge of the
insurance business as well as the actuary’s ability to quantify complex phenomena which are not
readily measurable. As in the case of ratemaking, while certain general methods are widely accepted,
actuarial judgment is required at many critical junctures to assure that reserve projections are
neither distorted nor biased.”

Actuarial judgment is critical in determining the type of data to use, assessing the effects of changes in insurers'
operations, adjusting claims data and methodologies for known changes/events, understanding the
strengths/weaknesses of the methods used, and in making final selections based on the methods that will
provide the best estimates in the given situation.

Examples

The text uses many examples to illustrate the methods, covered in the exhibits (I have provided these in Excel).
There are several examples that are carried through most of the chapters. XYZ Insurer is a hypothetical private
passenger automobile insurer that has experienced many internal and external changes and will be reviewed
in most chapters.

The text also examines the U.S. Automobile Industry aggregate results throughout most chapters, as well as
various other specific examples in each chapter.


Estimating Unpaid Claims Using Basic Techniques

Chapter 7 – Development Technique


Introduction

The development technique (a.k.a. chain ladder technique) is the most basic and commonly used method for
estimating unpaid claims. The development method is based on the assumption you can use information on
how claims have developed over time in the past to predict how claims will develop over time in the future. We
will use information from the top half of the data triangle to predict how claims in the bottom half of the triangle
will develop. We will have observed age to age factors (dark shades below) from which we will select age to
age factors (light shades below) to develop the latest values to ultimate (the last column below signifies our tail
factor selection).

The development method consists of seven basic steps:

1) Compile claims data in a development triangle (as covered in Chapter 5).


2) Calculate age-to-age factors (a.k.a. link ratios or report-to-report factors).
3) Calculate averages of the age-to-age factors.
4) Select claim development factors (CDFs).
5) Select a tail factor.
6) Calculate cumulative claim development factors.
7) Project ultimate claims.

Assuming both sets of data are available, we will perform the same analysis twice, once for paid and once for
reported claims. The development method can also be used to estimate ultimate claim counts.

Note: I will use “technique” and “method” interchangeably. Also, for simplicity my examples in this and the
following chapters will discuss triangles grouped by accident years—the concepts also hold true for other
groupings discussed in Chapter 5 such as report years, underwriting years, treaty years, policy years, or fiscal years.


Step 1 - Compile claims data into a development triangle

This is covered in detail in chapter 5. It is critical that the actuary understands what is included in the triangle
(e.g., Net or Gross of reinsurance, including or excluding ALAE, Net or Gross of salvage and subrogation, etc.).

Step 2 - Calculate age-to-age factors (a.k.a. link ratios, age-age factors, or report-to-report factors)

Age-to-age factors measure the relative change from one valuation date to the next. For example, the 12-24
factor is the claims at 24 months divided by claims at 12 months. Due to the shape of a data triangle, the triangle
of age-to-age factors will have one less row and column than the data triangle (i.e., the latest diagonal will not
have an age-age factor since the following period has not yet occurred). In our example the AY 2011 36-48
month factor is 52,094/47,768=1.091.

Accident Paid Claims as of (months)


Year 12 24 36 48 60 72 84 96
2008 18,539 33,231 40,062 43,892 45,897 46,765 47,221 47,447
2009 20,410 36,091 43,259 47,159 49,209 50,162 50,626
2010 22,121 38,976 46,389 50,562 52,735 53,740
2011 22,992 40,096 47,768 52,094 54,363
2012 24,093 41,795 49,904 54,353
2013 24,084 41,400 49,070
2014 24,370 41,490
2015 25,101
Accident Age-to-Age Factors
Year 12 - 24 24 - 36 36 - 48 48 - 60 60-72 72-84 84-96 96-To Ult
2008 1.792 1.206 1.096 1.046 1.019 1.010 1.005
2009 1.768 1.199 1.090 1.043 1.019 1.009
2010 1.762 1.190 1.090 1.043 1.019
2011 1.744 1.191 1.091 1.044
2012 1.735 1.194 1.089
2013 1.719 1.185
2014 1.703
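
A minimal sketch of this calculation (my own illustration; the paid triangle is stored as one list per accident year, oldest year first):

    paid = [
        [18539, 33231, 40062, 43892, 45897, 46765, 47221, 47447],   # AY 2008
        [20410, 36091, 43259, 47159, 49209, 50162, 50626],          # AY 2009
        [22121, 38976, 46389, 50562, 52735, 53740],                 # AY 2010
        [22992, 40096, 47768, 52094, 54363],                        # AY 2011
        [24093, 41795, 49904, 54353],                               # AY 2012
        [24084, 41400, 49070],                                      # AY 2013
        [24370, 41490],                                             # AY 2014
        [25101],                                                    # AY 2015
    ]

    # Age-to-age factor = claims at the next age divided by claims at the current age
    age_to_age = [[round(row[j + 1] / row[j], 3) for j in range(len(row) - 1)]
                  for row in paid]

    print(age_to_age[3])   # AY 2011: [1.744, 1.191, 1.091, 1.044]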


Step 3 - Calculate averages of the age-to-age factors (to assist in making factor selections)

To assist in making factor selections the actuary will calculate various averages of the age-age factors. Examples
covered in the text include the following (for each, if the number of years available at time t is less than N, use the
number of years available; a brief code sketch of these calculations follows the last example below):

1) Simple average: straight average of the latest N year’s age-age factors.

Accident Age-to-Age Factors


Year 12 - 24 24 - 36 36 - 48 48 - 60 60-72 72-84 84-96 96-To Ult
2008 1.792 1.206 1.096 1.046 1.019 1.010 1.005
2009 1.768 1.199 1.090 1.043 1.019 1.009
2010 1.762 1.190 1.090 1.043 1.019
2011 1.744 1.191 1.091 1.044
2012 1.735 1.194 1.089
2013 1.719 1.185
2014 1.703

e.g., latest 3 simple average for 24 - 36: (1.191 + 1.194 + 1.185) / 3 = 1.190
12 - 24 24 - 36 36 - 48 48 - 60 60-72 72-84 84-96 96-To Ult
Simple Average
All Year 1.746 1.194 1.091 1.044 1.019 1.009 1.005
Latest 5 1.732 1.192 1.091 1.044 1.019 1.009 1.005
Latest 3 1.719 1.190 1.090 1.043 1.019 1.009 1.005

2) Medial average: straight average of the latest N year’s age-age factors excluding the highest and lowest
values.

Accident Age-to-Age Factors


Year 12 - 24 24 - 36 36 - 48 48 - 60 60-72 72-84 84-96 96-To Ult
2008 1.792 1.206 1.096 1.046 1.019 1.010 1.005
2009 1.768 1.199 1.090 1.043 1.019 1.009
2010 1.762 1.190 1.090 1.043 1.019
2011 1.744 1.191 1.091 1.044
2012 1.735 1.194 1.089
2013 1.719 1.185
2014 1.703

e.g., latest 5 medial averages: 48 - 60: (1.043 + 1.044) / 2 = 1.044; 12 - 24: (1.744 + 1.735 + 1.719) / 3 = 1.733
12 - 24 24 - 36 36 - 48 48 - 60 60-72 72-84* 84-96* 96-To Ult
Medial Average
Latest 5 1.733 1.192 1.090 1.044 1.019 1.009 1.005

*The medial average for two data points is the simple average, and the medial average for one data point is the value of that data point.


3) Volume weighted average: volume weighted average of the latest N year's age-age factors, using claims at
the first age as the weight. It is mathematically equivalent, and much quicker, to calculate the N year
weighted average as the sum of the N years claims at the ending age divided by the sum of those same AY’s
claims at the beginning age. Note, we are using claims below, not age-age factors as before. Also note, in
each case we do not use the most recent value in the first age (e.g., 25,101 in the 12-month column in the
first example below) since we need the AYs used to line up, and the latest value will not have a
corresponding value at the next age.

Accident Paid Claims as of (months)


Year 12 24 36 48 60 72 84 96
2008 18,539 33,231 40,062 43,892 45,897 46,765 47,221 47,447
2009 20,410 36,091 43,259 47,159 49,209 50,162 50,626
2010 22,121 38,976 46,389 50,562 52,735 53,740
2011 22,992 40,096 47,768 52,094 54,363
2012 24,093 41,795 49,904 54,353
2013 24,084 41,400 49,070
2014 24,370 41,490
2015 25,101

e.g., latest 3 volume-weighted average for 12 - 24: (41,795 + 41,400 + 41,490) / (24,093 + 24,084 + 24,370) = 1.719

12 - 24 24 - 36 36 - 48 48 - 60 60-72 72-84 84-96 96-To Ult


Volume-weighted Average
All Year 1.744 1.194 1.091 1.044 1.019 1.009 1.005
Latest 5 1.732 1.192 1.091 1.044 1.019 1.009 1.005
Latest 3 1.719 1.190 1.090 1.043 1.019 1.009 1.005

4) Geometric average: the Nth root of the product of the latest N years' age-age factors: (f1 ∗ f2 ∗ ... ∗ fN)^(1/N)

Accident Age-to-Age Factors


Year 12 - 24 24 - 36 36 - 48 48 - 60 60-72 72-84 84-96 96-To Ult
2008 1.792 1.206 1.096 1.046 1.019 1.010 1.005
2009 1.768 1.199 1.090 1.043 1.019 1.009
2010 1.762 1.190 1.090 1.043 1.019
2011 1.744 1.191 1.091 1.044
2012 1.735 1.194 1.089
2013 1.719 1.185
2014 1.703

e.g., latest 4 geometric averages: 12 - 24: (1.744 ∗ 1.735 ∗ 1.719 ∗ 1.703)^(1/4) = 1.725; 48 - 60: (1.046 ∗ 1.043 ∗ 1.043 ∗ 1.044)^(1/4) = 1.044; 72-84 (only two factors available): (1.010 ∗ 1.009)^(1/2) = 1.009

12 - 24 24 - 36 36 - 48 48 - 60 60-72 72-84 84-96 96-To Ult


Geometric Average
All Year 1.746 1.194 1.091 1.044 1.019 1.009 1.005
Latest 4 1.725 1.190 1.090 1.044 1.019 1.009 1.005
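
As referenced above, here is a minimal sketch (my own illustration) of the four averages, reproducing several of the 12 - 24 column values shown in the tables above:

    def simple_avg(factors, n):
        f = factors[-n:]                       # latest n age-to-age factors in a column
        return sum(f) / len(f)

    def medial_avg(factors, n):
        f = sorted(factors[-n:])
        if len(f) > 2:
            f = f[1:-1]                        # drop the single highest and lowest values
        return sum(f) / len(f)

    def volume_weighted_avg(begin_claims, end_claims, n):
        # begin/end claims are for the same AYs at the beginning and ending ages
        return sum(end_claims[-n:]) / sum(begin_claims[-n:])

    def geometric_avg(factors, n):
        f = factors[-n:]
        product = 1.0
        for x in f:
            product *= x
        return product ** (1.0 / len(f))

    # 12-24 month factors and claims for AYs 2008-2014 (from the triangles above)
    f_12_24   = [1.792, 1.768, 1.762, 1.744, 1.735, 1.719, 1.703]
    claims_12 = [18539, 20410, 22121, 22992, 24093, 24084, 24370]
    claims_24 = [33231, 36091, 38976, 40096, 41795, 41400, 41490]

    print(round(simple_avg(f_12_24, 3), 3))                         # 1.719
    print(round(medial_avg(f_12_24, 5), 3))                         # 1.733
    print(round(volume_weighted_avg(claims_12, claims_24, 3), 3))   # 1.719
    print(round(geometric_avg(f_12_24, 4), 3))                      # 1.725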


If there have been changes in the insurer's internal and/or external environment the actuary should give more
weight to the most recent data to reflect these changes. When determining how many years to include in the
averages the actuary must balance stability and responsiveness. Stability increases as more years are used
while responsiveness increases as fewer years are used.

On the exam we do not want to waste time calculating all of these averages. You will review the age-age factors
and judgmentally select the best average to use. You will only need to calculate the selected average and justify
your selection (e.g., using the last 5 years because of reason X, or excluding year Y because of reason Z). There
will likely be more than one justifiable selection—unless otherwise asked you should generally use a straight
average or weighted average over an appropriate range of years (excluding outliers if necessary).

Step 4 – Select claim development factors

The selected age-age factors represent the expected future development of claims in each time interval. They
are used to project observed claims from one timeframe to the next, and eventually to their ultimate value.

When selecting claim development factors, the actuary should look for the following characteristics:

1) Smooth downward progression of age-age factors across development periods (across ages).
-This is not a definitive rule, but is a common characteristic of most development factors within most
lines of business (i.e., less will generally be paid in the 3rd period than 2nd period, less in the 4th than
3rd period, etc.)
2) Stable age-age factors within development periods (down the columns).
-In general, less mature age-age factors are often less stable since the claims professional has the
least amount of information about the circumstances, potential damages, and injuries arising from
the event. There should be no clear trends down columns, even at less mature ages (i.e., early
volatility should be random).
3) Credibility of the experience.
-If the insurer's own historical experience is limited, the actuary may use experience from similar
lines with similar claims handling practices within the company, or use industry development factors
(outside the company) if considered comparable (e.g., no large differences in claims practices, policy
coverages, underwriting, geographic mix, and policyholder deductibles and/or limits).
4) Changes in patterns.
-Review age-age factors for changes that may suggest changes in internal operations or external
environment (as discussed in Chapter 6), such as trends in age-age factors down the columns.
5) Applicability of historic patterns to the future.
-Consider changes in the book of business, insurer operations, and external factors both observed in
the historic period and those that may not have shown up in the claims experience yet.

The actuary should also consider the prior year’s selected age-age factors when making this year’s selections.
When balancing the conflicting goals of stability and responsiveness, having a reference point is useful when
deciding the extent by which to change the selected factors. The actuary can also compare their prior selections
to the age-age development actually observed in the latest period.

It is important to understand that there may be several reasonable selections and there is almost never a
“correct” age-age selection.


Step 5 - Select tail factor

The tail factor is used to develop claims beyond the oldest evaluation age in the triangle, to ultimate. The age at
which no further development is expected, and thus no tail factor is required, varies greatly by line of business.

Example approaches actuaries can use to estimate tail factors include:

1) Reliance on industry benchmarks.


2) Fit a curve to the selected (or observed) development factors.
3) If reported claims are assumed to be at ultimate value, use the ratio of reported to paid claims at the
oldest age as the tail factor of the paid triangle (since reported claims are assumed to be at ultimate
value, the ratio measures paid development to ultimate value). The reported tail will be 1.00 since it is
assumed to be at ultimate already.

Example of Approach #3

Reported Losses (assumed to be at ultimate value)
72 84 96
50,812 49,102 48,633
54,421 52,660                         Reported Tail Factor: 1.00
58,284

Paid Losses
72 84 96                              Paid Tail Factor:
46,765 47,221 47,447                  48,633 / 47,447 = 1.025
50,162 50,626
53,740
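A tiny Python sketch of approach #3 using the oldest-age values from the exhibit (variable names are my own):

    # 96-month values from the exhibit; reported claims are assumed fully developed.
    reported_at_96 = 48_633
    paid_at_96 = 47_447

    reported_tail = 1.00                        # already at ultimate by assumption
    paid_tail = reported_at_96 / paid_at_96     # remaining development on paid claims
    print(round(paid_tail, 3))                  # 1.025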

Tail factors are critical and influence the projected development of all accident years in the experience period.
Tail factors can also be difficult to estimate due to limited (or less relevant) data, as well as changing conditions
(e.g., the tail factor may increase due to trends in cumulative injury costs or shifts to unlimited medical benefits,
while the tail factor may decrease if lump sum settlements increase because more would be paid earlier).

Step 6 - Calculate cumulative claim development factors (CDFs)

Cumulative claim development factors (a.k.a. age-to-ultimate, age-ult, CDFs) project claims to ultimate value.
To develop the AY 2012 claims (which are at 48 months’ maturity) to ultimate value in our example above we
could first multiply by the 48-60 month factor to bring the value to the 60 month level, then 60-72 factor, then
72-84 factor, then 84-96 factor, then 96-ult to finish at the ultimate value. Alternatively, we can take the product
of the 48-60 through 96-ult factors to get the 48-ult factor and multiply claims at 48 months by that one factor.

To calculate the age-ult factor at each age you first start with the tail factor; the cumulative tail factor is simply
equal to the selected tail factor since this is the factor to develop losses from the latest point in the triangle to
ultimate value. To get the age-ult factor at the preceding age (84 in our example above) you multiply the tail
factor (96-ult factor) by the 84-96 factor, resulting in the 84-ult factor. The 84-ult factor can then be multiplied
by the 72-84 factor to get the 72-ult factor, and so on, all the way back to the 12-ult factor.
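A short Python sketch of this backward calculation (an illustrative helper, not from the text), starting from the tail and multiplying age-to-age factors from oldest to youngest using the selections from this example:

    def cumulative_cdfs(age_to_age, tail):
        # Build age-to-ultimate factors by multiplying backward from the tail,
        # e.g., 84-ult = 1.005 * 1.025, then 72-ult = 1.009 * (84-ult), and so on.
        cdfs = [tail]
        for f in reversed(age_to_age):
            cdfs.insert(0, f * cdfs[0])
        return cdfs

    selected = [1.732, 1.192, 1.091, 1.044, 1.019, 1.009, 1.005]   # 12-24 through 84-96
    print([round(c, 3) for c in cumulative_cdfs(selected, tail=1.025)])
    # close to the exhibit's 2.490, 1.438, 1.207, 1.106, 1.060, 1.040, 1.030, 1.025
    # (small differences come from rounding the selected factors)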

Cumulative paid claim development factors are typically larger than the corresponding cumulative reported
claim development factor at the same age since claims are reported faster than they are paid.

The selected age-age factors will be used to develop claims to ultimate levels. Below you can see the selections
in our example. I have also filled in the bottom half of the age-age triangle—you wouldn't actually do this on the
exam or in practice, but it gives you a visual representation of the method.


Accident Age-to-Age Factors


Year 12 - 24 24 - 36 36 - 48 48 - 60 60-72 72-84 84-96 96-To Ult
2008 1.792 1.206 1.096 1.046 1.019 1.010 1.005 1.025
2009 1.768 1.199 1.090 1.043 1.019 1.009 1.005 1.025
2010 1.762 1.190 1.090 1.043 1.019 1.009 1.005 1.025
2011 1.744 1.191 1.091 1.044 1.019 1.009 1.005 1.025
2012 1.735 1.194 1.089 1.044 1.019 1.009 1.005 1.025
2013 1.719 1.185 1.091 1.044 1.019 1.009 1.005 1.025
2014 1.703 1.192 1.091 1.044 1.019 1.009 1.005 1.025
2015 1.732 1.192 1.091 1.044 1.019 1.009 1.005 1.025

12 - 24 24 - 36 36 - 48 48 - 60 60-72 72-84 84-96 96-To Ult


Selected 1.732 1.192 1.091 1.044 1.019 1.009 1.005 1.025
CDF to Ult. 2.490 1.438 1.207 1.106 1.060 1.040 1.030 1.025

(Examples: 84-ult CDF = 1.005 × 1.025 = 1.030; 72-ult CDF = 1.009 × 1.030 = 1.040.)

Step 7 - Project ultimate claims

Using the development method, our ultimate claim estimate for each accident year is equal to the claims at the
latest valuation (on the latest triangle diagonal) times the cumulative claim development factor at that accident
year's age (as calculated in Step 6). The overall total ultimate claim estimate is the sum of each accident year's
ultimate claim estimate.

The projected unpaid losses are equal to the projected ultimate claims minus the actual paid claims. Similarly,
the projected IBNR is the projected ultimate claims minus the actual reported claims, or equivalently the
projected unpaid claims minus the actual case outstanding (as covered in Chapter 1).

(Column (3) is the latest diagonal of the paid claims triangle.)

Paid Projected
Accident Claims CDF Ultimate
Year Age @12/31/15 to Ult Claims
(1) (2) (3) (4) (5)=(3)*(4)

2008 96 47,447 1.025 48,633


2009 84 50,626 1.030 52,139
2010 72 53,740 1.040 55,872
2011 60 54,363 1.060 57,601
2012 48 54,353 1.106 60,115
2013 36 49,070 1.207 59,208
2014 24 41,490 1.438 59,661
2015 12 25,101 2.490 62,505

Total 376,190 455,734
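A minimal Python sketch of Step 7 using the latest paid diagonal and the CDFs above (values reproduce the exhibit up to rounding of the CDFs):

    # Latest diagonal of paid claims (AY 2008-2015) and age-to-ultimate CDFs from Step 6
    latest_paid = [47447, 50626, 53740, 54363, 54353, 49070, 41490, 25101]
    cdf_to_ult = [1.025, 1.030, 1.040, 1.060, 1.106, 1.207, 1.438, 2.490]

    ultimates = [p * c for p, c in zip(latest_paid, cdf_to_ult)]
    unpaid = [u - p for u, p in zip(ultimates, latest_paid)]

    print(round(sum(ultimates)))   # roughly 455,700, vs. the exhibit's 455,734
    print(round(sum(unpaid)))      # total estimated unpaid claims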


Reporting and Payment Patterns

We can derive implied reporting patterns from the cumulative reported claims development factors and
implied payment patterns from the cumulative paid claims development factors.

% Reported (at age t) = 1 / Cumulative Reported CDF (at age t)

% Paid (at age t) = 1 / Cumulative Paid CDF (at age t)

The incremental % reported from time t to t+12 is then the cumulative reported % at t+12 minus the cumulative
reported % at t.

% Reported (t, t+12) = % Reported (t+12) − % Reported (t)

The same logic holds for calculating the incremental % paid.
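A brief Python sketch of deriving the implied cumulative and incremental payment pattern from cumulative CDFs (here using the paid CDFs from the development example above; the same code applies to reported CDFs and the reporting pattern):

    # Cumulative paid CDFs by age in months
    cdfs = {12: 2.490, 24: 1.438, 36: 1.207, 48: 1.106,
            60: 1.060, 72: 1.040, 84: 1.030, 96: 1.025}

    cum_paid = {age: 1.0 / cdf for age, cdf in cdfs.items()}        # % paid = 1 / CDF
    ages = sorted(cum_paid)
    incremental = {f"{a}-{b}": cum_paid[b] - cum_paid[a]            # % paid from age a to b
                   for a, b in zip(ages, ages[1:])}

    print({a: round(p, 3) for a, p in cum_paid.items()})
    print({k: round(v, 3) for k, v in incremental.items()})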

Observations and Common Relationships

Claim development factors will generally be larger for the most recent (most immature) accident years and
smaller for older (more mature) accident years. As a result, IBNR will often be largest in the most recent
accident years. This makes sense because as the accident years mature, more claims will be reported and paid,
moving the IBNR and unpaid loss estimate closer to 0.

Development factors tend to increase as retentions (limits in the underlying data) increase because large claims
tend to be reported later; thus at higher retentions (limits) larger losses will be observed later in the triangle,
resulting in higher claims development factors. As such, retention must be taken into consideration in
estimating ultimate losses and deciding how to segment your data into triangles. Ignoring retentions will lead
to errors in both reserving and ratemaking.

Conclusion

The main assumption of the development method is that claim development patterns to date will persist in a
similar manner in the future (i.e., future development patterns will be like historic development patterns). This
leads to an implicit assumption that claims observed thus far in an accident year predict the claims that are yet
to be observed for that accident year. That is to say the expected future development is entirely dependent on
the selected development factors and the claims to date— if claims reported to date are high, future
development will also be high.

The reported claims development method implicitly assumes there have been no significant changes in case
adequacy levels during the experience period. The paid claims development method implicitly assumes there
are no material changes in the speed of claim closure and payment. If either case adequacy or payment speeds
change, past age-age factors will not hold true in the future. Therefore, the development method is an
appropriate method for insurers in a relatively stable environment with no major organizational changes (e.g.,
new claims processing system, or method of setting case reserves) or external environmental changes (e.g., tort
reform). The development method requires a large volume of historical data and works best when data is not
distorted by large claims. Benchmark claim development patterns are often used when insufficient company
data is available. The development method is particularly suitable for high frequency-low severity lines of
business. If the assumptions of the method do not hold, the actuary should consider alternative techniques (or
at least adjust the selected claim development factors to account for changes).

Another implicit assumption of the development method is that claims are evenly spread throughout the period
because the method implicitly develops losses from the mid-point of one period to the mid-point of the next. If
development factor selections are based on historic periods with evenly spaced claims, but the latest accident
year has a distribution of losses shifted later due to exposure growth, the mid-point assumption will not hold
and the ultimate loss estimate for the latest accident year will be understated. Additional implicit assumptions
of the method include: consistent claim processing, stable mix of claim types, stable policy limits, and stable
reinsurance retentions. A change in any of these would result in a change in reporting (or payment) patterns,
violating the main assumption.

Advantages:
It is the most basic method, and works well if the book of business is stable, especially for low severity high
frequency lines of business (and when other assumptions hold). The CDFs and claim estimates from the
development method are inputs to many other methods.
Disadvantages:
It will not be accurate if reality is different than the assumptions of the method, which is often the case.
Additionally, long tailed lines of business, such as worker’s compensation and general liability, may have very
large cumulative development factors (especially using paid data). It is not uncommon for cumulative claim
development factors for the most recent periods to be greater than 10.0. This results in the ultimate claim
estimate being very sensitive to the current observed values (because they are multiplied by a very large CDF
to get the ultimate loss estimate). The ultimate loss estimate may change dramatically simply due to the
presence or absence of a single large claim. A relatively small swing in reported (or paid) claims could result in
a very large swing in the projected ultimate claims. When cumulative claim development factors are very large,
they are said to be “highly leveraged”. When cumulative claim development factors are highly leveraged it is
often advisable to use an alternative method for estimating ultimate claim levels (at least for the most immature
years) since the development method estimate will be subject to significant volatility.


Summary of the 7 steps of the development method—for your review.

Accident Paid Claims as of (months)
Year 12 24 36 48 60 72 84 96
2008 18,539 33,231 40,062 43,892 45,897 46,765 47,221 47,447
2009 20,410 36,091 43,259 47,159 49,209 50,162 50,626
2010 22,121 38,976 46,389 50,562 52,735 53,740
2011 22,992 40,096 47,768 52,094 54,363
2012 24,093 41,795 49,904 54,353
2013 24,084 41,400 49,070
2014 24,370 41,490
2015 25,101

Accident Age-to-Age Factors

Year 12 - 24 24 - 36 36 - 48 48 - 60 60-72 72-84 84-96 96-To Ult
2008 1.792 1.206 1.096 1.046 1.019 1.010 1.005
2009 1.768 1.199 1.090 1.043 1.019 1.009
2010 1.762 1.190 1.090 1.043 1.019
2011 1.744 1.191 1.091 1.044
2012 1.735 1.194 1.089
2013 1.719 1.185
2014 1.703

(Example calculations: Latest 3 volume-weighted 12-24 = (41,795 + 41,400 + 41,490) / (24,093 + 24,084 + 24,370) = 1.719; Latest 3 simple average 24-36 = (1.191 + 1.194 + 1.185) / 3 = 1.190; Latest 5 medial 12-24 = (1.744 + 1.735 + 1.719) / 3 = 1.733; Latest 4 geometric 48-60 = (1.046 × 1.043 × 1.043 × 1.044)^(1/4) = 1.044.)

12 - 24 24 - 36 36 - 48 48 - 60 60-72 72-84 84-96 96-To Ult


Simple Average
Latest 5 1.732 1.192 1.091 1.044 1.019 1.009 1.005
Latest 3 1.719 1.190 1.090 1.043 1.019 1.009 1.005
Medial Average
Latest 5x1 1.733 1.192 1.090 1.044 1.019 1.009 1.005
Volume-weighted Average
Latest 5 1.732 1.192 1.091 1.044 1.019 1.009 1.005
Latest 3 1.719 1.190 1.090 1.043 1.019 1.009 1.005
Geometric Average
Latest 4 1.725 1.190 1.090 1.044 1.019 1.009 1.005

12 - 24 24 - 36 36 - 48 48 - 60 60-72 72-84 84-96 96-To Ult


Selected 1.732 1.192 1.091 1.044 1.019 1.009 1.005 1.025
CDF to Ult. 2.490 1.438 1.207 1.106 1.060 1.040 1.030 1.025

(Column (3) is the latest diagonal of the paid claims triangle. Example CDFs: 84-ult = 1.005 × 1.025 = 1.030; 72-ult = 1.009 × 1.030 = 1.040.)

Paid Projected
Accident Claims CDF Ultimate
Year Age @12/31/15 to Ult Claims
(1) (2) (3) (4) (5)=(3)*(4)

2008 96 47,447 1.025 48,633


2009 84 50,626 1.030 52,139
2010 72 53,740 1.040 55,872
2011 60 54,363 1.060 57,601
2012 48 54,353 1.106 60,115
2013 36 49,070 1.207 59,208
2014 24 41,490 1.438 59,661
2015 12 25,101 2.490 62,505

Total 376,190 455,734


Estimating Unpaid Claims Using Basic Techniques

Chapter 8 – Expected Claims Technique


Introduction

The expected claims technique ignores claims to date completely and sets the ultimate claim estimate equal to
an a-priori (initial) estimate. This a-priori considers exposure levels and assumes that claims vary in
proportion to exposure. For example, assume based on ratemaking assumptions we anticipate the accident
year 2016 claims ratio will be 70% (e.g., PY 2015 is priced to a 72% claims ratio, PY 2016 is priced to 68%,
policies are all annual, and there is no growth). At the end of the year we need to estimate unpaid claims for AY
2016 (in addition to all prior years). We will have 12 months’ worth of reported and paid claims data, but
suppose we choose to ignore it—we could instead estimate AY 2016 ultimate claims as the product of the 70%
expected claims ratio and the premium earned during the period, say $5,000,000. We can then find the unpaid
claims by subtracting paid claims from our $3,500,000 ultimate claim estimate. Similarly, we can find IBNR by
subtracting reported claims from our ultimate claims estimate of $3,500,000. This is a simple example of the
expected claims technique—two alternative implementations are covered below (including the mechanics
used in the text exhibits and most exam problems). This is the exact opposite of the development technique
whose ultimate claim estimate is based entirely on experience to date. The method assumes actual claims to
date do not provide information about future development or their ultimate value.

The text gives the following three examples of ways an actuary can set the a-priori ultimate claim estimate:

1) Multiply the ratemaking expected claims ratio (ECR) by earned premium in the period—as covered in
the opening example.
2) Estimate expected claims using a complex simulation model requiring extensive expert input. (No
specifics are needed for the exam, this is just given as an example)
3) Estimate the claims ratios for each AY by developing, trending, and adjusting actual claims from each
other AY for benefit level changes, then dividing by earned premium adjusted to the target year's rate
level. This is the method used in most of the examples in the text and is tested the most.

The expected claims estimate is most commonly used in the following situations:

1) An insurer enters a new line of business or a new territory, therefore data is unavailable for other
methods.
2) Historical data becomes irrelevant for projecting future claims (often due to operational or
environmental changes; e.g., the statute of limitations for filing a claim has been increased or coverage
has been expanded).
3) When the line of business being evaluated is very long tailed and the development method is highly
leveraged (high CDFs).
4) As an input to the Bornhuetter-Ferguson technique (Chapter 9).

Mechanics for Estimating an A-Priori Ultimate Loss

Though a ratemaking ECR or the results from a sophisticated simulation model may be used in practice
(methods 1 & 2 above) the text examples and most exam problems use the actual experience of other AYs
contained in the triangle to develop an ECR for each accident year separately.

To estimate the ECR for a target accident year (e.g., 2016) we can use experience from all available years
(excluding 2016), after adjusting each other year to the claim and premium level of the target year (2016). In
the following example, we will estimate the 2016 ECR and a-priori ultimate claim estimate (an a-priori ultimate
loss estimate will need to be estimated separately for each year, but we will focus on the latest year for now).
The first input we need is an initial ultimate claim estimate. The text uses an average of the ultimate claim
estimates from the paid and reported development method (from Chapter 7). For this problem, assume the
development technique is already complete and the average of the paid and reported ultimate loss estimates is
given in Column (2).


To use claim information from other years to estimate 2016 we must first adjust claims for trend and tort
reform differences. We use a trend of 3.50% and the tort reform adjustment factors given in Column (4). The
initial ultimate claims are multiplied by the trend and tort reform adjustment factors to adjust the claims from
each AY to the AY 2016 level in Column (5). Similarly, we adjust earned premium in Column (6) by a rate level
adjustment factor (RLAF) in Column (7) to adjust premium from each year to the 2016 level in Column (8).
Now the claims in Column (5) and premiums in Column (8) from each year have been adjusted to the 2016
level, and we can calculate the adjusted claims ratio at the 2016 level in Column (9). We select 67.3%, the average
of AYs 2010-2015, as the 2016 ECR. Our a-priori AY 2016 ultimate claim estimate is then 67.3% times the 2016
(calendar) accident year earned premium of $110,263, or $74,207.

Initial Trend at Adj. Trended On-Level Trended


Accident Ultimate 3.50% for Tort Adj. Ultimate Earned Rate Level Earned Adjusted
Year Claims to 7/1/16 Reform Claims Premium Adj. Factor Premium Claim Ratio
(1) (2) (3) (4) (5) (6) (7) (8) (9)
= (2)*(3)*(4) = (6)*(7) = (5)/(8)
2010 48,953 1.229 0.670 40,317 61,183 0.914 55,921 72.1%
2011 47,404 1.188 0.670 37,722 69,175 0.870 60,182 62.7%
2012 77,662 1.148 0.670 59,709 99,322 0.905 89,886 66.4%
2013 78,496 1.109 0.670 58,310 108,151 0.750 81,113 71.9%
2014 65,227 1.071 0.750 52,405 107,578 0.750 80,684 65.0%
2015 62,970 1.035 1.000 65,174 109,874 0.900 98,887 65.9%
2016 61,267 1.000 1.000 61,267 110,263 1.000 110,263 55.6%

(10) Selected 2016 ECR 67.3%


(11) 2016 Earned Premium 110,263
(12) = (10)*(11) AY 2016 A-Priori Estimate 74,207

The AY 2016 adjusted claim ratio (55.6% above, crossed out in the original exhibit) is excluded because we should
not use a year to predict itself—this isn't clear in the text, but the CAS does take away points if you include the
target year's claim ratio in the average used for the selection.
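A condensed Python sketch of the calculation in the exhibit above for target year 2016 (inputs are the exhibit's initial ultimates, tort factors, earned premiums, rate level factors, and the 3.50% trend; the loop structure is my own):

    years    = [2010, 2011, 2012, 2013, 2014, 2015, 2016]
    init_ult = [48953, 47404, 77662, 78496, 65227, 62970, 61267]
    tort     = [0.670, 0.670, 0.670, 0.670, 0.750, 1.000, 1.000]
    ep       = [61183, 69175, 99322, 108151, 107578, 109874, 110263]
    rlaf     = [0.914, 0.870, 0.905, 0.750, 0.750, 0.900, 1.000]
    trend, target = 1.035, 2016

    adj_cr = {}
    for yr, ult, t, p, r in zip(years, init_ult, tort, ep, rlaf):
        trended_ult = ult * trend ** (target - yr) * t    # claims at the 2016 level
        onlevel_ep = p * r                                # premium at the 2016 rate level
        adj_cr[yr] = trended_ult / onlevel_ep

    # Exclude the target year from its own selection (average of AY 2010-2015)
    ecr_2016 = sum(v for yr, v in adj_cr.items() if yr != target) / (len(adj_cr) - 1)
    apriori_2016 = ecr_2016 * ep[years.index(target)]
    print(round(ecr_2016, 3), round(apriori_2016))
    # about 0.673 and about 74,200; the exhibit's 74,207 uses the ECR rounded to 67.3%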

If the data needed to adjust earned premiums is unavailable, earned premiums would be at historic levels, so
we would also want to leave claims at historic levels (i.e., no trend or tort adjustment). Actuarial judgment in
making the ECR selection in such a scenario becomes more important (see Exhibit II, Sheet 1 of the exhibits).

Now let’s estimate the AY 2014 a-priori ultimate claim estimate using the same data. The mechanics are
identical, except now we adjust claims and premiums to the 2014 level. A “shortcut” to recalculating trend,
tort, and on-level factors at a 2014 level, given the 2016 factors, is to realize they are relativities and reset the
base to 2014 by dividing each factor in the column by the 2014 value of that factor. For example, the AY 2016
trend factor adjustment of 0.934 below (in the 2014 example) is simply 1.000/1.071 (from the 2016 example).
In this example, claims in Column (5), on-level premiums in Column (8), and adjusted claims ratios in Column
(9) are all at the 2014 level. Again, we want to exclude the target year's claim ratio from our selection— in this
case we select the average of AYs 2010-2016 excluding 2014, or 61.4%. The AY 2014 a-priori ultimate claim
estimate is then the selected ECR (61.4%) times the 2014 earned premium ($107,578), or $66,053.


Initial Trend at Adj. Trended On-Level Trended


Accident Ultimate 3.50% for Tort Adj. Ultimate Earned Rate Level Earned Adjusted
Year Claims to 7/1/14 Reform Claims Premium Adj. Factor Premium Claim Ratio
(1) (2) (3) (4) (5) (6) (7) (8) (9)
= (2)*(3)*(4) = (6)*(7) = (5)/(8)
2010 48,953 1.148 0.893 50,182 61,183 1.219 74,562 67.3%
2011 47,404 1.109 0.893 46,951 69,175 1.160 80,243 58.5%
2012 77,662 1.071 0.893 74,319 99,322 1.207 119,849 62.0%
2013 78,496 1.035 0.893 72,578 108,151 1.000 108,151 67.1%
2014 65,227 1.000 1.000 65,227 107,578 1.000 107,578 60.6%
2015 62,970 0.966 1.333 81,121 109,874 1.200 131,849 61.5%
2016 61,267 0.934 1.333 76,257 110,263 1.333 147,017 51.9%

(10) Selected 2014 ECR 61.4%


(11) 2014 Earned Premium 107,578
(12) = (10)*(11) AY 2014 A-Priori Estimate 66,053
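A small Python sketch of the rebasing shortcut described above, applied to the trend column (the same one-line division works for the tort and rate level columns):

    # Trend-to-7/1/16 factors by accident year, from the 2016 exhibit
    trend_to_2016 = {2010: 1.229, 2011: 1.188, 2012: 1.148, 2013: 1.109,
                     2014: 1.071, 2015: 1.035, 2016: 1.000}

    # Reset the base to 2014: divide each factor by the 2014 factor
    trend_to_2014 = {yr: f / trend_to_2016[2014] for yr, f in trend_to_2016.items()}
    print({yr: round(f, 3) for yr, f in trend_to_2014.items()})
    # e.g., 2016 -> 0.934, 2015 -> 0.966, 2013 -> 1.035 (matches the 2014 exhibit up to rounding)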

This process would be repeated for each AY of interest—older AYs are often selected as a group. On the exam,
you would want to calculate the expected claims method estimate as above, but in practice it is common to
adjust everything to the level of the latest year, make a selection, then adjust that selection back to the level of
each preceding year. It is important to notice this nuance, as students who are used to doing it differently in
practice often overlook this point.

Though we would theoretically want to also trend premium, the text never mentions this or includes it in any
examples. As such, you are unlikely to need to trend premiums in an expected claims method question on the
exam, though a past question did ask about adjustments you should make to premium, and trending was one
of the expected answers.

Self-insured organizations do not collect premium, so we must use another exposure base— one that closely
relates to the risk (potential for claims), is readily observable, and available. Examples include payroll for
workers’ compensation, miles driven for auto liability, or property value for property insurance. When using
an alternative exposure base, we will estimate the pure premium per unit of exposure rather than a claims
ratio. The pure premium per exposure is then multiplied by the exposure of the target year to estimate the a-
priori ultimate claims.

In the following example, we will estimate the a-priori ultimate loss estimate for AY 2016 for a company that
self-insures its workers' compensation exposure. The mechanics are very similar to the prior examples,
except for the difference in exposure used. In the case of workers' compensation, the best and most
commonly used exposure base (when premium is unavailable) is payroll. Since payroll is inflation sensitive
we need to trend it to the year being evaluated; 2016 in this case. If the exposure base being used is not
inflation sensitive (e.g., miles driven or number of employees) then no exposure trend is needed (but we still
need to trend claims).


Initial Trend at Adj. Trended Trend at Trended


Accident Ultimate 3.00% for Tort Adj. Ultimate Payroll 2.50% Payroll Adjusted
Year Claims to 7/1/16 Reform Claims (00's) to 7/1/16 (00's) Pure Premium
(1) (2) (3) (4) (5) (6) (7) (8) (9)
= (2)*(3)*(4) = (6)*(7) = (5)/(8)
2010 68,710 1.194 0.750 61,532 16,211 1.160 18,799 3.273
2011 69,648 1.159 0.750 60,556 17,064 1.131 19,306 3.137
2012 72,759 1.126 0.750 61,418 17,962 1.104 19,827 3.098
2013 62,131 1.093 1.000 67,892 18,907 1.077 20,361 3.334
2014 65,546 1.061 1.000 69,538 19,902 1.051 20,910 3.326
2015 67,878 1.030 1.000 69,914 20,950 1.025 21,474 3.256
2016 68,267 1.000 1.000 68,267 22,053 1.000 22,053 3.096

(10) Selected 2016 Claim Cost 3.237


(11) 2016 Exposure (Payroll) 22,053
(12) = (10)*(11) AY 2016 A-Priori Estimate 71,384

Conclusion

The method assumes actual claims to date provide no information about future development and ultimate
claims.

Optimally, we will trend claims and adjust them for any tort (or benefit level) changes, and adjust
premiums for differences in rate levels (on-level). If using an alternative exposure base, we want to trend it if
it is inflation sensitive, but not if it is not inflation sensitive. It is important to select trends
based on the segment being evaluated (e.g., line of business, state(s), and years). Claim trends can vary wildly
by line of business, state, and years evaluated.

It is slightly confusing because the text repeatedly states the expected claims technique is independent of actual
claims, but actual claims are used (after adjustments) when setting the a-priori this way. To reconcile this,
remember we only use actual claims from other AYs, and not actual claims from the target AY, to estimate the a-
priori ultimate claims for the target AY, so each target year is independent of its actual claims to date.

The advantage of the expected claims method is stability. Since actual claims (of the given AY) do not enter into
the calculations, the estimate does not change unless the exposure or ECR assumptions are changed. This
stability, or lack of responsiveness, is also the method's disadvantage—it does not respond if actual claims
experience differs from the initial expectations. The actuary can counteract this disadvantage by ensuring the
exposure and ECR assumptions are up to date and accurate; in such a situation, the expected claims method
can actually be more responsive than data dependent methods (because the actuary is reflecting changes
before they are fully manifested in the data).

Whether the expected claims technique is accurate or not depends on the ability of the actuary to obtain an
accurate exposure base and accurately estimate expected claims relative to that exposure base for each year,
especially if there have been changes to the company's operating environment (e.g., tort reform, operational,
environmental). The actuary may be able to turn to industry benchmarks to assist in selecting claim ratios, pure
premiums, and claim development factors if data is unavailable or sparse (see Chapter 3 for considerations
when using industry data).

Be sure to review the supplementary Excel exhibits and comments as they expand upon the ideas covered in the
manual.


Estimating Unpaid Claims Using Basic Techniques

Chapter 9 – Bornhuetter-Ferguson Technique


Introduction

The expected claims technique covered in Chapter 8 sets the ultimate claim estimate equal to an a-priori
estimate (which is proportional to exposures), regardless of whether actual reported (or paid) claims come in
better or worse than expected. Remembering that ultimate claims can be broken down into Reported Claims + IBNR
(or Paid Claims + Unpaid Claims), the BF method instead sets IBNR (or unpaid claims) equal to an expected amount, but
adds actual reported (or paid) claims to estimate ultimate claims.

The BF method essentially gives credit for the top half of the triangle, but sets the bottom half of the triangle
equal to expected claims. By contrast, the development method uses the top half to predict the bottom half,
giving it full credit, while the expected claims method sets ultimate claims independently from the top half,
giving it no credit.

In the BF method, the expected amount of IBNR (or unpaid claims) is equal to the a-priori ultimate loss estimate
(from the expected claims method) times the percentage unreported (or unpaid). From Chapter 7 we
know % Unreported = 1 − 1/CDF (reported) and % Unpaid = 1 − 1/CDF (paid), so after calculating the a-priori ultimate
claim estimate and cumulative development factors the BF method ultimate calculation is very
straightforward—we simply add our IBNR (or unpaid) estimate to actual reported (or paid) claims.

B-F Mechanics

The BF method can be used with reported claims (reported BF method) or paid claims (paid BF method)—the
exam may ask for one or the other, but in practice both will be calculated (assuming the data is available). Note,
the a-priori ultimate loss estimate will be the same for both the reported and paid BF methods, and is equal to
the expected claims method estimate.

Another way to think of the BF method is as a credibility weighted average of the development technique and
the expected claims techniques ultimate claim estimates. Examining the reported BF method, we can see:

BF Ult. = Z × Development Method Ult. + (1 − Z) × Expected Claims Ult.

If we set the credibility given to the development method equal to the expected percentage reported to date,
then Z = % Reported = 1/CDF and (1 − Z) = % Unreported = 1 − 1/CDF. Remembering that
Development Method Ult. = Reported Claims × CDF, the first term simplifies to the reported
claims and the second term becomes the percent unreported times the expected claims ultimate estimate:

BF Ult. = % Reported × Development Method Ult. + % Unreported × Expected Claims Ult.

BF Ult. = (1/CDF) × (Reported Claims × CDF) + % Unreported × Expected Claims Ult.

BF Ult. = Reported Claims + % Unreported × Expected Claims Ult.

The resulting formula is exactly the formula explained in the introduction. The equations for the paid BF
method are identical, but we use paid claims, paid CDFs, and percentage unpaid.

Below is a step by step example using reported claims (again, for the paid BF method substitute paid claims,
paid CDFs, and percentage unpaid and the mechanics are identical). The reported development method
ultimate estimate is developed in Col (4), assuming the age-age factors have already been selected. We calculate
the percentage unreported (the credibility given to the expected claims technique) in Col (5). The a-priori
expected ultimate claims are given in Col (6)—these would be estimated as done in Chapter 8. The reported BF
IBNR estimate in Col (7) is then the expected percentage unreported times the a-priori expected ultimate
claims. The reported BF ultimate claim estimates in Col (8) are then the reported claims to date plus the
expected percentage unreported times expected ultimate claims (reported claims + IBNR).

Alternatively, using the formulas above, we could have calculated the reported BF ultimate estimate, using the
credibility formula, as the percentage reported times the reported development ultimate estimate plus the
percentage unreported times the expected ultimate claims—but given all the information that is a longer
calculation. I suppose they could just give you the reported development ultimate estimates on the exam
[Column (4) only rather than (2) and (3)], in which case you could use the long formula, or use the percentage
reported to back into reported claims and apply the standard formula (They would have to give you (5) or (3)
to do the problem, and given one we can find the other).

Reported Reported Reported Expected Bornhuetter- Bornhuetter-


Accident Claims CDF to Development % Ultimate Ferguson Ferguson
Year @ 12/31/16 Ultimate Method Ult. Unreported Claims IBNR Estimate Ult. Estimate
(1) (2) (3) (4) (5) (6) (7) (8)
=(2)*(3) =1-1/(3) =(5)*(6) =(2)+(5)*(6)
2006 318,282 1.000 318,282 0.0% 316,673 0 318,282
2007 341,238 1.000 341,238 0.0% 336,405 0 341,238
2008 355,586 1.000 355,586 0.0% 323,584 0 355,586
2009 365,330 1.001 365,696 0.1% 354,590 354 365,685
2010 390,618 1.003 391,790 0.3% 379,210 1,134 391,752
2011 373,769 1.006 376,012 0.6% 359,541 2,144 375,913
2012 379,844 1.011 384,023 1.1% 377,331 4,105 383,950
2013 378,576 1.023 387,283 2.2% 394,991 8,881 387,457
2014 364,276 1.051 382,854 4.9% 386,624 18,761 383,037
2015 325,690 1.110 361,516 9.9% 396,114 39,255 364,945
2016 319,024 1.292 412,179 22.6% 416,622 94,159 413,183
Total 3,912,234 4,076,459 4,041,683 168,793 4,081,028
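A compact Python sketch of the reported BF calculation (illustrative function name), checked against the AY 2016 row of the exhibit above; the same function works for the paid BF using paid claims and paid CDFs:

    def bf_estimate(actual_claims, cdf, apriori_ultimate):
        # Bornhuetter-Ferguson: actual claims plus expected IBNR, where expected
        # IBNR is the a-priori ultimate times the expected percentage unreported.
        pct_unreported = 1.0 - 1.0 / cdf
        ibnr = pct_unreported * apriori_ultimate
        return ibnr, actual_claims + ibnr

    # AY 2016 values from the exhibit
    ibnr, ultimate = bf_estimate(actual_claims=319_024, cdf=1.292, apriori_ultimate=416_622)
    print(round(ibnr), round(ultimate))   # about 94,159 and 413,183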

Since the credibility given to the development technique is Z = 1/CDF = % Reported (i.e., 1.0 − Col (5) above),
the weight given to the development technique will increase for more mature years, resulting in more weight
going to the estimate based on actual experience, which is desirable.

There are situations where CDFs can be < 1.0— most commonly when examining reported claims, and often
due to:

1) Salvage or subrogation.
2) Conservative case reserves (which will decrease to the true amount over time).

If the CDF is < 1.0 then the credibility is greater than 1.0 (Z = 1/CDF > 1.0), which is theoretically incorrect. The text
recommends limiting CDFs to 1.0, for the purposes of this method, so that 0 ≤ Z ≤ 1.0. In practice, many
actuaries will allow CDFs to be < 1.0 and Z ≥ 1.0 in the BF method. On the exam, I would limit the CDFs to 1.0 to
keep Z ≤ 1.0 as that is recommended by the text and done in the examples.

Bornhuetter -Ferguson Summary

The primary assumption of the reported BF method is that unreported claims will emerge in accordance with
expected claims. Unreported claims are based entirely on expected claims, so the method assumes claims
reported to date contain no information that can be used in predicting unreported claims (as opposed to the
expected claims method assumption that reported claims to date contain no information that can be used to
predict ultimate claims). These are the opposite of the development method, whose primary assumption is that
unreported claims are fully predicted by claims reported to date (unreported claims = [CDF-1.0] x reported
claims). Again, the same holds true for the paid BF method and unpaid claims.


The advantage of the BF method is that it is more responsive to actual claims than the expected claims method
(whose ultimate estimate ignores actual claims) and more stable than the development method (whose
ultimate estimate is completely based on actual claims).

Benktander Technique (Iterative BF)

The BF IBNR estimate is entirely based on an expected amount and completely ignores actual experience. The
Benktander method’s IBNR estimate blends actual and expected experience. It accomplishes this by re-running
the BF method, but using the BF ultimate estimate from the first run as the a-priori ultimate claims in place of
the expected claims method in the second iteration. The Benktander method is primarily used with reported
claims, but can also be used with paid claims. My examples will focus on reported claims, IBNR, and percentage
unreported, but also hold for paid claims, unpaid claims, and percentages paid.

Benktander Ult. = Reported Claims + % Unreported × BF Ult.

The following example shows the reported Benktander calculation. Columns (1) – (8) are identical to the BF
example covered earlier. In Column (9), we calculate the Benktander IBNR estimate as the expected percentage
unreported times the BF Ultimate estimate. The reported Benktander ultimate claims estimate in Col (10) are
then the reported claims to date plus the expected percentage unreported times the BF ultimate estimate
(reported + IBNR). As you can see, the mechanics are identical to the BF method, but we instead use the BF
ultimate estimate as the a-priori rather than the expected claims method estimate in the 2nd iteration.

Reported Reported Reported Expected Bornhuetter- Bornhuetter-


Accident Claims CDF to Development % Ultimate Ferguson Ferguson Benktander Benktander
Year @ 12/31/16 Ultimate Method Ult. Unreported Claims IBNR Estimate Ult. Estimate IBNR Estimate Ult. Estimate
(1) (2) (3) (4) (5) (6) (7) (8) (9) (10)
=(2)*(3) =1-1/(3) =(5)*(6) =(2)+(5)*(6) =(5)*(8) =(2)+(5)*(8)
2006 318,282 1.000 318,282 0.0% 316,673 0 318,282 0 318,282
2007 341,238 1.000 341,238 0.0% 336,405 0 341,238 0 341,238
2008 355,586 1.000 355,586 0.0% 323,584 0 355,586 0 355,586
2009 365,330 1.001 365,696 0.1% 354,590 354 365,685 365 365,696
2010 390,618 1.003 391,790 0.3% 379,210 1,134 391,752 1,172 391,790
2011 373,769 1.006 376,012 0.6% 359,541 2,144 375,913 2,242 376,011
2012 379,844 1.011 384,023 1.1% 377,331 4,105 383,950 4,177 384,022
2013 378,576 1.023 387,283 2.2% 394,991 8,881 387,457 8,711 387,287
2014 364,276 1.051 382,854 4.9% 386,624 18,761 383,037 18,587 382,863
2015 325,690 1.110 361,516 9.9% 396,114 39,255 364,945 36,166 361,856
2016 319,024 1.292 412,179 22.6% 416,622 94,159 413,183 93,382 412,406
Total 3,912,234 4,076,459 4,041,683 168,793 4,081,028 164,802 4,077,037
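A brief Python sketch of the Benktander method as an iterated BF (illustrative helper), re-running the BF formula with the first-pass BF ultimate as the new a-priori, again checked against AY 2016:

    def bf_ultimate(actual, cdf, apriori):
        # One BF pass: actual claims plus % unreported times the a-priori ultimate.
        return actual + (1.0 - 1.0 / cdf) * apriori

    reported, cdf, expected_ult = 319_024, 1.292, 416_622
    bf_ult = bf_ultimate(reported, cdf, expected_ult)       # Bornhuetter-Ferguson
    benktander_ult = bf_ultimate(reported, cdf, bf_ult)     # BF re-run with the BF a-priori
    print(round(bf_ult), round(benktander_ult))             # about 413,183 and 412,406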

The advantage of the Benktander method is that its IBNR estimate is a blend of actual and expected claims (the
BF ultimate estimate times the percentage unreported). The Benktander estimate is more responsive than the BF
method (whose IBNR estimate ignores actual claims) and more stable than the development method (whose IBNR
estimate is completely based on actual claims). Ranking the methods in terms of responsiveness to the actual
data:

Development Technique > Benktander > Bornhuetter-Ferguson > Expected Claims Method

The rankings of stability will be just the opposite of the above ranking.

It is worth noting the Benktander method will approach the development technique estimate after sufficient
iterations are performed (i.e., re-run the Benktander using the ultimate estimate from the first Benktander as
the a-priori, and so on) since for each iteration the IBNR estimate will give more weight to the actual claims.


Estimating Unpaid Claims Using Basic Techniques

Chapters 7-9 – Wrap Up Discussion


The development method, expected claims method, and Bornhuetter-Ferguson method are the most basic and
universally used methods—you will not see a standard reserve review without these methods. As such it is
crucial to understand the assumptions and accuracy of each. Now that we have reviewed each I want to
compare and contrast the methods.

Review

The development method uses the age-age relationships from the top half of the triangle to predict the bottom
half.

The expected claims method is set independently from the top half of the triangle.

The BF method ultimate estimate accounts for actual claims in the top half of the triangle, but predicts the
bottom half of the triangle based on the a-priori ultimate claims (often from the expected claims technique).

In his review of the original BF paper, Hugh White raises the following hypothetical question: You are trying to
estimate unpaid claims and reported claims to date come in higher than expected—should you:
1) Reduce IBNR by a corresponding amount because you believe the higher reported claims are a result
of accelerated reporting or case strengthening and you don’t expect ultimate claims to change?
2) Leave the IBNR unchanged because you believe the increase is due to random fluctuations, so ultimate
claims will increase by the additional amount reported, but IBNR is unaffected?
3) Increase the IBNR in proportion to the increase in reported claims because we believe the observed
data is more predictive of future claims than our a-priori?

(The question also holds when reviewing paid claims—replace reported claims with paid claims, and IBNR
with unpaid)


Answer #1 is the expected claims method, answer #2 is the BF method, and answer #3 is the development
method. None of the three answers are satisfactory without further examining the reason for the increase
in reported claims first. You would need to talk to other departments and examine diagnostics to try and
determine which case is most appropriate in the given situation (i.e., faster reporting/case strengthening
leads to #1, random fluctuation leads to #2, or being indicative of the future leads to #3).

Next I want to examine an example illustrating the effects of claims coming in higher or lower than expected
on each method. In the following example, we expect ultimate claims of $3.0M with 1/3 of the claims
reported in the first year (i.e., CDF = 3.0). The first scenario below shows what happens to the estimate of each
method if actual claims come in lower than expected, while the second scenario shows each when claims come
in higher than expected.

Expected claims: $1.0M reported to date, $2.0M expected IBNR, $3.0M expected ultimate.

Scenario 1 – Actual reported lower than expected ($0.75M):
Expected Claims Method: IBNR $2.25M, ultimate $3.0M
Development Method: IBNR $1.5M, ultimate $2.25M
BF Method: estimated IBNR $2.0M, ultimate $2.75M

Scenario 2 – Actual reported higher than expected ($1.25M):
Expected Claims Method: IBNR $1.75M, ultimate $3.0M
Development Method: IBNR $2.5M, ultimate $3.75M
BF Method: estimated IBNR $2.0M, ultimate $3.25M

The ultimate claims estimate under the expected claims method is unchanged regardless of experience to
date—the method's IBNR estimate changes to keep the ultimate estimate equal to the initial ultimate estimate.

The development method's ultimate estimate is the most sensitive to actual claims to date since its ultimate
estimate is actual claims times the fixed CDF.

The BF method's sensitivity to actual claims falls between the other two methods. The BF ultimate estimate
reflects actual claims to date, but its IBNR estimate is unaffected by higher or lower actual claims. In this
example, the BF IBNR estimate will always be $2.0M based on expected claims ([1.0 − 1.0/3.0] x $3M).
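A short Python sketch reproducing the comparison above: each method's ultimate (in $M) when actual reported claims come in at $0.75M or $1.25M against an expectation of $1.0M (CDF = 3.0, expected ultimate $3.0M):

    def method_ultimates(actual_reported, cdf, expected_ultimate):
        # Ultimate estimates under the three methods for a single year.
        return {
            "expected claims": expected_ultimate,                            # ignores actual claims
            "development": actual_reported * cdf,                            # actual times CDF
            "BF": actual_reported + (1.0 - 1.0 / cdf) * expected_ultimate,   # actual + expected IBNR
        }

    for actual in (0.75, 1.25):
        print(actual, method_ultimates(actual, cdf=3.0, expected_ultimate=3.0))
    # 0.75 -> expected claims 3.0, development 2.25, BF 2.75
    # 1.25 -> expected claims 3.0, development 3.75, BF 3.25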

Accuracy Under Changing Conditions

Reported claims could be higher (lower) than expected due to either increased (decreased) case adequacy or
an increase (decrease) in claims ratios. Similarly, paid claims could be higher (lower) than expected due to
either faster (slower) claims settlement or an increase (decrease) in claims ratios.

Here we will examine the accuracy of each method under the following scenarios:

1) Steady State (all of the methods provide an accurate estimate in the steady state)
2) Change in Claims Ratios
3) Change in Case Adequacy
4) Change in Payment Speed
5) Change in Mix of Business (between a short tailed and long tailed line with different claims ratios)


Development Techniques

The development techniques (paid and reported) are not affected by changing claims ratios—they are only
affected by differences in development. If claims are increasing, the ultimate claims estimate will reflect this
and be accurate as long as development patterns are unchanged (because claims to date will be higher,
resulting in a higher ultimate).

If case adequacy increases, the reported development estimate will increase, but ultimate claims do not increase
due to increased case adequacy (reported claims just shift earlier), so the estimate will be overstated. Similarly,
if payment speed increases, the paid development estimate will increase, but ultimate claims do not increase
solely due to faster payments (paid claims just shift earlier), so the estimate will be overstated.

The reported development method is unaffected by changes in payment speeds (as far as reported claims are
concerned it is just a shift between case and paid claims), and the paid development method is unaffected by
changes in case adequacy (case is not included in paid claims), so each produces an accurate estimate in those
scenarios. If there is a shift in mix between lines with different development patterns, the resulting estimate will
be inaccurate because future development will be different than historic development.
Expected Claims Method

The expected claims method will not be accurate if claims ratios change from those assumed in the method (the
actuary can resolve this by adjusting the selected claims ratios). Changes in case adequacy or payment speed
only affect the development of claims, not their ultimate values, so the expected claims method proves accurate.
If the mix of business is shifting between two lines with different claims ratios, the expected claims method will
be inaccurate unless the actuary adjusts the expected claims ratios to reflect these shifts.
BF Techniques

Since the BF method uses the expected claims method as its a-priori claim estimate, it will also prove inaccurate
under the changing claims ratios and mix of business scenarios, unless the expected claims ratio is adjusted.
Under the changing case adequacy scenario, the reported BF IBNR estimate will be unchanged, but the ultimate
claims estimate will increase (due to the higher case reserves, and thus higher reported claims). The expected
ultimate does not increase due to increased case adequacy (reported claims just shift earlier), so the estimate is
overstated. Similarly, under the changing payment speed scenario, the paid BF unpaid estimate will be unchanged,
but the ultimate claims estimate will increase (due to the higher payments); the expected ultimate does not
increase solely due to increased payment speeds (paid claims just shift earlier), so the estimate is overstated.
The reported BF is unaffected by changes in payment speeds (as far as reported claims are concerned it is just
a shift between case and paid claims), and the paid BF is unaffected by changes in case adequacy (case reserves
are not included in paid claims), so each method produces an accurate estimate in those scenarios.
The following table summarizes the accuracy of each method under each scenario:

                         Steady    Change in        Change in        Change in        Change in
                         State     Claims Ratios    Case Adequacy    Payment Speed    Mix of Business
Rep. Development Method  Accurate  Accurate         Inaccurate       Accurate         Inaccurate
Paid Development Method  Accurate  Accurate         Accurate         Inaccurate       Inaccurate
Expected Claims Method   Accurate  Inaccurate*      Accurate         Accurate         Inaccurate*
Reported BF Method       Accurate  Inaccurate*      Inaccurate       Accurate         Inaccurate*
Paid BF Method           Accurate  Inaccurate*      Accurate         Inaccurate       Inaccurate*

* Inaccurate unless the a-priori is adjusted.


Estimating Unpaid Claims Using Basic Techniques

Chapter 10 – Cape Cod Technique


Introduction

The Cape Cod (CC) method (a.k.a. Stanard-Buhlmann) is very similar to the BF method, with the difference
being the way we estimate the a-priori ultimate claims. The BF method a-priori uses the result of the expected
claims method, as estimated in Chapter 8. The Cape Cod technique uses a mechanical, formulaic estimate. As
such, the BF a-priori incorporates actuarial judgment where the CC a-priori does not. The text focuses on the
CC using reported claims, but it can also be used on paid claims. The CC is most frequently used for reinsurance
reserving but is also commonly used in primary insurance and for all lines of business.

Rather than developing claims to ultimate levels then selecting an a-priori claims ratio like we do in the
expected claims method; the CC instead backs off the premium to the level of the claims expected to be reported
to date and calculates a claims ratio (CR). For example, if the latest AY is at 12 months of maturity and we expect
25% of claims to be reported (i.e., a CDF of 4.00) we multiply the earned premium of the latest AY by 25% to
get the “used up” premium. The actual reported claims will be divided by the used-up premium to find the
expected claims ratio (ECR) for that year.

The CC method calculates used up premium by year, but rather than calculating ECRs by year and making a
selection as we do in the expected claims method, the method formulaically sets the overall ECR equal to the
sum of reported claims over all years divided by the sum of the used-up premium over all years. Exhibits may
show the claims ratio by year, but it is not needed because the selected ECR is always the total reported claims
divided by total used-up premium.

As in the expected claims method, when the data is available we want to trend and adjust claims for tort reform,
and on-level the premiums (to the level of the latest year, not the prospective period as we will in ratemaking).
The overall selected ECR will be calculated at this adjusted level, but the ECR applied to each year’s earned
premiums will be unadjusted back to the level of each year.

Cape Cod Mechanics

Now we will explore the mechanics of the method, as discussed above, through an example. We will revisit the
example presented in Chapter 8, but perform the CC method rather than expected claims method. In this
example we are given rate level adjustment factors, claims trend and tort reform adjustment factors, so we will
use both on-level used up premium and adjusted claims at their current levels (before later backing them off
to their historic levels after calculating the ECR in total at current levels).

On-Level Reported % of Used Up


Earned Rate Level Earned CDF to Ultimate On-Level
Premium Adj. Factor Premium Ultimate Reported Premium
Accident (1) (2) (3) (4) (5) (6)
Year = (1)*(2) =1.0/(4) =(3)*(5)
2010 61,183 0.914 55,921 1.003 99.7% 55,754
2011 69,175 0.870 60,182 1.013 98.7% 59,410
2012 99,322 0.905 89,886 1.064 94.0% 84,480
2013 108,151 0.750 81,113 1.085 92.2% 74,759
2014 107,578 0.750 80,684 1.196 83.6% 67,461
2015 109,874 0.900 98,887 1.513 66.1% 65,358
2016 110,263 1.000 110,263 2.552 39.2% 43,207
Total 665,546 576,936 450,428

The on-level used-up premium corresponds to the percentage of claims expected to be reported through the
valuation date. The sum of the on-level used up premium will be the denominator in our ECR calculation.


Here we trend and adjust reported claims to current levels in Col (10). The expected claims ratio is the sum of
adjusted claims in Col (10) divided by the sum on-level used-up premium in Col (6), which is 60.4% in our
example. The values shown by year in Col (11) are really informational only since we only use the ratio of the
totals. At this point, the expected claims ratio of 60.4% is at the AY 2016 level since we trended and adjusted
claims and on-leveled the premiums—we need to back off the expected claims ratio to the level of each year.
This is done by multiplying the ECR by the rate level adjustment factor (RLAF) and dividing by the trend factor and
the tort reform adjustment factor (a.k.a. benefit level adjustment factor, or BLAF) because:

Adjusted CR = (Claims × Trend × BLAF) / (EP × RLAF), therefore

Unadjusted CR = Claims / EP = Adjusted CR × RLAF / (Trend × BLAF)

Reported Trend at Adj. Adjusted Estimated Selected Selected


Claims 3.50% for Tort Reported Adjusted Adjusted Unadjusted
@12/31/16 to 7/1/16 Reform Claims Claims Ratio Claims Ratio Claims Ratio
Accident (7) (8) (9) (10) (11) (12) (13)
Year =(7)*(8)*(9) =(10)/(6) =(11) Total =(12)*(2)/[(8)*(9)]
2010 48,169 1.229 0.670 39,672 71.2% 60.4% 67.1%
2011 44,373 1.188 0.670 35,310 59.4% 60.4% 66.1%
2012 70,288 1.148 0.670 54,040 64.0% 60.4% 71.1%
2013 70,655 1.109 0.670 52,485 70.2% 60.4% 61.0%
2014 48,804 1.071 0.750 39,210 58.1% 60.4% 56.4%
2015 31,732 1.035 1.000 32,843 50.3% 60.4% 52.5%
2016 18,632 1.000 1.000 18,632 43.1% 60.4% 60.4%
Total 332,653 272,192 60.4%

Now that we have unadjusted claims ratios for each year, we calculate the expected claims (the a-priori) as the
earned premium times the unadjusted claims ratio, as done in Col (3) below. Now we have expected claims by
year— the mechanics to finish the CC method are identical to the BF method:

Cape Cod Ult. = Reported Claims + % Unreported × Expected Claims

Selected Reported Expected Reported Projected


Earned Unadjusted Expected CDF to Percentage Unreported Claims at Ultimate
Premium Claims Ratio Claims Ultimate Unreported Claims @12/31/16 Claims
Accident (1) (2) (3) (4) (5) (6) (7) (8)
Year =(1)*(2) =1-1/(4) =(3)*(5) =(6)+(7)
2010 61,183 67.1% 41,031 1.003 0.3% 123 48,169 48,292
2011 69,175 66.1% 45,703 1.013 1.3% 587 44,373 44,960
2012 99,322 71.1% 70,649 1.064 6.0% 4,250 70,288 74,538
2013 108,151 61.0% 65,985 1.085 7.8% 5,169 70,655 75,824
2014 107,578 56.4% 60,687 1.196 16.4% 9,945 48,804 58,749
2015 109,874 52.5% 57,736 1.513 33.9% 19,576 31,732 51,308
2016 110,263 60.4% 66,632 2.552 60.8% 40,522 18,632 59,154
Total 665,546 408,422 80,172 332,653 412,825

Please review the Excel exhibits accompanying this chapter to see an example of the calculation when rate level
history is unavailable (Example 1, US Industry Auto)—it is a simpler version of the above example since each
factor is 1.0 and can be ignored.
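A condensed Python sketch of the Cape Cod ECR calculation, using the on-level premiums, reported CDFs, and adjusted reported claims from the exhibits above (the list layout is my own); in the full method this ECR is then unadjusted back to each year's level and the BF-style formula is applied:

    onlevel_ep   = [55921, 60182, 89886, 81113, 80684, 98887, 110263]   # AY 2010-2016
    cdf          = [1.003, 1.013, 1.064, 1.085, 1.196, 1.513, 2.552]
    adj_reported = [39672, 35310, 54040, 52485, 39210, 32843, 18632]

    used_up = [p / c for p, c in zip(onlevel_ep, cdf)]   # used-up premium = EP x % reported
    ecr = sum(adj_reported) / sum(used_up)               # single formulaic expected claims ratio
    print(round(ecr, 3))                                 # about 0.604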


Conclusion

The primary assumption of the Cape Cod method is that unreported claims will develop based on expected
claims, as calculated using reported claims and used-up premiums. The main advantage of the method is that
it uses reported claims in the calculation of the ECR, therefore it will respond (at least partially) to changes in
claims ratios. I say partially because the ECR calculation uses all years, so if the CR increases in recent years,
these higher CR years will be included in the average, but so will older years prior to the change, so the resulting
ECR won’t fully respond to changes.

The main disadvantage of the method is it is highly dependent on the availability and accuracy of the rate level
adjustment factor. If we don’t have access to rate level history we can still perform the method without
adjusting premium (and claims) to the current level, similar to how we did in the expected claims method, but
the accuracy of our estimate will decrease. If data is thin, the ECR set by formula in the CC method may be
unreliable due to volatility. In such a case, the BF method would be more appropriate because we can
incorporate actuarial judgment into the selection of the a-priori.


Estimating Unpaid Claims Using Basic Techniques

Chapter 11 – Frequency-Severity Techniques


Introduction

All of the methods we have examined thus far evaluate claims in aggregate. This chapter will cover several
techniques that evaluate the frequency and severity of claims separately, providing insights into the drivers of
claims development. Frequency-severity techniques are used for all lines of insurance, but most commonly
used for long tailed lines of business. Frequency is technically claim counts per exposure, but frequency-
severity methods will also encompass methods that use claim counts and severities without incorporating an
exposure base (like approach #1 below). There are many techniques that actuaries use which fall under the
classification of frequency-severity techniques— the text covers three of them.

Approach #1: Development of Claim Counts and Severity

The first frequency-severity method is rather simple—we develop claim counts and severities to ultimate
separately, then multiply them together to estimate ultimate claims. As we are using the development
technique, the method assumes claim counts and severities will continue to develop as they have in the past.

Two requirements for frequency-severity methods are that claims are defined in a consistent manner over the
experience period, and the type of claims included in the segment being analyzed are reasonably homogenous.
To the first point, claim counts can either be on a claimant basis (one count per claimant, even if from the same
occurrence) or occurrence basis (one count per occurrence, regardless of the number of claimants). The actuary
must understand which definition of claim count is used in the data, and that the definition is used consistently
throughout the triangle (unless the mix of the two definitions is consistent throughout). The types of claims in
the segment of business being analyzed should also be reasonably homogenous—combining average values
from very low and very high severity loss types would result in averages with little meaning (as an extreme
example: slip and falls being grouped with class action lawsuits).

Additional considerations include the handling of claims closed with no payments (CNP)—the actuary must
know if CNP counts are included or excluded in the data. This method develops both closed and reported claims
separately, so each must be on the same basis regarding CNP counts to produce comparable results. Further, if
claims include ALAE we need to also include counts with expense payments only (or else there will be a
mismatch between the number of claims and the dollars spent on those claims).

The example for approach #1 in the text uses six-month periods to illustrate that unpaid claim estimates are
often done using data at valuations other than in yearly intervals and to illustrate the potential influence of
seasonality. I will cover that discussion here as well. Therefore, this example serves two purposes: to illustrate
frequency-severity approach #1 and to demonstrate the effects of seasonality.

We begin approach #1 by developing an ultimate claim count estimate. We do this by developing both closed
and reported claims to ultimate, separately, then make an ultimate count selection from the two estimates.
Note, this example excludes CNP counts, so we are not surprised to observe downward development of
reported claim counts in the following triangle (a claim may be counted initially but later close without
payment, causing it to drop out of the reported claim counts).


Table 1.1

Accident Closed Claim Counts as of (months) Accident Reported Claim Counts as of (months)
Half - Year 6 12 18 24 30 Half - Year 6 12 18 24 30
2014-1 2,261 2,671 2,694 2,696 2,697 2014-1 2,808 2,712 2,704 2,701 2,700
2014-2 1,949 2,637 2,659 2,662 2014-2 2,799 2,675 2,670 2,668
2015-1 2,059 2,496 2,520 2015-1 2,578 2,533 2,529
2015-2 2,083 2,732 2015-2 2,791 2,778
2016-1 2,533 2016-1 3,139

Accident Age-to-Age Factors Accident Age-to-Age Factors


Half - Year 6 - 12 12 - 18 18 - 24 24 - 30 To Ult Half - Year 6 - 12 12 - 18 18 - 24 24 - 30 To Ult
2014-1 1.181 1.009 1.001 1.000 2014-1 0.966 0.997 0.999 1.000
2014-2 1.353 1.008 1.001 2014-2 0.956 0.998 0.999
2015-1 1.212 1.010 2015-1 0.983 0.998
2015-2 1.312 2015-2 0.995

Simple Simple
Average 6 - 12 12 - 18 18 - 24 24 - 30 To Ult Average 6 - 12 12 - 18 18 - 24 24 - 30 To Ult
Latest 3 1.292 1.009 1.001 1.000 1.000 Latest 3 0.978 0.998 0.999 1.000 1.000
Selected 1.292 1.009 1.001 1.000 1.000 Selected 0.978 0.998 0.999 1.000 1.000
CDF to Ult. 1.305 1.010 1.001 1.000 1.000 CDF to Ult. 0.975 0.997 0.999 1.000 1.000

Now that we have selected CDFs we develop the ultimate claim estimates based on the two methods in Col (7)
& (8) of Table 1.3 on the following page. The estimates are rather close in all periods except 2016-1, so we
select the average of the two methods in those periods. To determine what is driving the difference in the
2016-1 estimates we return to the claim triangles to see if there is something we overlooked. We begin by
looking at the ratios of reported to closed claim counts, which show there are clearly seasonal effects at 6
months of development. We observe the ratio in the second half of the year is consistently higher than in the first
half, as seen using conditional formatting.

Table 1.2
Accident Ratio of Reported to Closed Claim Counts
Half - Year 6 12 18 24 30
2014-1 1.242 1.015 1.004 1.002 1.001
2014-2 1.436 1.014 1.004 1.002
2015-1 1.252 1.015 1.004
2015-2 1.340 1.017
2016-1 1.239

To determine whether the effects are being driven by closed or reported claim counts we return to the development
triangles and age-age factors in Table 1.1. Upon second inspection, we observe similar seasonality effects in the
closed claim count age-age factors at 6 months, but not in the reported claim count age-age factors. Due to the
seasonality effects being present in the closed claim count estimate, we select the reported claim count
development estimate for 2016-1 (alternatively we could have chosen a 6-12 closed factor for 2016-1 based
on the age-age factors from the first half of prior years—doing so in this example would result in a similar
estimate as the reported development).


Table 1.3
Age of Acc. Claim Counts Proj. Ult. Claim Counts Selected
Accident Half-Year at 6/30/16 CDF to Ultimate Using Dev Method with Ult. Claim
Half-Year at 6/30/16 Closed Reported Closed Reported Closed Reported Counts
(1) (2) (3) (4) (5) (6) (7) (8) (9)
2014-1 30 2,697 2,700 1.000 1.000 2,697 2,700 2,699
2014-2 24 2,662 2,668 1.000 1.000 2,662 2,668 2,665
2015-1 18 2,520 2,529 1.001 0.999 2,523 2,526 2,524
2015-2 12 2,732 2,778 1.010 0.997 2,759 2,770 2,764
2016-1 6 2,533 3,139 1.305 0.975 3,306 3,061 3,061

Column Notes:
(7) = [(3) x (5)].
(8) = [(4) x (6)].
(9) = Average of (7) and (8) for all accident half-years other than 2016-1. For 2016-1, (9) = (8).

Now we have selected ultimate claim count estimates for each accident half-year. Next, we need to estimate ultimate
severities (reported claims/reported claim counts) for each accident half-year. The text develops the reported
severities to ultimate but we could also use paid severities (or both and make a selection as we do in method
#2). The mechanics are identical to other development method estimates.

Table 1.4
Reported Severities as of (months)
Accident (Reported Claims / Reported Claim Counts)
Half - Year 6 12 18 24 30
2014-1 4,254 4,372 4,371 4,359 4,356
2014-2 4,467 4,771 4,759 4,759
2015-1 4,524 4,549 4,544
2015-2 4,531 4,627
2016-1 4,483

Accident Age-to-Age Factors


Half - Year 6 - 12 12 - 18 18 - 24 24 - 30 To Ult
2014-1 1.028 1.000 0.997 0.999
2014-2 1.068 0.997 1.000
2015-1 1.006 0.999
2015-2 1.021

Simple
Average 6 - 12 12 - 18 18 - 24 24 - 30 To Ult
Latest 3 1.032 0.999 0.999 0.999 1.000
Selected 1.032 0.999 0.999 0.999 1.000
CDF to Ult. 1.029 0.997 0.998 0.999 1.000

Based on the CDFs selected above, we develop an estimate of ultimate severities by accident year below in Col
(5). The selected ultimate claim counts from Table 1.3 are carried forward to Col (6). The ultimate claim
estimate is then the product of the selected severities and claim counts.

Table 1.5 Projected Ultimate


Age of Acc. Reported Using Frequency-Severity Method
Accident Half-Year Severities CDF to Claim Ult. Claims
Half-Year at 6/30/16 at 6/30/16 Ultimate Severities Counts ($000s)
(1) (2) (3) (4) (5) (6) (7)
2014-1 30 4,356 1.000 4,356 2,699 11,755
2014-2 24 4,759 0.999 4,754 2,665 12,670
2015-1 18 4,544 0.998 4,535 2,524 11,448
2015-2 12 4,627 0.997 4,613 2,764 12,753
2016-1 6 4,483 1.029 4,613 3,061 14,118
Total 128,413

Column Notes:
(5) = [(3) x (4)].
(6) Developed in prior Exhibit.
(7) = [(5) x (6) / 1000].
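
As a quick end-to-end illustration of approach #1, the sketch below (Python; variable names are mine) develops the 2016-1 reported claim count and reported severity to ultimate and multiplies them, using the figures from Tables 1.1, 1.3, 1.4, and 1.5.

# Frequency-severity approach #1 for accident half-year 2016-1.
reported_counts_at_6 = 3_139
reported_count_cdf = 0.975                 # counts develop downward because CNP counts drop out
ult_counts = reported_counts_at_6 * reported_count_cdf    # ~3,061; selected over the closed-count
                                                          # estimate because of the age-6 seasonality

reported_severity_at_6 = 4_483
severity_cdf = 1.029
ult_severity = reported_severity_at_6 * severity_cdf      # ~4,613

ult_claims = ult_counts * ult_severity / 1_000            # ~14,118 ($000s), as in Table 1.5
print(round(ult_counts), round(ult_severity), round(ult_claims))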


Approach #2: Incorporating Exposures and Inflation into the Method

The development technique often results in highly leveraged development factors in the most recent years,
leading to higher uncertainty in our estimates. The second approach focuses on projecting ultimate claims for
the two most recent accident years by adjusting claims and exposures from prior years to current levels, and
making selections. Approach #1 is very similar to the claims development method from Chapter 7 while
approach #2 is very similar to the expected claims method from Chapter 8, with the main difference being we
are estimating frequency (or counts) and severity separately.

Like we do in the expected claims method, we want to exclude the target year(s), so in this example we use AYs
2014 and prior to estimate AYs 2015 and 2016. The first step is to select ultimate claim counts by accident
year—this is identical to the first step of approach #1. In this example, we’ll assume we have already completed
the first step and our selected ultimate claim counts are shown in Col (2) of Table 2.1. Next, we want to adjust
these claim counts to the level of the target year (2016)—we select a trend rate of -1.0% (based on industry
trends). The trend factors are shown in Col (3) while the trended ultimate counts are in Col (4).

This approach uses a true frequency, unlike approach #1 which just uses counts. This example is for a self-
insured company and the exposure base is payroll, shown in Col (5). We also want the payroll to be at current
levels, so we trend them at 2.5% to calculate the trended payrolls in Col (7). If using premium as the exposure
base, we would want to on-level it rather than trend it, if the data is available.

Col (8) divides the trended counts by trended payroll, resulting in trended frequencies (at a 2016 level) across
each accident year. There is a noticeable drop in trended frequencies in the more recent accident years. We should ask
management if anything has changed to cause this drop (e.g., new cost containment programs, change in TPA,
type of work performed, or definition of claims). In this example the drop is due to a new safety manager, so
we want to reflect this drop in our selection for 2016. We then make our 2015 selection by backing off the 2016
selection by one year of payroll and claim count trend. We do this by multiplying the 2016 frequency selection
by (1+payroll trend)/(1+count trend) since frequency = counts/payroll and we need to divide counts and
payroll by (1+trend) to back them off one year— since payroll is in the denominator, its trend factor ends up in the
numerator after simplifying the equation.

Table 2.1
Selected Trend to Trended Trend to Trended Trended
Accident Ultimate 2016 at Ultimate Payroll 2016 at Payroll Ultimate
Year Counts -1.00% Counts ($000s) 2.50% ($000s) Frequency
(1) (2) (3) (4) (5) (6) (7) (8)
2009 1,774 0.932 1,653 280,000 1.189 332,832 0.50%
2010 1,749 0.941 1,646 280,000 1.160 324,714 0.51%
2011 1,651 0.951 1,570 350,000 1.131 395,993 0.40%
2012 2,982 0.961 2,866 790,000 1.104 872,012 0.33%
2013 2,909 0.970 2,822 780,000 1.077 839,975 0.34%
2014 2,658 0.980 2,605 740,000 1.051 777,463 0.34%

Column and Line Notes: (9) Selected frequency at 2016 level 0.34%
(3) = (1 - .01)^(2016 - AY) (10) Selected frequency at 2015 level 0.35%
(4) = (2) x (3).
(5) = (1 + .025)^(2016 - AY)
(7) = (5) x (6).
(8) = (4) / (7).
(9) Judgmentally selected.
(10) = (9) x [1 + (annual payroll trend of 2.50%)] / [1 + (annual claim count trend of -1.00%)].
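
The sketch below (Python; variable names are mine) reproduces the Table 2.1 calculations for one year and the back-off of the 2015 frequency selection.

# Trend counts and payroll to the 2016 level, then back the 2016 selection off one year (Table 2.1).
count_trend, payroll_trend = -0.01, 0.025

ay, ult_counts, payroll = 2011, 1_651, 350_000                      # payroll in $000s
trended_counts = ult_counts * (1 + count_trend) ** (2016 - ay)      # ~1,570
trended_payroll = payroll * (1 + payroll_trend) ** (2016 - ay)      # ~395,993
trended_frequency = trended_counts / trended_payroll                # ~0.40%

selected_freq_2016 = 0.0034                                         # judgmental selection
selected_freq_2015 = selected_freq_2016 * (1 + payroll_trend) / (1 + count_trend)   # ~0.35%
print(round(trended_frequency, 4), round(selected_freq_2015, 4))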


Next, we need to estimate ultimate severities by year. Again, we use AY 2014 and prior to estimate AYs 2015
and 2016. In approach #1 we did this by developing reported severities—in this example we also develop paid
severities (paid claims/closed claim counts) to ultimate and make our ultimate selection by reviewing the two
results. The results of the two development techniques are given in Col (2) and (3) of Table 2.2—we select the
average of the two results in Col (4).

We then want to trend the selected severities to the level of the target year (2016) like we did with frequencies.
Based on industry trends, we select a 7.5% trend rate. We then take various averages of the trended severities
in Row (7). We make a selection for the severity in the most recent year, then trend that selection back to the
prior year— on the exam you would look at the values in Col (6) and judgmentally select which years to include
in your selected average, as calculating several would be a waste of time.

Table 2.2 Projected Ultimate


Severities Using Trend to Trended
Accident Dev. Method with Selected 2016 at Ultimate
Year Paid Reported Severity 7.50% Severity
(1) (2) (3) (4) (5) (6)
2009 4,946 4,980 4,963 1.659 8,234
2010 5,117 5,366 5,241 1.543 8,089
2011 5,540 5,730 5,635 1.436 8,090
2012 6,165 6,174 6,170 1.335 8,239
2013 6,554 6,658 6,606 1.242 8,207
2014 7,212 7,094 7,153 1.156 8,266

(7) Avg. Trended Severity at 2016 Cost Level


(a) All Years 8,187
(b) All Years Excl. High and Low 8,192
(c) Latest 3 Years 8,237
(8) Selected 2016 Severity 8,192
Column Notes: (9) Estimated 2015 Severity 7,621
(5) = (1 + .075)^(2016 - AY)
(6) = (4) x (5).
(8) Judgmentally selected.
(9) = (8) / [1 + (annual severity trend of 7.50%)].

At this point we have selected frequencies and severities for the two most recent accident years, 2015 and
2016. Using these selections, along with the given exposure bases (payrolls) we can project ultimate claims in
each year as exposures x frequency x severity, shown in Col (6). We can also calculate the projected ultimate
claim counts as exposures x frequency, shown in Col (4).

Table 2.3
Projected Projected
Accident Payroll Selected Ultimate Selected Ultimate
Year ($000s) Frequency Counts Severity Claims
(1) (2) (3) (4) (5) (6)
2015 750,000 0.35% 2,625 7,621 20,005,125
2016 760,000 0.34% 2,584 8,192 21,168,128

Column Notes:
(4) = (2) x (3).
(6) = (4) x (5) = (2) x (3) x (5).
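
A minimal sketch of the final approach #2 projection, using the 2015- and 2016-level frequency and severity selections from Tables 2.1 and 2.2 (variable names are mine):

# Ultimate claims = exposures x frequency x severity for the two most recent accident years.
projections = {
    2015: {"payroll": 750_000, "freq": 0.0035, "sev": 7_621},   # payroll in $000s
    2016: {"payroll": 760_000, "freq": 0.0034, "sev": 8_192},
}
for ay, p in projections.items():
    ult_counts = p["payroll"] * p["freq"]
    ult_claims = ult_counts * p["sev"]
    print(ay, round(ult_counts), round(ult_claims))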

When selecting trends, there are many factors for the actuary to consider. Trends in frequency and severity not
only reflect economic inflation, but also societal factors, mix of business, and technological advances (e.g., active
crash avoidance on vehicles reducing frequencies). Trends can vary significantly by line of business, geographic
region, and limits of coverage. Actuaries can evaluate the insurer's own data, general insurance industry data,
government statistics, and economic indices to assist in making informed trend selections. All trend selections


in this example are given and based on industry trend rates. Trend selection will be covered in much greater
detail in the ratemaking material.

Approach #3: Disposal Rate Technique

Just like the first two approaches, approach #3 begins by developing closed and reported claim counts to
ultimate and making an ultimate claim count selection. I am not going to re-do those steps here, but instead
assume we have already done this, and the results are shown in the box in Table 3.1.

This method uses disposal rates to project incremental closed claim counts. The disposal rate is cumulative closed claim counts
divided by ultimate claim counts. This is essentially the percentage of claims closed, or “disposed” of.

Disposal Rate = Cumulative Closed Claim Counts / Selected Ultimate Claim Counts

In Table 3.2 we calculate the disposal rates, take various averages, and make selections of disposal rates at each
age.

Table 3.1 Selected


Accident Closed Claim Counts as of (months) Ultimate
Year 12 24 36 48 60 72 Counts
2011 106 294 383 453 499 542 626
2012 126 281 377 445 494 629
2013 114 249 315 403 588
2014 114 229 300 553
2015 79 188 438
2016 127 609

Table 3.2
Accident Disposal Rate as of (months)
Year 12 24 36 48 60 72
2011 0.17 0.47 0.61 0.72 0.80 0.87
2012 0.20 0.45 0.60 0.71 0.79
2013 0.19 0.42 0.54 0.69
2014 0.21 0.41 0.54
2015 0.18 0.43
2016 0.21
Table 3.1/Selected Ultimate Counts

Simple Avg 12 24 36 48 60 72 To Ult


Latest 3 0.20 0.42 0.56 0.71 0.79 0.87
Latest 5 0.20 0.44 0.57 0.71 0.79 0.87
Selected 0.20 0.44 0.57 0.71 0.79 0.87 1.00

In Table 3.3 we use the selected disposal rates, closed claim counts, and selected ultimate claim counts to
project the incremental closed claim counts in the bottom half of the triangle. The formula to find these
incremental closed claim counts is a bit intimidating, but can be understood intuitively with a little effort (the
disposal rates referenced in the formula are the selected disposal rates, not actual disposal rates).

Incremental Closed Counts = (Selected Ultimate Counts − Closed Counts on Latest Diagonal)
x [Disposal Rate (current age) − Disposal Rate (prior age)] / [1 − Disposal Rate (age of latest diagonal)]

The numerator of the second half of the equation (Disposal Rate at current age − Disposal Rate at prior age) is the expected percentage of
claims that will close in the given timeframe as a percentage of ultimate. The denominator of the second half of
the equation is the expected percentage of “un-closed” claims, so put together, the second half represents the
expected percentage of “un-closed” claims that will close in the next period (note: we use the selected disposal
rate at the age of the latest diagonal, not the actual disposal rate on the latest diagonal). The first half of the
equation is the number of “un-closed” claims, so multiplying the two results in the number of claims expected
to close in the given timeframe. I use the term “un-closed” because this number includes both open claims and
claims that will close in the future but are yet to be reported (i.e., Ultimate Counts – Closed Counts).


Table 3.3
Accident Projected Incremental Closed Claim Counts (months)
Year 12 24 36 48 60 72 To Ult
2011 106 188 89 70 46 43 84
2012 126 155 96 68 49 51 84
2013 114 135 66 88 51 51 83
2014 114 115 71 82 47 47 76
2015 79 109 58 63 36 36 58
2016 127 145 78 84 48 48 78

For example, the value of incremental closed claim projection of 63 in AY 2015 at age 48 is derived as

(438 − 188) x (.71 − .57) / (1 − .44)
where 438 is the AY 2015 ultimate claim estimate, 188 is the number of cumulative closed claims on the latest
diagonal, .44 is the disposal rate at the age of the latest diagonal of AY 2015, .71 is the disposal rate at age 48,
and .57 is the disposal rate at age 36. Therefore .71-.57 is the percentage of ultimate claims expected to close
between 36 and 48 months, and 1-.44 is the percentage of “un-closed” claims as of the latest actual observation
(age 24 for AY 2015). (.71-.57)/(1-.44) is then the percentage of “un-closed” claims as of the latest observation
expected to close between 36 and 48 months, and (438-188) is the number of un-closed claims as of the latest
observation, therefore their product is the expected number of claims to close between 36 and 48 months.
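
The same calculation in a short Python sketch (values from Tables 3.1-3.3; variable names are mine):

# Projected incremental closed counts for AY 2015 between ages 36 and 48.
ultimate_counts = 438                  # selected ultimate claim counts, AY 2015
closed_to_date = 188                   # cumulative closed counts on the latest diagonal (age 24)
dr_latest, dr_prior, dr_current = 0.44, 0.57, 0.71    # selected disposal rates at ages 24, 36, 48

incr_closed_36_48 = (ultimate_counts - closed_to_date) * (dr_current - dr_prior) / (1 - dr_latest)
print(round(incr_closed_36_48, 1))     # 62.5, displayed as 63 in Table 3.3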

Now that we have populated the bottom half of the triangle with incremental closed claim counts, we want to do
the same with incremental severities, so we can multiply the two together to find expected incremental
payments for the bottom half of the triangle. Table 3.4 shows incremental paid claims and Table 3.5 divides the
incremental paid claims by the incremental closed counts (from Table 3.3) to find incremental paid severities.

We then run exponential regressions down the columns to measure the trends and R2 (you would be given
these on the exam). From this information, and knowledge of industry trends, we select a severity trend of
5.0%.

Table 3.4
Accident Incremental Paid Claims as of (months)
Year 12 24 36 48 60 72
2011 984,748 5,144,209 4,341,801 4,133,926 7,331,963 2,005,852
2012 1,158,659 4,652,513 4,686,332 4,589,912 3,155,154
2013 1,198,767 3,905,070 3,938,297 6,401,795
2014 1,220,778 3,373,968 4,389,118
2015 796,774 3,436,867
2016 1,445,365

Table 3.5
Accident Incremental Paid Severities as of (months)
Year 12 24 36 48 60 72
2011 9,290 27,363 48,784 59,056 159,391 46,648
2012 9,196 30,016 48,816 67,499 64,391
2013 10,516 28,926 59,671 72,748
2014 10,709 29,339 61,819
2015 10,086 31,531
2016 11,381
Table 3.4/Table 3.3

Trend based on Exponential Regression


All Years 3.8% 2.6% 9.5% 11.0% -59.6% 0.0%
Latest 3 3.1% 4.4% 12.5% 11.0% -59.6% 0.0%
Goodness of Fit Test of Exponential Regression Analysis (R-Squared)
All Years 70.4% 63.2% 85.6% 97.4% 100.0% 100.0%
Latest 3 25.4% 86.9% 85.9% 97.4% 100.0% 100.0%

Selected Trend: 5%


We want incremental severities, at the cost level of each AY. To do this we first trend incremental paid severities
to the most recent year, make a selection at each age, then trend the selections back to the appropriate level for
each year in the bottom half of the triangle.

The trended average incremental paid claims (severities) in Table 3.6 are very steady through age 48, so we
are comfortable making our selection based on these values. The values at ages 60+ are less stable, so we
perform an ancillary calculation to the side for these more mature ages. (The larger example in the text is more
volatile than this example, and the volatility starts earlier, so it examines more years to make its mature-age
selections.)

For ages 60+ we calculate the trended severities for claims in that group of ages, rather than at each individual
age, resulting in a more stable estimate. The 60+ trended severity is the sum of trended incremental payments at
age 60 and beyond, in Table 3.6c, divided by the sum of incremental closed claims at age 60 and beyond in Table
3.6a. These values are shown in Rows (1) - (3) in the summary table below Table 3.6c. This estimated trended
tail severity is brought back into the selected row of Table 3.6—remember these selections are all at the 2016
level (since we trended all values to this level).

What we want is severities in the bottom half of the triangle at the loss levels of each given accident year. We
find this by bringing down the selected severities to the 2016 year, then trending them back at 5.0% to each
year, as shown in Table 3.7.
Table 3.6 Trended Average Incremental Paid Claims
Accident =Incr. Paid Severities x (1+trend %)^(2016 - AY)
Year 12 24 36 48 60 72
2011 11,857 34,923 62,262 75,372 203,427 59,536
2012 11,177 36,485 59,336 82,045 78,268
2013 12,173 33,486 69,077 84,215
2014 11,806 32,346 68,155
2015 10,590 33,107
2016 11,381
Table 3.5 Trended to 2016

Simple Avg 12 24 36 48 60 72 84+


Latest 3 11,259 32,980 65,523 80,544 140,847 59,536
Latest 5 11,426 34,069 64,708 80,544 140,847 59,536
Selected 11,259 32,980 65,523 80,544 114,151 114,151 114,151
Calculated in Tables 3.6a-c
Table 3.7
Accident Incr. Paid Sev. Adj. to Cost Level of AY Assuming 5% Annual Trend Rate
Year 12 24 36 48 60 72 84+
2011 9,290 27,363 48,784 59,056 159,391 46,648 89,440
2012 9,196 30,016 48,816 67,499 64,391 93,912 93,912
2013 10,516 28,926 59,671 72,748 98,608 98,608 98,608
2014 10,709 29,339 61,819 73,056 103,538 103,538 103,538
2015 10,086 31,531 62,403 76,709 108,715 108,715 108,715
2016 11,381 32,980 65,523 80,544 114,151 114,151 114,151
Top Half from Table 3.5
Bottom Half = Selected Severity @2016 Level, De-trended
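
A small sketch of the two trending steps (values from Tables 3.5-3.7 and the 5% selected trend; variable names are mine):

trend = 0.05

# AY 2011 incremental paid severity at age 12, trended to the 2016 level (Table 3.6)
sev_2011_12 = 9_290
trended = sev_2011_12 * (1 + trend) ** (2016 - 2011)               # ~11,857

# Selected age-60 severity at the 2016 level, de-trended back to AY 2013 (Table 3.7)
selected_60_at_2016 = 114_151
sev_2013_60 = selected_60_at_2016 / (1 + trend) ** (2016 - 2013)   # ~98,608
print(round(trended), round(sev_2013_60))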


Table 3.6a Table 3.6c Trended Incremental Paid Claims


Accident Accident =Table 3.6b x (1+trend %)^(2016 - AY)
Year 60 72 Year 60 72
2011 46 43 2011 9,357,649 2,560,032
2012 49 2012 3,835,109
Derived from Table 3.1 Age 60
& Older
Table 3.6b (1) Total Closed Claims: 138
Accident (2) Tot. Trended Paid Claims: 15,752,791
Year 60 72 (3) Est. Tail Severity: 114,151
2011 7,331,963 2,005,852
2012 3,155,154 Row Notes:
From Table 3.4 (1) = Sum of Table 3.6a
(2) = Sum of Table 3.6c
(3) = (2) / (1)
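
The tail severity calculation as a sketch (values from Tables 3.6a and 3.6c; variable names are mine):

# Estimated trended tail severity for ages 60 and older.
closed_60_plus = [46, 43, 49]                                # incremental closed counts at ages 60+
trended_paid_60_plus = [9_357_649, 2_560_032, 3_835_109]     # trended incremental paid claims at ages 60+
tail_severity = sum(trended_paid_60_plus) / sum(closed_60_plus)   # ~114,151 at the 2016 level
print(round(tail_severity))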

We now have incremental closed claim counts and severities for all points in the bottom half of the triangle. We
can multiply these counts and severities together to find the projected incremental claims, as done in Table 3.8.
We then sum up the incremental payments to find cumulative paid claims in Table 3.9, with the rightmost
column representing the ultimate claim estimates for approach #3.

Table 3.8 Projected Incremental Paid Claims


Accident =Projected Closed Claim Counts x Projected Incremental Paid Severities
Year 12 24 36 48 60 72 84+
2011 984,748 5,144,209 4,341,801 4,133,926 7,331,963 2,005,852 7,512,962
2012 1,158,659 4,652,513 4,686,332 4,589,912 3,155,154 4,829,761 7,848,362
2013 1,198,767 3,905,070 3,938,297 6,401,795 5,032,389 5,032,389 8,177,633
2014 1,220,778 3,373,968 4,389,118 6,017,753 4,873,510 4,873,510 7,919,454
2015 796,774 3,436,867 3,621,574 4,794,282 3,882,675 3,882,675 6,309,347
2016 1,445,365 4,768,884 5,132,060 6,793,881 5,502,062 5,502,062 8,940,850
Table 3.3 x Table 3.7

Table 3.9
Accident Projected Cumulative Paid Claims
Year 12 24 36 48 60 72 84+
2011 984,748 6,128,957 10,470,758 14,604,684 21,936,647 23,942,499 31,455,461
2012 1,158,659 5,811,172 10,497,504 15,087,416 18,242,570 23,072,331 30,920,694
2013 1,198,767 5,103,837 9,042,134 15,443,929 20,476,318 25,508,708 33,686,340
2014 1,220,778 4,594,746 8,983,864 15,001,617 19,875,127 24,748,637 32,668,091
2015 796,774 4,233,641 7,855,215 12,649,497 16,532,173 20,414,848 26,724,195
2016 1,445,365 6,214,249 11,346,309 18,140,190 23,642,252 29,144,313 38,085,163
Sum of Table 3.8 through each age


Conclusion

We can gain valuable information on the claims process by evaluating frequencies and severities separately.
Frequency-severity methods can be performed using paid and reported claims, or paid claims alone if we are
concerned about the accuracy of case reserves. The most common reasons actuaries may not use these methods
are unavailability of data, changes in claim count definitions (affects counts and severities), or changes in claims
processing (affects development).

An additional advantage of these methods is the ability to explicitly reflect inflation in the projection rather
than simply assuming that past development will properly account for future inflation. The ability to reflect
inflation can also be a weakness as the method is highly sensitive to the trend selection.

Again, the frequency-severity methods assume:

1) Claim counts and severities will continue to develop as they have in the past.
2) Claim counts are defined consistently over time (or at minimum, the mix is consistent).
3) Claim types are reasonably homogenous (or at minimum, the mix between non-homogenous claim
types is consistent).

These methods also implicitly assume there are no significant partial payments, as those would skew the
severities prior to the claim closing.

Claim count development is very stable for many lines of business, resulting in a reliable estimate of ultimate
counts (and frequencies). Severities can also be reliably estimated, especially for older accident years. We can
trend these reliable estimates from older accident years to help in selecting severities for the recent accident
years.

The first example in this chapter included a discussion of seasonality. When using periods shorter than annual,
in this and all other methods, it is important to consider the influences seasonality may have on the results.

The estimated ultimate claims from a frequency-severity method may be used as the a-priori expected claim in
the Bornhuetter-Ferguson technique. This may be more appropriate in cases where we are more confident in
our frequency-severity estimate than our expected claims method result.


Estimating Unpaid Claims Using Basic Techniques

Chapter 12 – Case Outstanding Development Technique


Introduction

This chapter presents two case outstanding development methods. The first incorporates the historical
relationships between paid claims and case outstanding to estimate unpaid claims, while the second is useful
in situations where the only data available is case outstanding reserves (a possibility for self-insurers).

Method #1 Mechanics

Method #1 uses the development of case to prior case reserves and incremental paid to prior case reserves to
estimate ultimate claims. It is logical that future case reserves and incremental payments in a given period are
related to the case reserves in the prior period. We use this fact to project future case reserves and incremental
payments out to ultimate, then sum up the incremental payments to find the cumulative ultimate claims.

The first step is to calculate and select the ratio of case to prior case reserves by age— this is shown in Step 1
below. We also need to calculate and select the ratio of incremental payments to previous case reserves by
age— this is shown in Step 2. We then use the ratios selected in Step 1 to develop the case outstanding amounts
at each age in the bottom half of the case outstanding triangle, shown in Step 3. Next, we use the selected
incremental payments to prior case ratios, along with the projected case amounts from Step 3 to develop the
incremental future payments at each age—this is shown in Step 4. The cumulative paid claims, shown in Step
5, are simply the sum of incremental paid claims at each maturity from Step 4, with the cumulative paid claims
at ultimate being this method’s ultimate claim estimate (highlighted in peach below).
Accident Case Outstanding as of (months): Accident Incremental Paid Claims as of (months):
Year 12 24 36 48 60 Year 12 24 36 48 60
2012 7,602 4,050 2,452 1,709 981 2012 8,777 4,605 2,233 1,075 887
2013 7,725 4,348 2,905 1,852 2013 9,348 4,732 2,582 1,230
2014 8,514 4,533 3,017 2014 10,253 5,553 2,440
2015 7,627 4,297 2015 10,053 4,947
2016 7,777 2016 10,424

Ratio of Case Outstanding to Ratio of Incremental Paid Claims to


Accident Previous Case Outstanding as of (months): Accident Previous Case Outstanding as of (months):
Year 12 24 36 48 60 To Ult Year 12 24 36 48 60 To Ult
2012 0.533 0.606 0.697 0.574 2012 0.606 0.551 0.438 0.519
2013 0.563 0.668 0.638 2013 0.613 0.594 0.423
2014 0.532 0.666 2014 0.652 0.538
2015 0.563 2015 0.649
Simple Average 0.548 0.646 0.667 0.574 Simple Average 0.630 0.561 0.431 0.519
Selected 0.548 0.646 0.667 0.574 0.000 Selected 0.630 0.561 0.431 0.519 1.100

Accident Case Outstanding as of (months): Accident Incremental Paid Claims as of (months):


Year 12 24 36 48 60 To Ult Year 12 24 36 48 60 To Ult
2012 7,602 4,050 2,452 1,709 981 0 2012 8,777 4,605 2,233 1,075 887 1,079
2013 7,725 4,348 2,905 1,852 1,063 0 2013 9,348 4,732 2,582 1,230 962 1,170
2014 8,514 4,533 3,017 2,013 1,156 0 2014 10,253 5,553 2,440 1,300 1,045 1,271
2015 7,627 4,297 2,778 1,854 1,064 0 2015 10,053 4,947 2,412 1,197 962 1,170
2016 7,777 4,261 2,754 1,838 1,055 0 2016 10,424 4,898 2,391 1,187 954 1,160
=Prior Case x Selected Case to Prior Case ratio =Prior Case from Step 3 x Selected Incr. Paid to Prior Case ratio.

Accident Cumulative Paid Claims as of (months)


Year 12 24 36 48 60 Ultimate
2012 8,777 13,382 15,615 16,690 17,577 18,657
2013 9,348 14,080 16,662 17,892 18,853 20,023
2014 10,253 15,807 18,247 19,546 20,591 21,863
2015 10,053 15,000 17,411 18,608 19,570 20,741
2016 10,424 15,322 17,713 18,900 19,854 21,014
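
As a minimal sketch of Steps 3 and 4, the Python below projects one step ahead for AY 2016 using the Step 1 and Step 2 selections shown above (variable names are mine; small differences from the exhibit arise because the printed ratios are rounded):

# Case outstanding development method #1: one projection step for AY 2016.
case_12 = 7_777                          # AY 2016 case outstanding at age 12 (latest diagonal)
sel_case_to_prior_case_24 = 0.548        # Step 1 selection, 12-24
sel_paid_to_prior_case_24 = 0.630        # Step 2 selection, 12-24

case_24 = case_12 * sel_case_to_prior_case_24        # ~4,262 (4,261 in Step 3)
incr_paid_24 = case_12 * sel_paid_to_prior_case_24   # ~4,900 (4,898 in Step 4)
print(round(case_24), round(incr_paid_24))
# Repeating age by age and summing the incremental payments gives the cumulative paid claims in Step 5.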


In Step 1 we selected a tail factor of 0.00, implying there are no case reserves after the latest age in the triangle.
That will be the default selection in this method, so ensure your triangle is large enough that this is a valid
assumption. If we had selected a tail factor greater than 0.00, there would be case reserves in the “To Ult”
column of Step 3, but that does not lead anywhere since it never becomes the prior case reserves in the next
step (the Step 2 tail is applied to the case outstanding at the latest age, not ultimate). In Step 2 the selected tail
factor represents the percentage of case outstanding at the latest age that is ultimately paid out. If we expect
more than the case outstanding at the last age to ultimately be paid out, we would select a factor greater than
1.0, as we do here (a selection less than one implies claims settle for less than the case reserves at the latest
age).

The key assumption of this method is similar to other development techniques—that future claims are related
consistently to claims already reported. This method is most stable/appropriate when most claims are
reported in the first year, so that it can more accurately measure the incremental paid to prior case ratio. If
there are significant new reports in future periods, this ratio will not be as steady due to more moving pieces.
This is because future payments include those included in prior case, but also payments on newly reported
claims. This is a weakness if we are evaluating most lines of business on an AY basis, but is a strength when
analyzing claims made policies grouped into report year triangles. An additional weakness is there are no
industry benchmarks to compare to our selected ratios by age, and these selections are not necessarily intuitive
or something the actuary would have gained knowledge of through general experience. As a result, this method
is not commonly used by actuaries.

Method #2 Mechanics

The text presents a method to develop an unpaid claims estimate when the only pieces of information available
are current case reserves and industry paid and reported CDFs (i.e., we don’t know total paid, total
reported, or have historical valuations). This situation is possible, though not necessarily common, for self-
insurers, particularly in older years following mergers and acquisitions. Unlike other methods that first develop
an estimate of ultimate claims (before subtracting reported and paid claims to get IBNR and unpaid claims,
respectively), this method results in an estimate of unpaid claims directly.

The CDFs referenced in the equation below are cumulative, and the formula inside the parenthesis is called the
case outstanding development factor. This equation is applied separately to each accident year.

Unpaid Claim Estimate = Case Outstanding x [(Reported CDF − 1.00) x Paid CDF / (Paid CDF − Reported CDF) + 1.00]

The algebra needed to understand this formula intuitively is messy. Instead I would recommend you learn the
following more intuitive and mathematically equivalent formula which is also more commonly used in practice
and is also accepted by the CAS (see Fall 2016, Q19 Part a, Sample Answer 2).

Unpaid Claim Estimate = Case Outstanding x (1 − 1/Paid CDF) / (1/Reported CDF − 1/Paid CDF)

Which is more intuitive to me since it is equal to:


Unpaid Claim Estimate = (Case Outstanding / % Case) x % Unpaid = Case Outstanding x % Unpaid / (% Reported − % Paid)

Where case reserves divided by % Case is a “pseudo” ultimate estimate, which is then multiplied by % Unpaid
to estimate unpaid claims.

The advantage of this method is we are able to develop an estimate of unpaid claims when the only piece of
information we have is case outstanding (and industry CDFs).

There are many weaknesses to this second method. Since this method is used when historical data and
company CDFs are not available, we must use industry benchmarks which may prove to be inaccurate for the


specific company. It may also not be a good estimate for more recent years, if CDFs are highly leveraged.
Additionally, any individual large losses contained in the case reserves may distort the projection.

The following example applies this method across several accident years.

Industry Industry Case O/S Unpaid


Accident Case Reported Paid Development Claim
Year Outstanding CDF to Ult. CDF to Ult. Factor Estimate
(1) (2) (3) (4) (5) (6)
2011 715,000 1.015 1.046 1.506 1,076,790
2012 775,000 1.020 1.067 1.454 1,126,850
2013 850,000 1.030 1.109 1.421 1,207,850
2014 915,000 1.051 1.187 1.445 1,322,175
2015 975,000 1.077 1.306 1.439 1,403,025
2016 995,000 1.131 1.489 1.545 1,537,275
Total 4,775,000 7,011,175

Column Notes:
(5) = [1 - 1/(4)] / [1/(3) - 1/(4)]
(6) = (2) x (5)
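
The sketch below applies both (equivalent) formulas to AY 2011 from the table above (variable names are mine):

# Case outstanding development method #2 for AY 2011.
case_os = 715_000
rep_cdf, paid_cdf = 1.015, 1.046          # industry benchmark CDFs

codf = (rep_cdf - 1.0) * paid_cdf / (paid_cdf - rep_cdf) + 1.0     # ~1.506
unpaid_1 = case_os * codf

pct_unpaid = 1 - 1 / paid_cdf
pct_case = 1 / rep_cdf - 1 / paid_cdf                              # % reported - % paid
unpaid_2 = case_os * pct_unpaid / pct_case

print(round(unpaid_1), round(unpaid_2))   # both ~1,076,880; the exhibit shows 1,076,790 because it
                                          # rounds the development factor to 1.506 before multiplying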


Estimating Unpaid Claims Using Basic Techniques

Chapter 13 – Berquist-Sherman Techniques


Introduction

Berquist and Sherman presented solutions for estimating unpaid claims when insurers have undergone
changes in operations and procedures, or the environment has changed in such a way the claim triangles would
violate the assumptions of the methods presented earlier.

Berquist and Sherman present two options:

1) Treat problem areas through data selection and rearrangement.


2) Treat problem areas through data adjustments.
a. Adjust reported claims for changes in case outstanding adequacy levels.
b. Adjust paid claims for changes in claim settlement rates.
c. Adjust reported claims for both changes in case outstanding adequacy and claim settlement
rates.

Data Selection and Rearrangement

The preferred method is to treat the problem through data selection or rearrangement. If this proves to be an
insufficient solution, we will then use the Berquist-Sherman adjustments discussed in the next section.

Whenever possible the actuary should use data that is relatively unaffected by changes in the insurer’s claims
procedures and operations. For example, if the insurer changes their case methodology, the actuary may place
greater reliance on methods using paid claims data (because paid claims are unaffected by case reserves).
Berquist and Sherman give the following examples of alternatives to use in responding to a change in
environment:
1) If claim count data is questionable, or if there has been a change in the definition of claim counts,
consider using earned exposure in place of claims counts.
2) If there has been a significant change in policy limits or deductibles between successive Policy Years,
consider using Policy Year triangles rather than Accident Year triangles.
E.g., if PY 2012 had a limit of $1M and PY 2013 has a limit of $5M then AY 2013 would be a mixture
of claims with a $1M and $5M limit.
3) If there has been a significant change in the social or legal climate causing severity to correlate more
highly with report date than accident date, consider using Report Year triangles rather than Accident
Year triangles.
E.g., if a court ruling doubles maximum awards effective January 1, 2014, accidents occurring
before that date could vary greatly depending on when they are reported or settled.
4) If the insurer is growing rapidly consider using Accident Quarter data rather than Accident Year data
to minimize the distortions in the CDFs due to significant shifts in the average accident date within
each exposure period.

If there is a change that results in a previously homogenous reserving group now having differences we can
explore splitting the data further to retain homogeneity, but we must also balance credibility concerns.
E.g., If we previously combined states X & Y because they are similar, but a law changes in state Y,
we may want to separate them.

Berquist and Sherman also discuss grouping claims data according to the size of the claim. Claims departments
focusing more or less attention on large claims can shift payment patterns by claim size. For example, focusing
more on large claims could result in an overall claim settlement slowdown (measured by counts) because
the more numerous small claims, now receiving less attention, may not settle at the same rate as in the
past, which also increases the risk that smaller claims become larger claims more frequently than in the


past. Large claims may settle faster, resulting in less case outstanding at later maturities and more paid claims
at earlier maturities.

Berquist-Sherman Adjustment Techniques

There will be times when the actuary cannot resolve the data issues through data selection or rearrangement.
Two such cases covered by Berquist and Sherman are when there is a change in case outstanding adequacy
levels or when there is a shift in claim settlement speed (in recent periods, otherwise we could ignore the
historic diagonals and perform the analysis using the latest diagonals).

Changes in Case Outstanding Adequacy – Reported Claims Triangle Adjustment

Detecting Changes in the Adequacy Level of Case Outstanding Reserves

If we suspect there has been a change in case outstanding adequacy—through discussions with the claims
department or through examining the data—we can test this by looking at a triangle of:

1) Average case outstanding:

Average Case Outstanding = (Reported Claims – Paid Claims) / (Reported Claim Counts – Closed Claim Counts)

Evaluate this average down each column. We are looking for increases that are larger than would
be expected from inflation alone, such as jumps or very large trends.

2) Ratio of paid to reported claims:

If case adequacy increases, we would expect to see the ratio of paid to reported claims decrease in
the same locations of the triangle that the case adequacy increases.

3) Average paid claims:

Paid claims do not contain case reserves, but we evaluate paid claims down each column to use as
a barometer for severity trend, which we compare to the trends in average case outstanding and
reported claims. If the average case outstanding trends are larger than the trends in average paid
claims it is a sign that case outstanding may be increasing, especially if the average paid claim
trend is similar to the industry trends.

4) Average reported claims:

Evaluate this average down each column. Reported claims contain the case reserves, so we are
looking for jumps or large trends just as we were in the average case outstanding triangle. (This
check is less useful since we have already looked at the trends in average case outstanding and
average paid claims, and average reported claims are simply a mixture of the two, so we don’t
really gain any new information)


The following exhibits show each of the four diagnostic triangles for an example company. In the Average Case
Outstanding triangle, we observe very high trend rates, along with high R2 values (giving us more confidence in the
trend fit). The latest two diagonals are also significantly higher than the prior diagonals which leads us to
believe case adequacy has increased. In the second triangle, there does appear to be a decrease in the ratios of
paid to reported claims, but the results are volatile so it is difficult to draw a definitive conclusion. The trends
in the Average Case Outstanding triangle are much higher than the trends in the Average Paid claims triangle
—as such we can feel fairly confident the case adequacy is increasing. The trends in the Average Reported
claims triangle are larger than the trends in the Average Paid claims triangle which also supports an increase
in case outstanding adequacy.

Development techniques assume claims will continue to develop in a similar manner in the future. Changes in
case adequacy levels violate this assumption—results will be unreliable if adjustments are not made:
If case adequacy increases the unadjusted reported claims development estimate will overstate the future
CDFs and ultimate claims because the selected CDFs will be based on historical data that was less adequate
(more IBNR, higher CDFs) than the current more adequate reported claims in the latest diagonal. Also, the
increase in case adequacy will artificially inflate the age-to-age factors on the affected diagonals, again
overstating the CDFs and ultimate claims.
Conversely, if case adequacy decreases the unadjusted reported claims development estimate will
understate the future CDFs and ultimate claims because the selected CDFs will be based on historical data
that was more adequate (less IBNR, lower CDFs) than the current less adequate reported claims in the
latest diagonal. Also, the decrease in case adequacy will artificially understate the age-to-age factors on the
affected diagonals, again understating the CDFs and ultimate claims.

Accident Unadjusted Average Case Outstanding as of (months)


Year 12 24 36 48 60 72 84 96
2008 3,701 5,660 9,262 10,151 11,745 16,627 19,238 21,423
2009 7,250 10,635 12,960 14,221 17,067 23,411 24,551
2010 5,877 8,122 10,613 14,373 21,706 29,044
2011 8,324 11,433 15,499 25,040 28,019
2012 10,124 13,785 30,223 33,266
2013 8,261 22,477 34,402
2014 11,176 32,160
2015 13,028

Annual Change based on Exponential Regression Analysis of Severities and Accident Year
15.6% 29.5% 31.1% 34.2% 33.0% 32.2% 27.6%
Goodness of Fit Test of Exponential Regression Analysis (R-Squared)
80.0% 89.5% 85.8% 94.1% 98.9% 98.3% 100.0%

Accident Ratio of Unadjusted Paid Claims to Reported Claims as of (months)


Year 12 24 36 48 60 72 84 96
2008 0.043 0.079 0.135 0.196 0.269 0.391 0.552 0.673
2009 0.009 0.049 0.119 0.159 0.287 0.447 0.589
2010 0.054 0.096 0.120 0.164 0.269 0.366
2011 0.006 0.042 0.119 0.171 0.303
2012 0.019 0.042 0.072 0.153
2013 0.020 0.047 0.099
2014 0.016 0.032
2015 0.013


Accident Unadjusted Average Paid Claims as of (months)


Year 12 24 36 48 60 72 84 96
2008 402 539 2,971 8,620 9,199 12,669 17,084 16,634
2009 110 919 5,487 9,129 12,403 18,452 19,533
2010 706 1,115 5,644 4,928 12,994 14,948
2011 161 862 5,782 9,477 14,085
2012 724 541 4,003 11,709
2013 518 1,394 7,635
2014 517 1,494
2015 525

Annual Change based on Exponential Regression Analysis of Severities and Accident Year
12.9% 12.0% 11.5% 6.7% 14.2% 8.6% 14.3%
Goodness of Fit Test of Exponential Regression Analysis (R-Squared)
18.3% 35.3% 37.9% 10.1% 84.6% 19.3% 100.0%

Accident Unadjusted Average Reported Claims as of (months)


Year 12 24 36 48 60 72 84 96
2008 2,733 3,239 7,207 9,809 10,931 14,816 17,986 17,947
2009 4,594 6,986 11,149 13,060 15,404 20,899 21,323
2010 4,210 5,065 9,602 10,936 18,391 21,591
2011 6,451 7,535 12,925 19,554 21,561
2012 8,123 6,820 20,558 25,950
2013 6,378 13,088 25,556
2014 8,372 19,410
2015 9,906

Annual Change based on Exponential Regression Analysis of Severities and Accident Year
17.4% 28.1% 27.4% 26.5% 24.8% 20.7% 18.6%
Goodness of Fit Test of Exponential Regression Analysis (R-Squared)
84.9% 82.1% 90.6% 82.5% 96.3% 81.4% 100.0%

Mechanics of the Case Outstanding Adjustment

Our goal is to restate the Reported Claims triangle so that each point has the same case adequacy level as the
latest period (latest diagonal of the triangle). We do this by first restating the Average Case Outstanding triangle
by trending back the latest diagonal. We select the trend rate by looking at the observed severity trends in the
Average Paid Claims triangle and at industry data. In this example we select a 15% severity trend.

Accident Adjusted Average Case Outstanding as of (months)


Year 12 24 36 48 60 72 84 96
2008 4,898 13,904 17,104 19,020 18,423 21,961 21,349 21,423
2009 5,633 15,989 19,669 21,873 21,186 25,255 24,551
2010 6,477 18,387 22,620 25,154 24,364 29,044
2011 7,449 21,145 26,013 28,927 28,019
2012 8,566 24,317 29,915 33,266
2013 9,851 27,965 34,402
2014 11,329 32,160
2015 13,028
Selected Annual Severity Trend Rate: 15%
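
A minimal sketch of the de-trending step, using the age-12 column above (variable names are mine):

# Restate average case outstanding at the latest diagonal's adequacy level by de-trending
# the latest diagonal down each column at the selected severity trend.
sev_trend = 0.15
avg_case_latest_diag_age12 = 13_028      # AY 2015 at age 12, the latest diagonal of the age-12 column

adj_avg_case_2008_12 = avg_case_latest_diag_age12 / (1 + sev_trend) ** (2015 - 2008)   # ~4,898
print(round(adj_avg_case_2008_12))       # matches the adjusted triangle above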

Now that we have an adjusted Average Case Outstanding triangle, we can restate the Adjusted Reported Claims
triangle as:
Adjusted Reported Claims = Paid Claims + (Adjusted Average Case Outstanding x Open Claim Counts)
where
Open Claim Counts = Reported Claim Counts – Closed Claim Counts


See the Excel exhibits for a full example of this calculation (paid claims, reported claim counts, and closed claim
count triangles are not shown in this example).

Accident Adjusted Reported Claims as of (months)


Year 12 24 36 48 60 72 84 96
2008 3,793,504 12,084,942 18,563,821 25,924,316 23,516,364 24,979,245 24,016,864 23,506,000
2009 3,760,482 15,830,500 24,615,996 33,169,802 30,722,141 33,362,729 32,216,000
2010 5,982,185 25,583,831 41,384,825 50,323,342 46,191,356 48,377,000
2011 7,819,355 33,794,110 51,361,061 64,559,286 61,163,000
2012 9,533,246 34,585,431 49,667,342 73,733,000
2013 10,348,458 41,241,243 63,477,000
2014 13,102,479 48,904,000
2015 15,791,000

We then select CDFs and develop the Adjusted Reported Claims to ultimate like normal. For illustrative
purposes we will compare the unadjusted and adjusted CDFs and ultimate claim selections to see the effect of
the adjustment, and confirm the results are in line with expectations. Note, the latest diagonal of the triangle
is not adjusted, so Col (3) in the following table is the reported claims on the latest diagonal of both the
Adjusted and Unadjusted Reported Claim triangle.

Projected Ultimate Claims


Age of CDF to Ultimate Using Dev. Method with
Accident Accident Year (Adjusted) Adjusted Adjusted
Year at 12/31/15 Reported Reported Reported Reported Reported
(1) (2) (3) (4) (5) (6) = (3)*(4) (7) = (3)*(5)
2008 96 23,506,000 1.000 1.000 23,506,000 23,506,000
2009 84 32,216,000 1.027 0.979 33,085,832 31,539,464
2010 72 48,377,000 1.080 0.944 52,266,704 45,656,084
2011 60 61,163,000 1.303 1.005 79,693,384 61,474,940
2012 48 73,733,000 1.524 0.930 112,403,868 68,550,870
2013 36 63,477,000 2.291 1.246 145,443,637 79,081,019
2014 24 48,904,000 4.402 1.911 215,253,430 93,459,963
2015 12 15,791,000 11.145 7.465 175,986,370 117,875,378
Total 367,167,000 837,639,227 521,143,718

We suspected that case adequacy was increasing, so we would have expected the unadjusted estimate to
overstate the ultimate claims and CDFs, which is in fact what our results show.

Potential difficulty with the Case Outstanding Adjustment

The method is very sensitive to the selected trend rate. Observed trends can be very volatile, making the trend
selection difficult and highly judgmental.

Changes in Claims Settlement Rate – Paid Claims Triangle Adjustment

Detecting Changes in the Rate of Claim Settlements

If we believe there has been a change in claim settlement speed we can test this by looking at the ratio of closed
to reported claim counts. Examine the trends down each column—an increase in this ratio indicates claims are
closing faster while a decrease indicates claims are closing slower.

If claim payments have slowed, the Unadjusted Paid triangle development CDFs will be too low and the
estimated ultimate claims will be understated. This is because the CDFs will be based on historical data
where payments occurred faster (e.g., CDFs imply 40% unpaid but payments have slowed and in reality
60% are unpaid). Also, the decrease in claim settlement rate will artificially deflate the age-to-age factors
on the affected diagonals, again understating the CDFs and ultimate claims.


Conversely, if claim payments have sped up, the Unadjusted Paid triangle development CDFs will be too
high and the estimated ultimate claims will be overstated. This is because the CDFs will be based on
historical data where payments occurred slower (e.g., CDFs imply 40% unpaid but payments speed has
increased and in reality only 30% is unpaid). Also, the increase in claim settlement rate will artificially
inflate the age-to-age factors on the affected diagonals, again overstating the CDFs and ultimate claims.

In the following example, we observe the ratio of Closed to Reported Claims Counts decreasing down the
columns, implying a decrease in claim settlement rates over the experience period—therefore a primary
assumption of the claim development method does not appear to hold.

Accident Ratio of Closed to Reported Claim Counts as of (months)


Year 12 24 36 48 60 72 84 96
2008 0.622 0.860 0.926 0.961 0.982 0.991 0.996 0.998
2009 0.609 0.847 0.917 0.957 0.979 0.992 0.996
2010 0.595 0.837 0.917 0.959 0.982 0.991
2011 0.572 0.828 0.910 0.958 0.978
2012 0.566 0.818 0.910 0.951
2013 0.555 0.816 0.893
2014 0.545 0.790
2015 0.528

Mechanics of the Paid Claim Development Adjustment

Berquist and Sherman observe closed claim counts are highly correlated with paid claims. Their adjustment
method will first re-state the Closed Claim Count triangle to be at the current claim settlement rate. They then
use the adjusted Closed Claim Count triangle to approximate the Adjusted Paid Claims triangle, which will then
also be at the current claims settlement rate.

The first step of this adjustment is to calculate a triangle of percentages of ultimate claim counts closed, called
Disposal Rates (cumulative closed claim counts / estimated ultimate claim counts)—this is the same as in
Chapter 11. If there are changes in the rate of claims settlement, we would expect to see trends down the
columns in the Disposal Rate triangle. If claim settlement rates are slowing down, we would expect to see a
decrease in Disposal Rates down the columns (due to a lower % of ultimate claims closed at the given time).
Similarly, we would expect an increase in Disposal Rates down the columns if the settlement rates are speeding
up. In the following triangle we see Disposal Rates are decreasing down the columns, which is consistent with
the closed to reported claim count diagnostic above—both leading us to believe the claim settlement rates are
slowing.

We can calculate the Adjusted Closed Claim triangle at the current settlement rates (3rd triangle below), as the
selected Disposal Rate at each age times each Accident Year's estimated Ultimate Claim Counts. Since we want
closure rates restated at the level of the latest diagonal, we simply select the Disposal Rates along the latest
diagonal.

In this example I show the disposal rates for every point in the triangle. In reality we only need to calculate the
disposal rates on the latest diagonal, since those are the ones selected and the only ones used (unlike in Chapter
11 where we make a judgmental selection).
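
A minimal sketch of the restatement (values from the triangles below; variable names are mine):

# Adjusted closed claim counts = selected disposal rate (latest diagonal) x ultimate claim counts.
selected_disposal_rate = {12: 0.433, 24: 0.773, 36: 0.886, 48: 0.948, 60: 0.977, 72: 0.991, 84: 0.996, 96: 0.998}
ultimate_counts_2010 = 9_945             # projected ultimate claim counts for AY 2010

adj_closed_2010_36 = selected_disposal_rate[36] * ultimate_counts_2010   # ~8,811
print(round(adj_closed_2010_36))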


Projected
Accident Closed Claim Counts as of (months) Ultimate
Year 12 24 36 48 60 72 84 96 Claim Counts
2008 4,079 6,616 7,192 7,494 7,670 7,749 7,792 7,806 7,821
2009 4,429 7,230 7,899 8,291 8,494 8,606 8,647 8,682
2010 4,914 8,174 9,068 9,518 9,761 9,855 9,945
2011 4,497 7,842 8,747 9,254 9,469 9,690
2012 4,419 7,665 8,659 9,093 9,591
2013 3,486 6,214 6,916 7,803
2014 3,516 6,226 8,051
2015 3,230 7,468

Accident Disposal Rate as of (months)


Year 12 24 36 48 60 72 84 96
2008 0.522 0.846 0.920 0.958 0.981 0.991 0.996 0.998
2009 0.510 0.833 0.910 0.955 0.978 0.991 0.996
2010 0.494 0.822 0.912 0.957 0.981 0.991
2011 0.464 0.809 0.903 0.955 0.977
2012 0.461 0.799 0.903 0.948
(e.g., 0.957 = 9,518 / 9,945)
2013 0.447 0.796 0.886
2014 0.437 0.773
2015 0.433
Selected Disposal Rate by Maturity Age
0.433 0.773 0.886 0.948 0.977 0.991 0.996 0.998

Accident Adjusted Closed Claim Counts as of (months)


Year 12 24 36 48 60 72 84 96
2008 3,386 6,046 6,929 7,414 7,641 7,751 7,790 7,806
2009 3,759 6,711 7,692 8,231 8,482 8,604 8,647
2010 4,306 7,687 8,811 9,428 9,716 9,855
2011 4,196 7,490 8,585 9,186 9,469
2012 4,153 7,414 8,497 9,093
2013 3,379 6,032 6,916
2014 3,486 6,226
2015 3,230

We now have an Adjusted Closed Claim Counts triangle that we can use to approximate the Adjusted Paid
Claims triangle. The authors use the following simple model to approximate Paid Claims from Closed Claim
Counts:

Y = a × e^(b × x)
where
Y = (Adjusted) Paid Claims,
a & b are fitted parameters, and
x = (Adjusted) Closed Claim Counts

The authors fit this simple model to each pair of successive points within an accident year between the
Unadjusted Closed Claim Count (the X's) triangle and the Unadjusted Paid Claims (the Y's) triangle. Those
parameters (a and b) are then used with the Adjusted Closed Claim Count triangle (the new X's) to approximate the
Adjusted Paid Claims triangle (the new Y's). The latest diagonal does not need to be adjusted since it is at the
level we are adjusting the others to.

If the adjusted closed claims fall within the range of (unadjusted) closed claims of the same accident year, use
the parameters from that timeframe. If the adjusted claims do not fall within the range of (unadjusted) closed
claims of the same accident year, use the parameters from the closest range—that is why the 12-month
adjusted paid claim estimates below use the 24 month parameters. It is important to review my more detailed
explanation while reviewing the examples of the parameterization and approximation process in Exhibit II
Sheet 7 and Exhibit III Sheet 6 of the Excel exhibits—this is testable.

Alternatively, the exam may provide another relationship between closed claim counts and paid claims which
you would use to approximate Adjusted Paid Claims from the Adjusted Closed Claim Count triangle.
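As a minimal sketch of these mechanics in Python, the snippet below restates closed counts at the current disposal rates and then approximates adjusted paid claims with the two-point exponential model Y = a × e^(b × x). The numbers and variable names are illustrative (loosely based on the exhibits above), not the text's actual parameterization, and on the exam the fitted parameters would be given.

```python
import math

# Hypothetical inputs, loosely based on the exhibits above (illustrative only)
ultimate_counts = {2014: 8051, 2015: 7468}      # selected ultimate claim counts by AY
latest_disposal = {12: 0.433, 24: 0.773}        # disposal rates selected from the latest diagonal

# Step 1: restate closed counts at the current settlement rate
# adjusted closed counts = selected disposal rate at each age x AY ultimate claim counts
adj_closed = {(ay, age): rate * ultimate_counts[ay]
              for ay in ultimate_counts
              for age, rate in latest_disposal.items()}

# Step 2: approximate adjusted paid claims with the two-point exponential model
# adjusted paid = a * exp(b * adjusted closed counts); a and b are fitted from the
# unadjusted closed count and unadjusted paid claim triangles (given on the exam)
def adjusted_paid(a, b, adj_closed_counts):
    return a * math.exp(b * adj_closed_counts)

a, b = 580, 0.000444    # hypothetical fitted parameters for the relevant AY/age range
print(round(adjusted_paid(a, b, adj_closed[(2014, 12)])))
```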

Spring 2020 © James Bedford, FCAS Page 91


Estimating Unpaid Claims Using Basic Technique Chapter 13: Berquist-Sherman Technique

The following example shows the process to approximate the Adjusted Paid Claims triangle after you have
calculated the Adjusted Closed Claim Counts triangle (as done before) and have found the fitted model
parameters (would be given on the exam).

Accident Adjusted Closed Claim Counts as of (months)


Year 12 24 36 48 60 72 84 96
2008 3,386 6,046 6,929 7,414 7,641 7,751 7,790 7,806
2009 3,759 6,711 7,692 8,231 8,482 8,604 8,647
2010 4,306 7,687 8,811 9,428 9,716 9,855
2011 4,196 7,490 8,585 9,186 9,469
2012 4,153 7,414 8,497 9,093
2013 3,379 6,032 6,916
2014 3,486 6,226
2015 3,230
Accident Parameter a for Two-Point Exponential Fit
Year 12 24 36 48 60 72 84 96
2008 356 124 132 198 286 1,034 459
2009 438 181 215 353 778 88
2010 464 244 337 493 370
2011 510 337 506 421
2012 616 468 333
2013 530 220
2014 580
2015
Accident Parameter b for Two-Point Exponential Fit
Year 12 24 36 48 60 72 84 96
2008 0.000411 0.000570 0.000562 0.000508 0.000460 0.000294 0.000398
2009 0.000368 0.000490 0.000469 0.000409 0.000315 0.000568
2010 0.000338 0.000416 0.000381 0.000341 0.000370
2011 0.000354 0.000407 0.000360 0.000380
2012 0.000346 0.000381 0.000421
2013 0.000434 0.000576
2014 0.000444
2015
Accident Adjusted Paid Claims ($000) as of (months)
Year 12 24 36 48 60 72 84 96
2008 1,432 4,272 6,437 8,514 9,604 10,112 10,213 10,256
2009 1,747 5,176 7,845 10,209 11,334 11,696 12,031
2010 1,989 6,236 9,533 12,236 13,544 14,235
2011 2,252 7,229 11,094 13,815 15,383     (e.g., 13,815 = 421 × e^(0.000380 × 9,186))
2012 2,592 8,010 11,918 15,278
2013 2,297 7,265 11,771
2014 2,727 9,182
2015 2,801

Now that we have the Adjusted Paid Claims triangle we select CDFs and develop it to ultimate like normal.
Again, for illustrative purposes we compare the Adjusted Paid CDFs and ultimate claim estimates with those of
the Unadjusted Paid triangle to confirm the differences are as expected.

Spring 2020 © James Bedford, FCAS Page 92


Estimating Unpaid Claims Using Basic Technique Chapter 13: Berquist-Sherman Technique

                                               CDF to Ultimate             Projected Ultimate Claims
Accident      Age of           (Adjusted)      Paid      Adjusted          Paid       Adjusted
Year          Accident Year    Paid Claims               Paid                         Paid
(1)           (2)              (3)             (4)       (5)               (8)        (9)
2008 96 10,256 1.000 1.000 10,256 10,256
2009 84 12,031 1.006 1.006 12,103 12,103
2010 72 14,235 1.024 1.025 14,577 14,591
2011 60 15,383 1.061 1.073 16,321 16,506
2012 48 15,278 1.154 1.195 17,631 18,257
2013 36 11,771 1.379 1.527 16,232 17,974
2014 24 9,182 1.991 2.348 18,281 21,559
2015 12 2,801 6.170 7.416 17,282 20,772
Total 90,937 122,684 132,019

Again, the latest diagonal of the triangle is not adjusted, so Col (3) in the above table is the paid claims on the
latest diagonal of both the Adjusted and Unadjusted Paid Claim triangle. We concluded that claim settlement
speed was decreasing, so we would have expected the unadjusted estimate to understate the ultimate claims,
and hence would expect our adjusted CDFs and ultimate claim estimate to be higher than the unadjusted CDFs
and ultimate claim estimate, which is in fact what our results show.

The paper also presents an alternative approach for determining CDFs for the Adjusted Paid Claim triangle.
They run a regression (both linear and exponential, separately) on the age-to-age factors, use that to project
future age-to-age factors, and then calculate age-to-ultimate factors. See Exhibit II Sheets 9 and 10 of the Excel
exhibits for more details on this calculation (this wouldn’t be high on my list of things to know for the exam).

Potential issues with the Paid Claim Development Adjustment

The adjustment uses Closed Claim Counts to approximate Paid Claims, so it implicitly assumes that a higher
percentage of claims closed leads to a higher percentage of ultimate claims paid, and vice versa. If there is a
change in mix between large and small claims closed, it is possible for there to be a higher percentage of claim
counts closed, but lower percentage of claims dollars closed—which would violate the assumptions of the
Berquist-Sherman adjustment (as it assumes the historic relationship between closed counts and paid claims
will hold). For the same reason, if there is an increase in the proportion of severe late closing claims the
adjustment will be imperfect.

Making Both Adjustments – Reported Claims Triangle Adjustment

If there has been both a change in claim closure speed and case outstanding adequacy, it is appropriate to make
both adjustments to the Reported Claims triangle (case adequacy does not affect Paid claims, so we would only
need to adjust the Paid triangle for changes in closure speed). To incorporate both adjustments we calculate
the Adjusted Reported Claims triangle as:
Adjusted Reported Claims = Adjusted Paid Claims + Adjusted Open Claim Counts × Adjusted Average Case Outstanding
where the Adjusted Paid Claims are from the paid claims development adjustment and Adjusted Open Claim
Counts are Reported Claim counts minus Adjusted Closed claim counts (also from the paid claims development
adjustment calculation). The adjusted average case outstanding is from the reported claims development
adjustment calculation. Said another way:
Adjusted Paid Claims + (Reported Claim Counts – Adj. Closed Claim Counts) x Adj. Average Case Outstanding

A full example of making both adjustments is shown in the Excel exhibits in Exhibit III.
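As a quick numeric sketch of the combined formula (all inputs hypothetical, single cell only): adjusted reported equals adjusted paid plus adjusted open counts times the adjusted average case outstanding.

```python
# Hypothetical single-cell illustration of the combined adjustment
adj_paid_claims   = 11_094     # from the paid claims development adjustment
reported_counts   = 9_254      # unadjusted reported claim counts for the cell
adj_closed_counts = 8_585      # from the paid claims development adjustment
adj_avg_case      = 14.2       # from the reported claims development adjustment ($000 per open claim)

adj_open_counts = reported_counts - adj_closed_counts
adj_reported    = adj_paid_claims + adj_open_counts * adj_avg_case
print(round(adj_reported))     # adjusted reported claims for this cell
```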

Spring 2020 © James Bedford, FCAS Page 93


Estimating Unpaid Claims Using Basic Technique Chapter 13: Berquist-Sherman Technique

Miscellaneous Topics

The paper mentions the actuary can use judgment in selecting the Average Case Outstanding toward the tail of
the triangle (rather than trending the latest diagonal back)—for example the actuary may decide no adjustment
is needed toward the tail because the claims department would have nearly complete information at that time,
so case reserves are less likely to be inadequate at prior valuations.

If there are persistent trends in the age-to-age factors down the columns of the Adjusted Reported triangle we
should question the appropriateness of the severity trend selection, or question whether there has been a shift in
the size of claims settled at the ages where the trends are present.

If there have been large changes, rely more on recent data when making CDF selections, even after adjustment.

Under all adjustments, the latest diagonal will remain unchanged—only interior diagonals will be restated
because we are adjusting the triangles to the level present in the most recent period (i.e., case adequacy or
settlement speed in the latest diagonal).

The paper mentions the idea of using the adjusted results to calculate a revised Bornhuetter-Ferguson
estimation. One could use the adjusted CDFs in place of the unadjusted CDFs (this concept has been tested at
least twice before) or more thoroughly use the adjusted results as the initial ultimate claim estimate in the
expected claims method, whose result would then be the revised BF a-priori (this is less likely to be tested on
the exam).
Conclusion

Berquist and Sherman discuss solutions for handling situations where the assumptions of the traditional claims
development methods may not hold. Our first option should be to treat the problem areas through data
selection and/or rearrangement. If those options are insufficient we can adjust the data.
· If there has been a change in case outstanding adequacy we should adjust the Reported Claims triangle:
1) Restate the average case outstanding triangle by trending the latest diagonal
back, using a trend rate based on average paid amounts. Now all points in the average case outstanding
triangle are at the case adequacy level of the latest period (diagonal).
2) Calculate the Adjusted Reported Claims triangle as Unadjusted Paid Claims + Open Claim Counts x
Adjusted Average Case Outstanding (where Open Claims Counts = Reported Claim Counts – Closed Claim
Counts).
3) Perform the development method on the Adjusted Reported Claims triangle.

· If there has been a change in claim settlement speed, we should adjust the Paid Claims triangle
1) Calculate a triangle of disposal rates.
2) Restate the Closed Claim Count triangle by multiplying the disposal rate on the latest diagonal by the
Selected Ultimate Counts (uses disposal rates on the latest diagonal, not a selection).
3) Use the given relationship between Closed Claim Counts and Paid Claims to estimate the Adjusted Paid
Claims triangle from the Adjusted Claim Count triangle (calculated in the prior step). The text uses the
formula: Adjusted Paid Claims = a × e^(b × Adjusted Closed Claim Counts), where a & b are fitted parameters
and would be given on the exam. The exam may provide an alternative relationship between closed
claim counts and paid claims which you would use to approximate Adjusted Paid Claims from the
Adjusted Closed Claim Count triangle (e.g., linear interpolation or another simple model).

· If there has been a change in both case outstanding adequacy and claims settlement speed, we should
adjust the Paid Claims triangle for the change in claims settlement speed and the Reported Claims triangle
for both the change in case adequacy and claim payment speed.

Adjusted Reported Claims = Adjusted Paid Claims + (Reported Claim Counts – Adj. Closed Claim Counts) × Adj. Average Case Outstanding

Spring 2020 © James Bedford, FCAS Page 94


Estimating Unpaid Claims Using Basic Techniques Chapter 14: Recoveries

Estimating Unpaid Claims Using Basic Techniques

Chapter 14 – Recoveries: Salvage and Subrogation and Reinsurance


Introduction
Subrogation and salvage (S&S or Sub-Sal) are two of the most common insurance recoveries. In the event of a
total loss, the insurer pays the insured the full amount of loss, but is entitled to the damaged property which
they can sell—this is salvage. Subrogation is any recovery by the insurer of amounts paid to a covered insured
from a third-party responsible for the damage. Reinsurance recoverables are also discussed at the end of this
chapter.
Subrogation and Salvage
Companies can handle the record keeping of S&S data in various ways. Some may maintain detailed information
by recovery type, while others may combine all recoveries. Some insurers only record recovery payments
(received) and do not estimate case reserves for future recoveries. Some insurers may simply record recoveries
as a negative payment and not maintain separate data records. As always, it is crucial for the actuary to
understand the data they are using when analyzing S&S.
Salvage is generally associated with property coverages which tend to be short tailed (fast reporting and
settling) so salvage generally shows up earlier in the triangle. Subrogation is generally associated with liability
coverages which are generally longer tailed (slower reporting and settling) so subrogation generally shows up
later in the triangle. Both can lead to age-age factors less than 1.0 since the recoveries occur after payments
have been made.
The text presents two methods for estimating expected S&S recoveries. The first is to simply develop S&S
triangles to ultimate using the development method. If the data is available, we would want to develop both
reported S&S and received S&S triangles to ultimate (received = “paid” S&S, but the payment is to the insurer
from the third party, so it is really a negative payment).
The second method develops the ratio of received (“paid”) S&S to paid claims. The following example shows
the triangle of received S&S to paid claims in Table 1, along with selected development factors below. Col (3)
of Table 2 shows the projected ultimate ratio of received S&S to paid claims—we observe the ratio for AY 2016
is low compared to other years. This could be due to a change in data recording or an unusually large claim (in
the denominator of the ratio), so we judgmentally select a value for AY 2016 based on the prior years’ ratios.
The projected ultimate S&S is then the selected ultimate ratio times the estimated ultimate claims. The text
assumes claims analyzed thus far are gross of S&S so we have already come up with this estimate separately. If
claims analyzed previously are net of S&S we won’t need to explicitly remove recoveries.
Table 1

Accident              Ratio of Received S&S to Paid Claims
Year            12        24        36        48        60
2012         0.235     0.353     0.355     0.355     0.355
2013         0.243     0.338     0.341     0.340
2014         0.233     0.326     0.334
2015         0.217     0.354
2016         0.210

Accident                      Age-to-Age Factors
Year        12 - 24   24 - 36   36 - 48   48 - 60   To Ult
2012          1.502     1.008     1.000     1.000
2013          1.393     1.008     0.999
2014          1.397     1.025
2015          1.632
2016
Simple Avg.   1.481     1.014     1.000     1.000
Selected      1.486     1.009     1.000     1.000    1.000
Cumulative    1.499     1.009     1.000     1.000    1.000

Table 2
                       Ratio of
                       Received S&S to              Projected   Selected               Projected
Accident               Paid Claims        CDF       Ultimate    Ult. S&S    Est. Ult.  Ultimate
Year          Age      at 12/31/16        to Ult.   Ratio       Ratio       Claims     S&S
                       (1)                (2)       (3)         (4)         (5)        (6)
2012          60       0.355              1.000     0.355       0.355       14,536     5,160
2013          48       0.340              1.000     0.340       0.340       16,837     5,725
2014          36       0.334              1.000     0.334       0.334       16,952     5,662
2015          24       0.354              1.009     0.357       0.357       16,893     6,034
2016          12       0.210              1.499     0.315       0.345       16,453     5,676
Total                                                                       81,671     28,257

Column Notes:
(1) Latest diagonal of triangle
(2) Cumulative CDF
(3) = (1) * (2)
(4) = Selected based on (3)
(5) = Estimated separately (given here)
(6) = (4) * (5)

Spring 2020 © James Bedford, FCAS Page 95


Estimating Unpaid Claims Using Basic Techniques Chapter 14: Recoveries

The advantages of this method are that the development factors are not as highly leveraged because we are
using a ratio, and we can judgmentally select ultimate S&S ratios for recent years if they appear out of line. Our
estimate of ultimate S&S will be subtracted from our ultimate claim estimates to find our ultimate claims net of
S&S. A disadvantage is that any error in our ultimate claim estimates will affect the accuracy of our ultimate
Sub-Sal estimate.
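A minimal sketch of the ratio approach in Python, mirroring the AY 2016 row in Tables 1 and 2 above: develop the ratio of received S&S to paid claims to ultimate, judgmentally select a ratio, and multiply by the separately estimated ultimate claims.

```python
# Ratio method for S&S recoveries, one accident year (values mirror AY 2016 above)
ratio_latest = 0.210       # received S&S / paid claims on the latest diagonal
cdf_to_ult   = 1.499       # selected CDF for the ratio triangle
ult_claims   = 16_453      # ultimate claims, estimated separately (gross of S&S)

projected_ratio = ratio_latest * cdf_to_ult      # projected ultimate ratio (looks low here)
selected_ratio  = 0.345                          # judgmental selection based on prior years
ult_recoveries  = selected_ratio * ult_claims    # projected ultimate S&S
net_ult_claims  = ult_claims - ult_recoveries    # ultimate claims net of S&S
print(round(ult_recoveries), round(net_ult_claims))
```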

Reinsurance Recoveries

Actuaries are often required to estimate ultimate claims both gross of reinsurance and net of reinsurance (e.g.,
for annual statement purposes). To do this, we can estimate gross claims and ceded claims, then find net claims
as the difference or we can estimate gross claims and net claims, then find ceded claims as the difference. The
decision to analyze net or ceded with gross claims will be based upon data availability, volume, and quality,
along with the actuary's preferences. Sometimes the insurer's data system setup will determine what data is
available. For example, if ceded claims are coded in a separate system then finding net claims may not be
possible (at the individual claim level).

It is important for the actuary to be aware of the implied relationships between gross, ceded, and net claims
throughout the analysis, including at the beginning while reviewing and reconciling the data, during the
analysis while performing the methods, and at the end when making final ultimate selections across methods.
The selections should be consistent with the reinsurance program(s) in place.

The most basic check is that net claims and premiums are less than or equal to gross claims and premiums. If
there is only a quota share (QS) in place, the actuary should review a triangle of the ratios of net claims to gross
claims to ensure the implied QS percentages by year line up with expectations. If there are excess of loss
reinsurance policies in place, the actuary should review individual large claims to ensure the limits are applied
as expected. The actuary should ensure judgmental selections are consistent between net and gross—e.g., the
gross tail factor will generally be greater than or equal to the net tail factor (equal if only a QS in place).
Additionally, the actuary should review implied claims ratios, frequencies, and severities for reasonableness,
along with resulting gross, ceded, and net claims estimates for appropriateness (compared to expectations
based on the reinsurance policies in place).

We should also be aware of aggregate (or stop loss) coverage spanning multiple lines of business. In such a
situation, we will generally analyze the data prior to the aggregate coverage, and adjust for such coverage in
the final steps. The following example illustrates a scenario where the insurer has a $4M aggregate limit for
AYs 2012-2014 combined, and $2M for AY 2015 and 2016 separately, across all lines of business. As such, the
total selected ultimate claims given in Col (1) are the sum across all lines of business. The sum of the total
selected ultimate claims across AYs 2012-2014 is greater than $4M, so the cap is applied resulting in $4M in
ultimate claims net of the aggregate limit for those years. AY 2015 does not hit the limit, but AY 2016 does.
Ultimate claims net of the aggregate limit are shown in Col (3).

Table 3 Ultimate
Accident Tot. Selected Aggregate Claims Net of
Year Ult. Claims Limit Agg. Limit
(1) (2) (3)
min[(1),(2)]
2012 1,254,865
2013 1,354,897 4,000,000 4,000,000
2014 1,658,481
2015 1,975,842 2,000,000 1,975,842
2016 2,046,811 2,000,000 2,000,000
Total 8,290,896 7,975,842
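A short sketch of applying the aggregate limits, reproducing the Table 3 total net of aggregate limits (the AY grouping and limits come straight from the example above):

```python
# Apply aggregate (stop loss) limits to selected ultimate claims, as in Table 3
groups = [
    ({2012: 1_254_865, 2013: 1_354_897, 2014: 1_658_481}, 4_000_000),  # AYs 2012-2014 share a $4M limit
    ({2015: 1_975_842}, 2_000_000),                                    # AY 2015 has its own $2M limit
    ({2016: 2_046_811}, 2_000_000),                                    # AY 2016 has its own $2M limit
]

net_of_agg = sum(min(sum(ult.values()), limit) for ult, limit in groups)
print(net_of_agg)   # 7,975,842, matching the Table 3 total net of aggregate limits
```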

Spring 2020 © James Bedford, FCAS Page 96


Estimating Unpaid Claims Using Basic Techniques Chapter 14: Recoveries

Conclusion

Sub-Sal represents money the insurer expects to recover after making claim payments. Insurers will generally
hold reserves net of Sub-Sal, so they serve to reduce the amount held for unpaid claims. The text’s examples
and explanation assumed claims data is gross of Sub-Sal (before applying recoveries)—in practice it isn’t
unusual to evaluate claims net of Sub-Sal, in which case there would be no explicit reduction for these
recoveries since they are already "baked into" the estimates. The text did allude to evaluating claims net of
Sub-Sal when pointing out claims age-age factors can be less than 1.0 due to Sub-Sal (this wouldn’t occur if
claims were gross of S&S). On the exam, be aware of the basis of claims (net or gross of Sub-Sal) on these types
of problems.

Spring 2020 © James Bedford, FCAS Page 97


Estimating Unpaid Claims Using Basic Techniques Chapter 15: Evaluation of Techniques

Estimating Unpaid Claims Using Basic Techniques

Chapter 15 – Evaluation of Techniques


Introduction

At this stage of our analysis we will have estimates from numerous methods, and we must select final ultimate
claim estimates for each accident year. It is common for there to be sizable differences between the estimates
produced by the various methods. The range of the estimates from the various methods may give us some idea
of the range of possible outcomes, and the sensitivity of our estimate to varying assumptions of the
methodologies. To make final selections by AY we will use our knowledge of the book of business (through our
discussions with management, diagnostics, and analysis) to select the appropriate method to use in each
situation based on our knowledge of each method's assumptions, strengths and weaknesses, and when each
will produce an accurate estimate.

Selection and Evaluation

When reviewing results by method the actuary should be able to explain significant differences in estimates
between methods. The differences will be due to the methods' various assumptions differing from what is
actually occurring in the data. Reconciliation of result differences can be a large task, but often results in
important new insights into the book of business.

To check the reasonableness of final selections the actuary should review implied claims ratios, severities, pure
premiums, and claim frequencies (to exposures, or premium if exposures unavailable). Additional valuable
checks include reviewing implied IBNR to open and unreported claims—i.e., IBNR/[Ult Counts – Closed
Counts]. Completing such a review should either give the actuary more confidence in the selections, or lead the
actuary to seek additional information.

If sufficient data is available, we can retroactively test the accuracy of each method to assist in choosing which
methods to select.

The range of estimates produced from the various methods will be smaller for more stable segments (since
each method produces an accurate estimate in the steady state), while the range will be large for changing or
variable segments.

Generally, the actuary will select estimates from different methods or straight/weighted averages of methods
by AY. It is important to remember there is no single right way to select ultimate claims—the process is highly
dependent on the actuary’s judgment.

When making final selections, we should know why we are excluding certain methods from consideration (e.g.,
the assumptions don’t hold for the segment being evaluated). If the remaining estimates are close we can be
more confident in making the selection; if not, we should seek more information to understand the differences.

Monitoring and Interim Estimates

After making our selections we will want to monitor actual results over subsequent periods—deviations from
expectations are one of the most important diagnostics for evaluating the accuracy of unpaid claim estimates
and making more accurate estimates in the future. Additionally, monitoring actual versus expected claims
results is important to insurers for financial reporting, planning, and for pricing and other strategic decisions.

We can estimate reported claims in the next period as (where t is the beginning age, and it progresses by the
length of the period; e.g., t=24 and t+1=36):

Expected Reported Claims [t, t+1] = (Selected Ult. − Reported Claims at t) × (% Reported at t+1 − % Reported at t) / % Unreported at t

Spring 2020 © James Bedford, FCAS Page 99


Estimating Unpaid Claims Using Basic Techniques Chapter 15: Evaluation of Techniques

Similarly, we can estimate paid claims in the next period as:

Expected Paid Claims [t, t+1] = (Selected Ult. − Paid Claims at t) × (% Paid at t+1 − % Paid at t) / % Unpaid at t

Each calculation can be done across each AY and summed to find expected total reported or paid claims for
the period. The first part of each equation is the expected unreported or unpaid claims, while the right-hand
side is the percentage of unreported or unpaid claims expected to be reported or paid in the next period.
Alternatively, you can think of the unreported or unpaid claims divided by the % Unreported or % Unpaid as a
"pseudo" ultimate—which is then multiplied by the percent of total claims expected to be reported or paid in
the next period. This logic is similar to estimating incremental closed claims in the third frequency-severity
method in Chapter 11 and estimating unpaid claims in the second case development method in Chapter 12.

If our ultimate selections are based on the development method we can estimate the percentages using the
selected CDFs, as done previously (i.e., % Reported = 1.0 / CDF and % Unreported = 1.0 − 1.0 / CDF; similarly for the paid percentages).

Ultimate selections are often not based on the reported or paid development methods alone—in such situations
we may want to estimate reporting and payment patterns using an alternative method. A commonly used
method is to compare the paid and reported claim triangles to ultimate claim selections, as shown below. The
ratio of reported and paid claims to ultimate selections are calculated, and we select final reporting and
payment patterns. These percentages can then be used in the formulas above, rather than the percentages
implied by the CDFs.

Table 1

Accident                     Reported Claims                       Selected
Year            12        24        36        48        60         Ultimate
2012        16,995    35,180    38,866    51,707    56,288           59,780
2013        18,674    37,432    50,340    55,655                     69,726
2014        19,066    36,783    48,804                               70,632
2015        19,477    31,732                                         65,300
2016        18,632                                                   63,491

Accident          Ratio of Reported Claims to Selected Ult. Claims
Year            12        24        36        48        60
2012         0.284     0.588     0.650     0.865     0.942
2013         0.268     0.537     0.722     0.798
2014         0.270     0.521     0.691
2015         0.298     0.486
2016         0.293

Simple Avg.
All Years    0.283     0.533     0.688     0.832     0.942
Latest 2     0.296     0.503     0.706     0.832     0.942
Selected     28.9%     51.8%     69.7%     83.2%     94.2%

Table 2

Accident                       Paid Claims                         Selected
Year            12        24        36        48        60        Ultimate
2012         2,521     9,898    20,950    33,439    42,811          59,780
2013         3,243    12,219    27,073    40,026                    69,726
2014         3,531    11,778    22,819                              70,632
2015         3,529    11,865                                        65,300
2016         3,409                                                  63,491

Accident            Ratio of Paid Claims to Selected Ult. Claims
Year            12        24        36        48        60
2012         0.042     0.166     0.350     0.559     0.716
2013         0.047     0.175     0.388     0.574
2014         0.050     0.167     0.323
2015         0.054     0.182
2016         0.054

Simple Avg.
All Years    0.049     0.172     0.354     0.567     0.716
Latest 2     0.054     0.174     0.356     0.567     0.716
Selected      5.4%     17.4%     35.6%     56.7%     71.6%
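A small Python sketch of the monitoring formulas above, using the selected reporting and payment percentages from Tables 1 and 2 and the AY 2014 values at 24 months (the function itself is just the formula restated; nothing here is specific to the text's exhibits beyond those inputs):

```python
# Expected claims emergence over the next period (ages t to t+1)
def expected_next_period(ultimate, to_date, pct_now, pct_next):
    """(ultimate - to_date) is the expected unreported (or unpaid) amount;
    (pct_next - pct_now) / (1 - pct_now) is the share of it expected to
    emerge during the next period."""
    return (ultimate - to_date) * (pct_next - pct_now) / (1.0 - pct_now)

# AY 2014 at 24 months, selected ultimate 70,632
exp_reported = expected_next_period(70_632, 36_783, 0.518, 0.697)  # % reported at 24 and 36 months
exp_paid     = expected_next_period(70_632, 11_778, 0.174, 0.356)  # % paid at 24 and 36 months
print(round(exp_reported), round(exp_paid))
```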

Often, we will want to monitor development over periods shorter than used to analyze unpaid claims. For
example, we may only analyze unpaid claims annually but may want to monitor reserves quarterly, or we may
analyze unpaids quarterly but want to monitor reserves monthly. In the case of quarterly analysis, it may be
appropriate to interpolate linearly between quarters to find reported or paid percentages on a monthly basis,
but it is likely inappropriate to interpolate linearly between yearly valuations. For example, if we develop age-
age factors annually, we will have estimates of %Reported at 12 months, 24 months, etc., but want to know the
percentages at ages in between. We would expect the percentage reported to increase at a rate faster than
linearly between 12 and 24 months (e.g., if trying to estimate the %Reported at 18 months), and between 24 to
36 months etc. Quarterly factors are obviously closer together than annual factors, so interpolating linearly,
while not perfect, may be acceptable. In practice, you will want to fit a curve to your factors, or perform
additional analysis to estimate them more accurately, but that is beyond the scope of the syllabus.

Spring 2020 © James Bedford, FCAS Page 100


Estimating Unpaid Claims Using Basic Techniques Introduction to Part 4: Estimating Unpaid LAE

Estimating Unpaid Claims Using Basic Techniques

Introduction to Part 4 – Estimating Unpaid Claim Adjustment Expenses


The final two chapters discuss estimating unpaid claim adjustment expenses (LAE). These expenses are needed
to adjust and pay out unpaid claims (i.e., if the insurance company stopped writing business today, how much
money do they need to hold to cover the expenses associated with unpaid claims). We need to hold reserves
for these amounts because in addition to paying out future claims we also need to be able to cover the expenses
of doing so and ensuring claims are properly adjusted so unpaid claims don’t grow unchecked.

LAE can be further broken down into expenses that can be allocated to a particular claim (ALAE) and those that
cannot (ULAE). Allocated claim adjustment expenses (ALAE) include items such as legal, expert witness, claims
adjusting, and investigation. Unallocated claim adjustment expenses (ULAE) include items such as salaries,
rent, and computer expense for the claims department.

In 1998 the NAIC created two new categories of LAE for P&C statutory Annual Statement reporting. They are
defense and cost containment (DCC) and adjusting and other (A&O). DCC and A&O are defined by the NAIC, so
definitions are uniform across insurers (the expenses defined as ALAE and ULAE may vary by insurance
company). DCC includes all defense litigation and medical cost containment expenses. A&O expenses include
all claims adjusting expenses. Some insurers now analyze DCC and A&O separately, while others continue to
analyze ALAE and ULAE, then allocate these estimates between DCC and A&O.

Roughly speaking, DCC is similar to ALAE while A&O is similar to ULAE. The methods used to analyze ALAE are
often used to analyze DCC—the same holds true for ULAE and A&O.

Spring 2020 © James Bedford, FCAS Page 101


Estimating Unpaid Claims Using Basic Techniques Chapter 16: Estimating Unpaid ALAE

Estimating Unpaid Claims Using Basic Techniques

Chapter 16 – Estimating Unpaid Allocated Claim Adjustment Expenses


Introduction

This chapter covers estimating unpaid allocated claim adjustment expenses (ALAE). Significantly more analysis
can be performed on expenses that can be assigned to individual claims (ALAE), than those that cannot (ULAE),
since more information exists. All of the techniques discussed thus far in the text can be used to estimate ALAE.
The greatest challenge of completing an ALAE analysis is often obtaining the data. Many insurers only record
ALAE payments and do not estimate case reserves for ALAE (thus reported ALAE amounts would not be
available). Some insurers keep detailed information on the types of ALAE paid (e.g., defense costs, claims
adjusting, investigating) while others may combine all types into a single ALAE bucket. Since certain types of
ALAE develop differently than others (e.g., legal fees v. everything else) we can increase the accuracy of our
estimate by performing a separate analysis by subgroup (legal fees being an important subgroup).

Actuaries sometimes combine claim and ALAE amounts when performing unpaid claim (and ALAE) analysis. It
is important to note for some lines of business development patterns may differ significantly between claims
and ALAE. For example, legal/defense fees may be paid out on an ongoing basis long before a claim payment is
made, while in other situations ALAE payments can lag claim payments. We must understand what is included
in the company’s definitions of ALAE and ensure the definition has not changed over the experience period.

The methods presented in this chapter are very similar to the methods used to estimate Sub-Sal in the previous
chapter.

Estimating Unpaid ALAE

The text discusses two specific techniques in addition to pointing out the methods covered in previous chapters
can also be used to evaluate ALAE separately from claims. The first technique discussed in the text is simply
the development method applied to paid and reported (if available) ALAE. I am not going to cover that any
further here since it is identical to the development method covered in Chapter 7, except applied to ALAE rather
than claims.

The second method develops the ratio of ALAE to paid claims to ultimate, then multiplies this ultimate ratio by
the estimated ultimate claims to find the estimated ultimate ALAE (nearly identically to what we did for Sub-
Sal).

The advantages of developing the ratio of ALAE to paid claims to ultimate are:

1) It recognizes the relationship between ALAE and claim amounts.


2) It will not be as highly leveraged as developing ALAE dollars (same as Sub-Sal).
3) Like in the Sub-Sal example, we can insert actuarial judgment in our selection of the final ratios if
results look out of line.

The disadvantages are any error in our estimated ultimate claims will result in an inaccurate unpaid ALAE
estimate, and some lines of business may have large amounts of ALAE on claims that settle with no payment.

Additionally, this chapter presents the idea of an additive development method, which as the name suggests is
simply the development method using additive factors rather than multiplicative factors. The additive
development method is really only appropriate when developing a triangle of ratios (as opposed to nominal
values). If ratios are very small at early maturities the additive development method can be more stable than
the multiplicative development method.

The example below demonstrates the second method, using the additive development technique. In Table 1
you can see that the age-age factors are the difference between successive periods, rather than the ratio. As
such, the cumulative development factor is the sum of the selected age-age factors, rather than the product.
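A brief sketch contrasting additive and multiplicative development of a single AY's ratios (the ratios happen to be the AY 2012 row from Table 1 below, so both approaches simply reproduce the mature value here):

```python
# Ratios of paid ALAE to paid claims for one AY at successive ages (AY 2012 row below)
ratios = [0.0053, 0.0074, 0.0081, 0.0086, 0.0087]

# Additive age-to-age factors are differences; the CDF to ultimate is their sum
additive_factors = [b - a for a, b in zip(ratios, ratios[1:])]
ult_ratio_additive = ratios[0] + sum(additive_factors)    # latest ratio plus additive CDF

# Multiplicative factors are the familiar ratios of successive values
mult_cdf = 1.0
for a, b in zip(ratios, ratios[1:]):
    mult_cdf *= b / a
ult_ratio_mult = ratios[0] * mult_cdf

print(round(ult_ratio_additive, 4), round(ult_ratio_mult, 4))   # both 0.0087 here
```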

Spring 2020 © James Bedford, FCAS Page 103


Estimating Unpaid Claims Using Basic Techniques Chapter 16: Estimating Unpaid ALAE

Table 1

Accident Ratio of Paid ALAE to Paid Claims Only


Year 12 24 36 48 60
2012 0.0053 0.0074 0.0081 0.0086 0.0087
2013 0.0054 0.0072 0.0081 0.0084
2014 0.0049 0.0071 0.0078
2015 0.0053 0.0072
2016 0.0043     (e.g., 0.0084 − 0.0081 = 0.0003)

Accident Age-to-Age Factors - Additive


Year 12 - 24 24 - 36 36 - 48 48 - 60 To Ult
2012 0.0021 0.0007 0.0005 0.0001
2013 0.0018 0.0009 0.0003
2014 0.0022 0.0007
2015 0.0019
2016

Simple Avg. 0.0020 0.0008 0.0004 0.0001


Selected 0.0020 0.0008 0.0004 0.0001 0.0000
Cumulative 0.0033 0.0013 0.0005 0.0001 0.0000

(e.g., cumulative at 24 months: 0.0008 + 0.0005 = 0.0013)

The projected ultimate ALAE ratio, shown in Col (3), is the sum of the ratio at the current valuation plus the
additive CDF (rather than times the CDF). Similar to the Sub-Sal example, we can interject judgment in the
selected factors—in this case we select the average of the previous years for AY 2016, rather than the low
projected ratio. The estimated ultimate ALAE is then the product of the estimated ultimate claims and the
selected ratio of ALAE to paid claims. We could also perform the analysis in a similar manner using multiplicative
CDFs, as usually done.

Table 2
                      Ratio of
                      Paid ALAE to        Additive   Projected   Selected Ult.   Estimated   Projected
Accident              Paid Claims Only    CDF        Ultimate    Paid-to-Paid    Ultimate    Ultimate
Year         Age      at 12/31/16         to Ult.    Ratio       Ratio           Claims      ALAE
                      (1)                 (2)        (3)         (4)             (5)         (6)
2012 60 0.0087 0.0000 0.0087 0.0087 62,875 547
2013 48 0.0084 0.0001 0.0085 0.0085 65,378 556
2014 36 0.0078 0.0005 0.0083 0.0083 77,954 647
2015 24 0.0072 0.0013 0.0085 0.0085 86,324 731
2016 12 0.0043 0.0033 0.0076 0.0085 97,541 829
Total 390,072 3,310

Column Notes:
(1) Latest diagonal of triangle
(2) Cumulative CDF (additive)
(3) = (1) + (2)
(4) = Selected based on (3)
(5) = Estimated separately (given here)
(6) = (4) * (5)

Spring 2020 © James Bedford, FCAS Page 104


Estimating Unpaid Claims Using Basic Techniques Chapter 17: Estimating Unpaid ULAE

Estimating Unpaid Claims Using Basic Techniques

Chapter 17 – Estimating Unpaid Unallocated Claim Adjustment Expenses


Introduction

This chapter discusses several methods for estimating unpaid unallocated claim adjustment expenses (ULAE).
Estimating unpaid ULAE presents a challenge because we have less information about the claims that are
incurring these expenses since these expenses cannot be assigned to individual claims (i.e., since these
expenses cannot be associated with an individual claim we do not know the AY and therefore don’t know the
maturity either). Since ULAE cannot be allocated to individual claims, we cannot group ULAE by accident year—
we can only observe calendar year payments of ULAE. ULAE includes expenses such as general overhead,
handling, administrative, and salary expenses of the claims department.

To predict the future ULAE expenses needed to cover current unpaid claims we can use dollar based or count
based estimates. These methods use either claim dollars or claim counts to estimate ULAE. Ideally data would
be available to allow us to perform both dollar and count based estimates.

ULAE also have a market value—the fees a TPA would charge to take over the claims management of the
current book of business. Self-insurers often set their unpaid ULAE estimate equal to this value.

In general, these methods will vary in how we measure and select the ULAE ratio, and how the ratio is applied
to estimate unpaid ULAE.

Dollar Based Techniques

Dollar based techniques assume ULAE tracks with claims dollars, both in timing and amount. Timing because
they assume ULAE occurs when claims are reported or paid, and amount because they assume a $10,000 claim
will incur 10x as much ULAE as a $1,000 claim. Next, we will cover several dollar based techniques.

Classical (Traditional) Technique

The classical technique uses the ratio of paid ULAE to paid claims to estimate future ULAE. The method selects
the ULAE ratio by reviewing the historic ratios of calendar year paid ULAE to calendar year paid claims. In the
example below we select a ULAE ratio of 5.3% based on the ratio of CY ULAE to CY paid claims in prior years,
shown in Col (4). The classical method applies the ULAE ratio by assuming one half of ULAE is incurred when
opening a claim, and the other half when closing a claim, so we apply 50% of the selected ULAE ratio to case
reserves (since they are already open) and the full ratio to IBNR (since these claims will both open and close in
the future). This results in an unpaid ULAE estimate of $21,060 in our example below.
Table 1
Ratio of
Calendar Paid Paid Paid ULAE to
Year ULAE Claims Paid Claims
(1) (2) (3) (4)
2012 12,115 214,286 0.057
2013 12,218 230,373 0.053
2014 12,242 214,929 0.057
2015 12,418 223,295 0.056
2016 12,577 251,609 0.050
Total 61,570 1,134,492 0.054 Notes:
(4) = (2) / (3)
(5) Selected ULAE Ratio 0.053 (5) Selected based on ULAE ratios in (4)
(6) Case Outstanding at 12/31/16 388,031 (6) Given; all lines combined
(7) Total IBNR at 12/31/16 203,346 (7) Estimated separately; all lines combined
(8) Estimated Unpaid ULAE at 12/31/16 21,060 (8) = (5) x [50% x (6) + (7)]

Spring 2020 © James Bedford, FCAS Page 105


Estimating Unpaid Claims Using Basic Techniques Chapter 17: Estimating Unpaid ULAE

The key assumptions of the classical technique are:

1) The ratio of paid ULAE to paid claims has reached the steady state (i.e., it won’t change in the future).
2) One half of ULAE is incurred when opening a claim, and the other half when closing a claim.
3) ULAE on unreported claims is proportional to IBNR and ULAE on open claims is proportional to case
reserves.

The assumption that 50% of ULAE is incurred when opening and the other 50% when closing the claim is likely
not a good assumption—actuaries can somewhat adjust for this by altering the ratios (e.g., 75% to case and the
full amount to IBNR for long tailed lines that require substantial claims handling through the life of the claim
(i.e., opening the claim may only incur 25% of the costs)).

A further enhancement we can make to this method is to estimate pure IBNR and apply the full ULAE ratio only
to that amount; the remaining IBNR relates to claims that have already been opened, so it is better to apply
50% of the selected ratio to those amounts rather than the full ratio. If in our example we assumed pure IBNR
makes up two thirds of the total IBNR, our new unpaid ULAE estimate would be lower, as shown below.

Table 1a

(5) Selected ULAE Ratio 0.053 Notes:


(6) Case Outstanding at 12/31/16 388,031 (5) Same as in Table 1
(7) Total IBNR at 12/31/16 203,346 (6) Given; all lines combined
(7a) Pure IBNR 135,564 (7) & (7a) Estimated separately; all lines combined
(8) Estimated Unpaid ULAE at 12/31/16 19,264 (8) = (5) x {50% x [(6) + (7) - (7a)] + (7a)}
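A short Python sketch of the classical calculation and the pure-IBNR refinement, using the figures from Tables 1 and 1a above:

```python
# Classical ULAE technique (figures from Tables 1 and 1a above)
ulae_ratio = 0.053       # selected ratio of CY paid ULAE to CY paid claims
case_os    = 388_031     # case outstanding at 12/31/16
total_ibnr = 203_346     # total IBNR at 12/31/16
pure_ibnr  = 135_564     # estimated pure (truly unreported) IBNR

# Basic application: 50% of the ratio to case reserves, full ratio to IBNR
unpaid_ulae = ulae_ratio * (0.50 * case_os + total_ibnr)

# Refinement: full ratio only on pure IBNR; 50% on everything already reported
unpaid_ulae_refined = ulae_ratio * (0.50 * (case_os + total_ibnr - pure_ibnr) + pure_ibnr)

print(round(unpaid_ulae), round(unpaid_ulae_refined))   # roughly 21,060 and 19,264
```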

The text mentions a common approximation of pure IBNR is to assume it equals 5% of the most recent AY's
ultimate claims (I fail to comprehend how this is a reasonable estimate, but I wanted to mention it for
completeness). The text also states we should test any assumptions we make when estimating pure IBNR. One
test would be to estimate pure IBNR as (Ult. Counts – Reported Counts) x Ultimate Severity.

In short, this method does a poor job of estimating unpaid ULAE in most situations—it assumes expenses are
incurred only when opening and closing claims, and it is inaccurate if volume is growing or declining, if there is
inflation, or if evaluating a long-tailed line of business. It will only give good results for very short tailed and
stable lines of business.

Kittel Refinement (to the Classical Technique)

The Kittel refinement adjusts how we estimate the ULAE ratio (but not how we apply it). Kittel observes ULAE
is also incurred when claims are reported, not just when claims are paid (as included in the denominator when
measuring the ULAE ratio in the classical technique). The classical technique recognizes this in the application
of the ratio, but not in measuring the ratio itself (it instead assumes a steady state where paid claims are
approximately equal to reported).

Instead of using calendar year paid claims in the denominator of the ratio, the Kittel refinement uses the
average of CY paid claims and CY reported claims (using the definition where CY reported = CY paid + Δ case +
Δ IBNR = CY paid + Δ unpaid). In measuring the ULAE ratio, this refined method assumes half of ULAE is
proportional to paid claims and half is proportional to reported claims.

The mechanics of the Kittel refinement are illustrated in the following example:

Spring 2020 © James Bedford, FCAS Page 106


Estimating Unpaid Claims Using Basic Techniques Chapter 17: Estimating Unpaid ULAE

                                                   Average of         Ratio of Paid ULAE to
Calendar     Paid       Paid        Reported       Paid and           Avg Paid and
Year         ULAE       Claims      Claims         Inc. Claims        Inc. Claims
(1)          (2)        (3)         (4)            (5)                (6)
2012 12,115 214,286 412,337 313,312 0.039
2013 12,218 230,373 379,249 304,811 0.040
2014 12,242 214,929 335,890 275,410 0.044
2015 12,418 223,295 333,564 278,430 0.045
2016 12,577 251,609 366,179 308,894 0.041
Total 61,570 1,134,492 1,827,219 1,480,856 0.042

(7) Selected ULAE Ratio 0.040


(8) Case Outstanding at 12/31/16 388,031
(9) Total IBNR at 12/31/16 203,346
(10) Estimated Unpaid ULAE at 12/31/16 15,894

Notes:
(5) = Average of (3) and (4).
(6) = (2) / (5)
(7) Selected based on ULAE ratios in (6)
(8) Given; all lines combined
(9) Estimated separately; all lines combined
(10) = (7) x [50% x (8) + (9)]

If CY reported claims are larger than CY paid claims the Kittel estimate will be smaller than that of the classical
technique (and vice-versa, since they are both in the denominator). Like with the classical technique, we can
improve our estimate by splitting out pure IBNR and applying 50% of the ratio to everything except the truly
unreported piece (pure IBNR), and 100% only to the pure IBNR.

The key assumptions of the Kittel refinement are:

1) ULAE is incurred as claims are reported, even if no payments are made.


2) Calendar year ULAE payments are related to both the reporting and payment of claims.
3) ULAE on unreported claims is proportional to IBNR and ULAE on open claims is proportional to case
reserves.

While the Kittel technique addresses the distortion in the classical technique when the insurer is growing (or
shrinking), it still measures and selects the ULAE ratio assuming 50% of ULAE is incurred when opening and
50% while closing a claim (we can select a different allocation when applying the ratio as discussed in the
classical discussion).
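A one-line sketch of the Kittel measurement, using the CY 2016 figures from the table above: the denominator is the average of CY paid and CY reported claims, and the application is unchanged from the classical technique.

```python
# Kittel refinement: measure the ratio against the average of CY paid and reported claims
cy_paid_ulae   = 12_577
cy_paid_claims = 251_609
cy_reported    = 366_179      # CY reported = CY paid + change in case + change in IBNR

kittel_ratio = cy_paid_ulae / (0.5 * (cy_paid_claims + cy_reported))
print(round(kittel_ratio, 3))                        # about 0.041 for CY 2016

# Application is the same as the classical technique, e.g. with the selected ratio of 0.040
print(round(0.040 * (0.50 * 388_031 + 203_346)))     # about 15,894
```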

Conger and Nolibos—Generalized Kittel

The generalized Kittel allows judgment in setting the weights used to calculate the ULAE ratio. The Kittel
refinement selects the ULAE ratio assuming 50% of ULAE is incurred when opening a claim and 50% when
closing a claim and accomplishes this by using a simple average of CY paid claims and CY reported claims in the
denominator of the ratio.

The generalized Kittel adds a third component to measure the costs associated with maintaining the claims that
remain open. This maintenance cost can be a large portion of the total expense in long tailed lines such as
workers’ compensation. Conger and Nolibos believe handling larger claims requires more resources, so they
use dollars rather than counts (like the previous methods).

Spring 2020 © James Bedford, FCAS Page 107


Estimating Unpaid Claims Using Basic Techniques Chapter 17: Estimating Unpaid ULAE

The key assumptions of the generalized Kittel approach include:

1) ULAE costs are related to claims dollars (same for all dollar based methods, but not explicitly listed for
the prior two in the text).
2) ULAE costs for opening claims are proportional to the ultimate estimate of claims reported in the
period (RY ultimate).
3) ULAE costs for maintaining claims are proportional to payments made (CY paid claims).
4) ULAE costs for closing claims are proportional to ultimate costs on claims closed in the year.

The “claims basis” (denominator of the ULAE ratio) is then equal to:

Claims Basis = U1 × (ultimate cost of claims reported in the year) + U2 × (CY paid claims) + U3 × (ultimate cost of claims closed in the year)

where:

U1 is the assumed percentage of ultimate ULAE spent opening claims;


U2 is the assumed percentage of ultimate ULAE spent maintaining claims;
U3 is the assumed percentage of ultimate ULAE spent closing claims (since the generalized method allocates
costs to maintaining claims, it is standard to assign no additional costs to closing claims).

The example below assumes 60% of ULAE is spent opening claims and 40% is spent maintaining claims (with no
additional cost to close a claim).

Table 3 - Conger-Nolibos 60/40

Calendar     Paid       Report Year     Paid        Claims      ULAE
Year         ULAE       Ult. Claims     Claims      Basis       Ratio
(1)          (2)        (3)             (4)         (5)         (6)
2012 12,115 258,741 214,286 240,959 0.050
2013 12,218 268,102 230,373 253,010 0.048
2014 12,242 275,070 214,929 251,014 0.049
2015 12,418 281,072 223,295 257,961 0.048
2016 12,577 285,170 251,609 271,746 0.046 Notes:
Total 61,570 1,368,155 1,134,492 1,274,690 0.048 (3) Estimated separately; all lines combined
(5) = .60*(3) + .40*(4)
(7) Selected ULAE Ratio 0.048 (6) = (2) / (5)
(8) Ultimate Claims 2,051,885 (7) Selected based on ULAE ratios in (6)
(9) Indicated Unpaid ULAE Using: (8) Estimated separately; all lines combined
(a) Expected Claim Method 36,920 (9a) = [(7) x (8)] - (2) Total
(b) Bornhuetter-Ferguson Method 37,305 (9b) = (7) x [(8) - (5) Total]
(c) Development Method 37,540 (9c) = {[(8) / (5) Total] - 1.00} x (2) Total

As you can see in Row (9) above, there are 3 ways to estimate unpaid ULAE when using the generalized
technique:
1) The first (9a) is analogous to the expected claims method as it estimates expected ultimate ULAE [(7)
x (8)] and subtracts paid ULAE to date [(2) Total]. The authors do not like this option since it can be
difficult to quantify CY paid ULAE corresponding to only the AYs contained in (8), and it can also result
in unrealistic estimates if claims do not approach the expected ultimate claims in (8).
2) The second (9b) is analogous to the BF method and is the preferred method of the authors (and is most
similar to classical and Kittel methods). It multiplies the selected ULAE ratio by expected remaining
claims (estimated ultimate claims minus the claims basis).
3) The third (9c) is analogous to the development method, using ultimate claims/claims basis as the CDF.
Like the first way, this relies on being able to quantify CY paid ULAE corresponding to the AYs
contained in (8) and can be overly responsive to random fluctuations in paid ULAE (just like the
development method applied to claims is).
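A compact sketch of the claims basis and the three application options, using the Table 3 totals (U1 = 60%, U2 = 40%, U3 = 0); the comments map each result to rows (9a)-(9c) above.

```python
# Conger-Nolibos generalized technique (totals from Table 3; U1 = 0.6, U2 = 0.4, U3 = 0)
U1, U2 = 0.60, 0.40
paid_ulae_total = 61_570
ry_ult_total    = 1_368_155    # report year ultimate claims, all years combined
cy_paid_total   = 1_134_492    # CY paid claims, all years combined
ultimate_claims = 2_051_885    # estimated separately

claims_basis = U1 * ry_ult_total + U2 * cy_paid_total     # about 1,274,690
ulae_ratio   = 0.048                                      # selected from the by-year ratios

expected_claims_style = ulae_ratio * ultimate_claims - paid_ulae_total            # (9a) ~ 36,920
bf_style              = ulae_ratio * (ultimate_claims - claims_basis)             # (9b) ~ 37,305
development_style     = (ultimate_claims / claims_basis - 1.0) * paid_ulae_total  # (9c) ~ 37,540
print(round(expected_claims_style), round(bf_style), round(development_style))
```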

Spring 2020 © James Bedford, FCAS Page 108


Estimating Unpaid Claims Using Basic Techniques Chapter 17: Estimating Unpaid ULAE

Simplification of the Generalized Kittel

Conger and Nolibos note that estimating the RY ultimates and the ultimate cost of claims closed in each year
may not be a trivial exercise, so they present a simplification to the generalized technique where these
estimates are not needed (we didn’t need the estimate of ultimate claim costs closed in each year in our
previous example since we made the common assumption that U3 equals 0).

The first simplification is using estimated ultimate claims by AY as a proxy for the ultimate cost of claims
reported in the calendar year. The second simplification is assuming that since we have allocated costs for
maintaining claims, there are no additional costs to actually close the claim (thus U3 is 0, meaning we don’t need
an estimate of ultimate cost of claims closed in each year since it is multiplied by 0, as in our prior example).
This results in the claims basis being equal to U1 × (AY ultimate claims) + U2 × (CY paid claims). From there
we find the ULAE ratio like normal—the CY paid ULAE divided by the claims basis.
Unfortunately, the simplification does not extend to the application of the ratio—only to the estimation of the
ratio. The math and notation are confusing in the paper (they use the same notation to mean different things in
the text), so I would just recommend memorizing the resulting formula for our estimated unpaid ULAE when
using the simplification:

Estimated Unpaid ULAE = W × [U1 × Pure IBNR + U2 × (Expected Ultimate Claims − Σ CY Paid Claims)]

where W is the selected ULAE ratio.

The following example shows the simplification to the generalized Kittel using U1 = 60% and U2 = 40%.

Table 4
             Cal Year     Acc Year      Cal Year
             Paid         Ultimate      Paid         Claims      ULAE
Year         ULAE         Claims        Claims       Basis       Ratio
(1)          (2)          (3)           (4)          (5)         (6)
2012 12,115 318,905 214,286 277,057 0.044
2013 12,218 320,581 230,373 284,498 0.043
2014 12,242 312,728 214,929 273,608 0.045
2015 12,418 304,484 223,295 272,008 0.046 Notes:
2016 12,577 295,187 251,609 277,756 0.045 (3) Estimated separately; all lines combined
Total 61,570 1,551,885 1,134,492 1,384,928 0.044 (5) = .60*(3) + .40*(4)
(6) = (2) / (5)
(7) Selected ULAE Ratio 0.044 (7) Selected based on ULAE ratios in (6)
(8) Ultimate Claims 1,551,885 (8) = Col (3) Total
(9) Estimated Pure IBNR 135,564 (9) Estimated separately; all lines combined
(10) Indicated Unpaid ULAE 10,925 (10) = (7) x [60% x (9) + 40% x {(8) - (4) Total}]
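A short sketch of the simplified application, reproducing row (10) of Table 4 (U1 = 60%, U2 = 40%):

```python
# Simplified Conger-Nolibos application (figures from Table 4; U1 = 0.6, U2 = 0.4)
U1, U2 = 0.60, 0.40
ulae_ratio      = 0.044
pure_ibnr       = 135_564
ultimate_claims = 1_551_885    # total AY ultimate claims
cy_paid_total   = 1_134_492    # total CY paid claims

unpaid_ulae = ulae_ratio * (U1 * pure_ibnr + U2 * (ultimate_claims - cy_paid_total))
print(round(unpaid_ulae))      # about 10,925
```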

Mango-Allen Refinement to the Classical Technique

Mango-Allen present a method we can use if the actual historical CY claims are volatile. The method is simply
the classical technique, but uses expected CY paid claims in place of actual CY paid claims.

If actual CY claim payments are volatile, actual AY claim payments will also be volatile, so the authors suggest
multiplying an expected claims ratio by earned premium to find AY expected claims (like the expected claims
method does), then using industry payment patterns to spread the AY claims out over future CYs, like in the
following table.

Spring 2020 © James Bedford, FCAS Page 109


Estimating Unpaid Claims Using Basic Techniques Chapter 17: Estimating Unpaid ULAE

Table 5
Direct
Accident Earned Expected Expected Payment % in CY Expected Claims Paid in CY
Year Premium ECR Claims 2013 2014 2015 2016 2013 2014 2015 2016
(1) (2) (3) (4) (5) (6) (7) (8) (9) (10) (11) (12)
2013 699,800 60% 419,880 12% 15% 15% 15% 50,386 62,982 62,982 62,982
2014 668,880 60% 401,328 12% 15% 15% 48,159 60,199 60,199
2015 655,520 60% 393,312 12% 15% 47,197 58,997
2016 627,120 60% 376,272 12% 45,153
Total 2,651,320 1,590,792 50,386 111,141 170,379 227,331

Notes:
(3) Estimated in separate analysis
(4) = (2) x (3)
(5)-(8) Based on industry data
(9) to (12) = (4) x [(5) to(8)]
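A brief sketch of spreading one AY's expected claims into expected CY payments, using the AY 2013 inputs from Table 5 (the payment percentages would come from industry data):

```python
# Mango-Allen refinement: spread AY expected claims into future CYs using payment percentages
earned_premium  = 699_800
expected_clr    = 0.60
expected_claims = expected_clr * earned_premium     # AY 2013 expected claims, about 419,880

payment_pct_by_cy = {2013: 0.12, 2014: 0.15, 2015: 0.15, 2016: 0.15}   # industry-based pattern
expected_cy_paid  = {cy: expected_claims * pct for cy, pct in payment_pct_by_cy.items()}
print({cy: round(amt) for cy, amt in expected_cy_paid.items()})
# The ULAE ratio is then CY paid ULAE / expected CY paid claims, applied as in the classical technique
```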

Once we have expected CY payments (total row of Col (9)-(12)) the method is identical to the classical
technique (shown with and without the adjustment for pure IBNR).

Table 6
Paid ULAE
Calendar Paid Expected to Expected
Year ULAE Paid Claims Paid Claims
(1) (2) (3) (4)
2013 12,218 50,386 0.242
2014 12,242 111,141 0.110
2015 12,418 170,379 0.073
2016 12,577 227,331 0.055
Total 49,455 559,236 0.088

(5) Selected ULAE Ratio 0.070 Notes:


(6) Case Outstanding at 12/31/16 388,031 (3) Developed in Table 5, Cols (9) - (12) Total
(7) Total IBNR at 12/31/16 203,346 (4) = (2) / (3)
(8) Pure IBNR at 12/31/16 135,564 (5) Selected based on ULAE ratios in (4)
(9) Estimated Unpaid ULAE at 12/31/16 27,815 (6) Given; all lines combined
w/o pure IBNR adjustment (7)-(8) Estimated separately; all lines combined
(10) Estimated Unpaid ULAE at 12/31/16 25,443 (9) = (5) x [50% x (6) + (7)]
w/ pure IBNR adjustment (10) = (5) x [50% x {(6) + (7) - (8)} + (8)]

The Mango-Allen refinement is useful for insurers with limited or highly volatile claims payment experience as
the method will increase stability. The refinement is unlikely to be necessary, or improve the estimate, for
insurers with sufficient volume.

A disadvantage of the dollar based techniques covered in this section is the assumption ULAE tracks with
magnitude of claims dollars, which may not be the case in reality (e.g., the ULAE on a $1M claim may be less
than the ULAE on 10 $100,000 claims). An additional disadvantage is ULAE becomes a “rider” on the estimate
of unpaid claims, responding to the volatility in our estimate— we do not expect actual unpaid ULAE to respond
fully to these fluctuations. We will explore count based techniques next.

Spring 2020 © James Bedford, FCAS Page 110


Estimating Unpaid Claims Using Basic Techniques Chapter 17: Estimating Unpaid ULAE

Count Based Techniques

Rather than assuming ULAE is proportional to claim dollars, count based techniques assume ULAE is
proportional to claim counts. We will first explore the generalized approach to claim counts. The approach is
similar to the generalized Kittel, but uses counts rather than dollars. Several of the methods covered in this
section are special cases of the generalized technique. One of the largest challenges of using count based
techniques is obtaining accurate and consistent data.

Generalized Approach to Claim Counts

The generalized approach uses relative costs of opening (v1), maintaining (v2), and closing (v3) claims. The
relative weights are then applied to CY reported, CY open (end of period), and CY closed claims, respectively,
to find a weighted “count basis”:

Count Basis = v1 × (CY reported counts) + v2 × (open counts at end of CY) + v3 × (CY closed counts)

In the following example, we assume opening a claim costs twice as much ULAE as maintaining a claim, and
closing a claim costs ¼ the ULAE as maintaining (i.e., a weight of 2.0 to reported counts, 1.0 to open counts (at
end of the year), and 0.25 to closed counts). We then find the ULAE ratio relative to this count basis.

Table 7

Calendar Paid Reported Open Counts Closed Count ULAE


Year ULAE Counts (in CY) (at end of CY) Counts (in CY) Basis Ratio
(1) (2) (3) (4) (5) (6) (7)
2012 12,115 597 451 546 1,597 7.58
2013 12,218 604 436 619 1,662 7.35
2014 12,242 615 430 621 1,669 7.33
2015 12,418 628 435 623 1,689 7.35
2016 12,577 637 440 632 1,712 7.35
Total 61,570 3,081 2,192 3,041 8,330 7.39

Notes: (8) Selected ULAE Ratio 7.39


(6) = 2.0*(3) + 1.0*(4) + 0.25*(5)
(7) = (2) / (6)
(8) Selected based on ULAE ratios in (7)

After we have selected the ULAE ratio, we must apply it to each future CY separately. This example assumes all
claims will be closed in the next 4 years, shown in Table 8. The unpaid ULAE for each future CY is the selected
ULAE ratio times v1*Reported Counts + v2*Open Counts + v3*Closed Counts. The total expected unpaid ULAE is
the sum of unpaid ULAE by CY, or 12,929 in the example below. We must look at counts by CY separately to
capture the number of open counts at the end of each year.

The text mentions we could vary our selected ULAE ratio by future period (e.g., due to inflation), in which case
we would need to calculate unpaid ULAE by CY (we don't vary the ULAE ratio by year, but do calculate unpaid
ULAE by year in our example below). If we use a fixed ULAE ratio we can sum the counts across CYs and
apply the formula once to the total counts, assuming we don't otherwise need separate estimates by CY (i.e.,
apply the formula to the total row alone).

Spring 2020 © James Bedford, FCAS Page 111


Estimating Unpaid Claims Using Basic Techniques Chapter 17: Estimating Unpaid ULAE

Table 8

Calendar Reported Open Counts Closed Unpaid


Year Counts (in CY) (at end of CY) Counts (in CY) ULAE
(1) (2) (3) (4) (5)
2017 215 305 350 6,079
2018 147 177 275 3,989
2019 82 75 184 2,106
2020 37 0 112 754
Total 481 557 921 12,929

Notes:
(2)-(4) Estimated run off of claims occurring on or before the valuation date
(5) = ULAE Ratio *( 2.0*(2) + 1.0*(3) + 0.25*(4))

Note, for the future CYs we should only include counts from claims incurred on or before the valuation date (i.e.,
only accidents occurring before the valuation date, regardless of whether they have been reported). Also, claims
that are open for multiple periods will be counted multiple times, which is what we want since we assume ULAE
will be incurred on those claims as long as they are open.

Wendy Johnson Technique

Wendy Johnson’s technique focuses on the relative ULAE costs of the reporting and maintenance of claims. She
points out that we only need to measure the relative costs (for use in the claims basis) rather than actual costs.
The above generalized method is a generalization of her method, where she uses relative weights of v1 = 2.0,
v2 = 1.0, and v3 = 0.0 (i.e., ULAE incurred when a claim is opened is twice as much as incurred maintaining a
claim for a year, and closing the claim costs nothing additional).

Mango-Allen—Staffing Technique

Mango and Allen present a claim based ULAE model that uses open, closed, and pending (OCP) claims along
with a measure of claims staff per OCP to estimate future staffing levels, to predict future ULAE. The method
first predicts future OCP claims, then multiplies that by future expected staffing levels per OCP claim, giving
expected staff, which is multiplied by an estimate of ULAE per staff to estimate future ULAE.

Mango and Allen say their exposure base of OCP claims is appealing because it is a reasonable proxy for future
claims department activity, is count based, and uses information that can be derived from a typical reserve study.

Rahardjo

In her paper, Rahardjo recommends incorporating duration into our calculation. She states “As duration
increases, so does the expense of handling the claim for the remainder of the claim’s life.”

Spalla

Spalla recommends using modern systems to track the time the claims department spends on its various
activities, and using that detailed information to perform a more granular analysis of unpaid ULAE. If doing this,
it is important to retroactively test the method to ensure it gives reasonable results.

Early Count Techniques

An early proposal for a count based ULAE estimation technique, proposed by R.E. Brian, gave equal weight to
opening a claim, maintaining a claim, making a single payment, closing a claim, and reopening a claim. The
actuary would then use the sum of these transactions as the basis with which to measure and predict ULAE.
This proposal set forth future work (covered above) that further refined the weights and transactions used to
predict future ULAE.


Triangle Based Techniques

Since actual ULAE by accident year and age are unavailable (since ULAE cannot be mapped to individual claims)
we do not evaluate ULAE in a triangle like we do claims, ALAE, and claim counts. The text mentions the actuary
could make assumptions about ULAE payments based on claim payments, or studies of claims department time
spent by AY, to create a ULAE triangle. While the text says it is theoretically possible to create and evaluate a
ULAE triangle, it points out this is rarely done in practice.

Conclusion

In practice actuaries often only use a single method for estimating unpaid ULAE. The actuary should put
thought into which method(s) are appropriate for their given situation. The actuary should check their
estimates for reasonability by estimating the expected number of future CY payments the ULAE estimate would
cover. We would expect fewer years for short tailed lines and more for longer tailed lines.
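
For example, one rough way to apply this check using the figures above: the estimated unpaid ULAE of 12,929
divided by the latest calendar year's paid ULAE of 12,577 is roughly 1.0, i.e., the estimate covers about one year
of future ULAE payments—plausible for a shorter tailed book, but likely light for a long tailed one.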



Basic Ratemaking Notation

The following notation will be used throughout the Ratemaking material—primarily beginning in
Chapter 7. Keep it handy to reference back to later.

X (#) = Exposures

P ($); P̄ ($) = Premium; Average premium (P divided by X)

P_C ($); P̄_C ($) = Premium at current rates; Average premium at current rates (P_C divided by X)

P_I ($); P̄_I ($) = Indicated premium; Average indicated premium (P_I divided by X)

P_P ($); P̄_P ($) = Premium at proposed rates; Average premium at proposed rates (P_P divided by X)

L ($); L̄ ($) = Losses; Pure Premium (L divided by X)

E_L ($); Ē_L ($) = Loss Adjustment Expense (LAE); Average LAE per exposure (E_L divided by X)

E_F ($); Ē_F ($) = Fixed underwriting expenses; Average underwriting expense per exposure (E_F divided by X)

E_V ($) = Variable underwriting expenses

F (%) = Fixed expense ratio (E_F divided by P_C)

V (%) = Variable expense provision (E_V divided by P_C)

Q_C (%) = Profit percentage at current rates

Q_T (%) = Target profit percentage

B_C ($) = Current base rate

B_P ($) = Proposed base rate

R_{1,i,C} (#) = Current relativity for the ith level of rating variable R1

R_{1,i,P} (#) = Proposed relativity for the ith level of rating variable R1

A_C ($) = Current fixed additive fee

A_P ($) = Proposed fixed additive fee


Basic Ratemaking

Chapter 1 – Introduction
Introduction

Insurance is different from most products in the economy as we do not know its cost up front. Insurance is the
promise to do something (pay losses) in the future if certain events occur during a specified time. The purpose
of this text is to explain the fundamentals of setting insurance prices—also known as ratemaking.

The CAS Statement of Principles Regarding Property and Casualty Insurance Ratemaking provides the
following four principles applicable to the determination and review of property and casualty insurance rates.

Principle 1: A rate is an estimate of the expected value of future costs.

Ratemaking should provide for all costs so that the insurance system is financially sound.

Principle 2: A rate provides for all costs associated with the transfer of risk.

Ratemaking should provide for the costs of an individual risk transfer so that equity among insureds is
maintained. When the experience of an individual risk does not provide a credible basis for estimating these
costs, it is appropriate to consider the aggregate experience of similar risks. A rate estimated from such
experience is an estimate of the costs of the risk transfer for each individual in the class.

Principle 3: A rate provides for the costs associated with an individual risk transfer.

Ratemaking produces cost estimates that are actuarially sound if the estimation is based on Principles 1, 2, and
3. Such rates comply with four criteria commonly used by actuaries: reasonable, not excessive, not inadequate,
and not unfairly discriminatory.

Principle 4: A rate is reasonable and not excessive, inadequate, or unfairly discriminatory if it is an actuarially
sound estimate of the expected value of all future costs associated with an individual risk transfer.

Basic Insurance Terms

Exposure is the basis we use to measure risk. This basis varies by insurance line of business. For example, $100
of payroll is almost always used as the exposure base for workers compensation while the exposure for a
homeowner’s policy is simply one (the home itself). There are four ways we can measure exposure:

· Written Exposure: total exposures of policies issued in a given timeframe.


· Earned Exposure: portion of written exposures for which coverage has been provided as of a given
time.
· Unearned Exposure: portion of written exposures for which coverage has not yet been provided as of
a given time.
· In-Force Exposure: the total exposure of policies in-force at a given time.

Exposures are covered in much more depth in Chapter 4.

Premium is the cost the insured pays the insurer for insurance coverage (includes the expected loss on the
policy, all expenses incurred for writing/servicing the policy, plus a profit and contingency margin). Like
exposure, there are four ways we can measure premium:

· Written Premium (WP): total premium of policies issued in a given timeframe.


· Earned Premium (EP): portion of written premium for which coverage has been provided as of a given
time.
· Unearned Premium (UEP): portion of written premium for which coverage has not yet been provided
as of a given time.


· In-Force Premium: the total premium of policies in-force at a given time.

Premiums are covered in much more depth in Chapter 5.

Claim is the demand for payment due to a covered event occurring. The individual making the claim is called
the claimant and may be the insured (1st party coverage) or another party claiming injuries or damage covered
by the policy (3rd party coverage). For many coverages the accident causing a claim is sudden, so assigning the
claim to an accident date is straightforward. Sometimes the claim is the result of continuous strain or repeated
exposure to a hazardous condition—in such a case the accident date is often when the damage or loss becomes
apparent.

Sometimes there are long lags between the occurrence of the loss event and the reporting of the claim—during
this time the claim is unreported and also called an incurred but not reported (IBNR) claim. After the claim is
known, but prior to it settling it is considered an open claim.

In the reserving text “claims” represented the dollar amount of claims. In the ratemaking text “claims”
represents the count of claims.

Loss is the dollar amount of compensation paid or payable to a claimant under the terms of the policy. The
reserving text uses the term “claims” for this amount. In practice “loss” and “claims” are often used
interchangeably to mean the dollar amount, but “loss” is more often used in the ratemaking context, so that is
the term used in this text. The terms associated with loss are paid loss, case reserves, reported loss (or case
incurred loss), IBNR/IBNER reserves, and ultimate loss:

· Paid losses are the amounts that have been paid out to claimants.
· Case reserves are reserves set by claims professionals on known claims. Case reserves exclude any
payments already made, and are monitored and adjusted as more information on the claim becomes
available.
· Reported losses (or case incurred losses) are the sum of paid losses and case reserves, or the claims
professionals estimate of ultimate losses on known claims.
· Ultimate loss is the amount of money required to close and settle all claims in a defined group. This
amount differs from reported losses for two reasons:
- Reported losses only account for known claims—there are often unreported claims. The
reserves for unreported claims are estimated by the actuary and are called IBNR (incurred but
not reported) reserves.
- Case reserves are based on information known at the time the reserve is set—as more
information becomes available these estimates sometimes increase. This development on case
reserves is estimated by the actuary and known as IBNER (incurred but not enough reported)
reserves.
· In practice, IBNR and IBNER are often estimated together and simply called IBNR (the broad definition
of IBNR). Therefore, ultimate losses are reported losses plus IBNR (broad). Going forward “IBNR” will
refer to the broad definition unless otherwise noted.
(Diagram: Ultimate = Paid + Case + IBNR; the Paid + Case portion is Reported, and the Case + IBNR portion is Unpaid.)

Loss Adjustment Expense (LAE) is the expense incurred in settling claims. Some of these expenses can be
allocated to individual claims (ALAE) while others cannot (ULAE). Allocated loss adjustment expenses (ALAE)
include items such as fees for outside legal counsel. Unallocated loss adjustment expenses (ULAE) include items
that cannot be allocated to individual claims, such as salaries of claims department personnel. Loss and
LAE are covered in much more detail in Chapter 6.


Underwriting Expenses are incurred in the acquisition and servicing of policies. Underwriting expenses are
usually classified into the following four groups:

· Commissions and Brokerage: the amounts paid to agents or brokers as compensation for generating
business. They are generally a percentage of premium and often vary by new/renewal, the volume of
business and/or the quality of the business written.
· Other Acquisition: the costs of acquiring business, other than commissions and brokerage fees,
including media advertising and mailings.
· General: the remaining costs of the insurance operations and other miscellaneous costs. General
expenses include items such as the home office and personnel salaries.
· Taxes, Licenses, and Fees: includes all taxes and miscellaneous fees paid by the insurer (excluding
federal income tax). Examples include premium taxes and licensing fees.

Underwriting (UW) Profit is the premium minus loss and expense. As the ultimate cost of insurance is
unknown at the time of the sale, the insurance company is undertaking risk for which it must maintain capital.
The insurance company is entitled to a reasonable return on this capital and profit. The underwriting profit is
discussed further in Chapter 7. Additionally, the insurance company may earn profit from investment income on
reserves (due to the time between premium payment and loss payment). Discussion of the calculation of
investment income is beyond the text's scope.

Fundamental Insurance Equation

The basic economic relationship for the price of a product is:

Price = Cost + Profit

In the insurance context premium is the price, expected losses and expenses are the costs (since we do not
know their ultimate values at the time we write the policy), and UW profit is the profit. Therefore, the
fundamental insurance equation is:

Premium = Losses + LAE + UW Expenses + UW Profit



The goal of ratemaking is to set premiums to the level that balances the fundamental insurance equation in
the prospective period, both in aggregate and at the individual level. Since the ultimate costs are unknown at
the time of the sale, the costs above represent their expected values and are estimated.

It is important to remember ratemaking is prospective—we use historical data adjusted to expected future
levels to predict costs, but we do not try to recuperate prior losses. As we will see in the following chapters
the key in using historical data to estimate future costs is to make adjustments as necessary to project the
various components to the level expected in the period the rates will be in effect.

It is important to balance the fundamental insurance equation both overall, and at the individual (or
classification) level. Overall rate balance can be achieved by uniformly increasing or decreasing rates—this is
called the rate indication and is covered in Chapter 8. Individual rate balance is important to ensure
equitable rates and avoid adverse selection. Ratemaking at the individual (or segment) level is known as
classification ratemaking and is covered in Chapters 9-11.


Basic Insurance Ratios

Insurers, regulators, ratings agencies, and investors (among others) often rely on some basic ratios to evaluate
an insurance company’s performance. The following are brief overviews (each will be covered more in-depth
in later chapters).

Frequency is the measure of claims per exposure. Generally, the numerator is reported claims and the
denominator is earned exposure, but this may vary based on the need, so it is important to document the types
of claims and exposures used. Changes in frequency provide information on the incidence of claims and
effectiveness of underwriting changes.

Frequency = Number of Claims / Number of Exposures

Severity is the average claim cost. Severity may be measured in several ways—paid severity (paid
losses/closed claims) and reported severity (reported losses/reported claims) for example. ALAE may or may
not be included with loss. As such, it is important to document the types of loss and claims used, as well as the
inclusion or exclusion of ALAE. Changes in severity provide information on loss trends and changes in claims
handling.

Severity = Losses / Number of Claims

Pure Premium (a.k.a. loss cost or burning cost) is the average loss per exposure. Generally, pure premium is
calculated using reported (or ultimate) losses and earned exposures, and may or may not include ALAE.

Pure Premium = Losses / Number of Exposures

Pure premium is also equal to frequency times severity:

Pure Premium = Losses / Exposures = (Claims / Exposures) × (Losses / Claims) = Frequency × Severity

Changes in pure premium provide information on trends in overall loss costs, which may be due to either, or
a combination of, frequency and severity.

Average Premium is the average premium per exposure. It is important that premiums and exposures are on
the same basis (i.e., written, earned, unearned, or in-force). The prior ratios analyzed the loss side of the
equation while this measures the premium side. Changes in average premium (after adjustment for rate level
changes) can indicate changes in the mix of business (e.g., shifts towards higher or lower cost insureds).

Average Premium = Premium / Number of Exposures

Loss Ratio is the ratio of loss to premium. As with the other ratios, the definition of the values in the numerator
and denominator may vary by situation, so it is important to document and understand each. Loss ratio also
may or may not include ALAE and ULAE. The most common definition uses reported (or ultimate) losses and
earned premium.

Loss Ratio = Losses / Premium = Pure Premium / Average Premium


Loss Adjustment Expense Ratio is the ratio of LAE (ALAE+ULAE) to total losses (not premium). Companies
use this ratio to determine if claim settlement costs are stable or not, and to compare to other companies.

Since the LAE ratio is relative to loss and not premium, the Loss & LAE ratio is not the sum of the Loss Ratio
and LAE Ratio, but rather the product of the Loss Ratio and one plus the LAE Ratio (assuming the loss ratio only
contains losses in the numerator).

Loss & LAE Ratio = Loss Ratio × (1 + LAE Ratio) = (Losses / Earned Premium) × (1 + LAE / Losses)

Loss & LAE Ratio = Losses / Earned Premium + LAE / Earned Premium = (Losses + LAE) / Earned Premium

Underwriting (UW) Expense Ratio is the ratio of UW expenses to premium (i.e., the portion of each premium
dollar going towards UW expenses). The general formula is as follows:

UW Expense Ratio = UW Expenses / Premium

UW expenses that are paid up front (Commissions and Brokerage, Other Acquisition, and Taxes, Licenses, and
Fees) are generally measured relative to written premium, while UW expenses that are incurred throughout
the policy (General expenses) are generally measured relative to earned premium, as to align the expense
payments to the premiums associated with those expenses to better estimate expenses on future policy
premium. These individual expense ratios are then added together to get the overall UW expense ratio:
UW Expense Ratio = (Commissions & Brokerage + Other Acquisition + Taxes, Licenses, and Fees) / Written Premium + General Expenses / Earned Premium

Operating Expense Ratio (OER) measures the portion of each premium dollar used for LAE and UW expenses.

OER = UW Expense Ratio + LAE / Earned Premium

Note, LAE is divided by Earned Premium here, not Loss as in the LAE Ratio.

Combined Ratio is the primary measure of profitability of a book of business. It is the portion of each premium
dollar going towards losses and expenses. The text gives the general formula as:

Combined Ratio = Loss Ratio + LAE / Earned Premium + UW Expenses / Written Premium

The above formula uses a simplified UW Expense Ratio—we can instead use the more accurate UW Expense
Ratio calculated a moment ago to find the Combined Ratio as:

Combined Ratio = Loss Ratio + LAE / Earned Premium + UW Expense Ratio = Loss Ratio + OER
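
The following is a small Python sketch tying the ratios in this section together; all input figures and variable
names are hypothetical, chosen only to illustrate the formulas above.

```python
# Hypothetical aggregate figures for one year (illustrative only).
earned_exposure = 10_000          # e.g., earned car years
earned_premium  = 5_000_000
written_premium = 5_200_000
reported_claims = 800
reported_losses = 3_000_000       # excluding LAE
lae             = 450_000         # ALAE + ULAE
uw_exp_upfront  = 900_000         # commissions, other acquisition, taxes/licenses/fees
uw_exp_general  = 300_000         # general expenses

frequency       = reported_claims / earned_exposure   # 0.08
severity        = reported_losses / reported_claims   # 3,750
pure_premium    = reported_losses / earned_exposure   # 300 = frequency * severity
average_premium = earned_premium / earned_exposure    # 500
loss_ratio      = reported_losses / earned_premium    # 0.60 = pure_premium / average_premium
lae_ratio       = lae / reported_losses               # 0.15 (relative to loss, not premium)
loss_and_lae    = loss_ratio * (1 + lae_ratio)        # 0.69 = (losses + LAE) / EP
uw_exp_ratio    = uw_exp_upfront / written_premium + uw_exp_general / earned_premium
oer             = uw_exp_ratio + lae / earned_premium
combined_ratio  = loss_ratio + oer                    # loss ratio + LAE/EP + UW expense ratio
```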


Retention Ratio measures the rate that expiring policies renew:

Retention Ratio = Number of Policies Renewed / Number of Potential Renewal Policies

As with many of the prior ratios, it is important to understand and document what is included in each number
as it may vary by company or use (e.g., some companies may exclude those non-renewed, or passed away, while
others may not). The product management and marketing departments monitor this ratio closely as it is an
indicator of rates for renewal business, or customer satisfaction following other changes in service.

Close Ratio (a.k.a. Hit Ratio, Quote-Close Ratio, Conversion Rate) measures the rate prospective insureds
accept new business quotes:

Close Ratio = Number of Accepted Quotes / Number of Quotes

Like the Retention Ratio and other ratios, there can be many different ways each number is measured (e.g., if a
prospective insured receives multiple quotes, one insurer may count that as one quote while another may
count it as multiple). Again, the product management and marketing departments monitor this ratio closely as
it is an indicator of the competitiveness of rates for new business.


Basic Ratemaking

Chapter 3 – Ratemaking Data


Introduction (note, Chapter 2 is not on the syllabus)

Quality data is extremely important to the ratemaking process—the quality of the final rates is largely
dependent on the quality and quantity of data available. This chapter covers types of data and aggregation
methods used in ratemaking.

Most ratemaking involves analyzing current rates—for this purpose we will generally use internal data.
Sometimes, such as when entering a new line of business, internal data may not be available and we will need
to use other internal data that is somewhat related, or use external data.

Internal data

The type of analysis being performed will determine our data needs. When performing an overall rate
indication, we only need aggregated data, while when performing a multivariate classification analysis, we need
individual risk/policy information.

There are generally two types of internal data involved in a ratemaking analysis; risk level information and
accounting information. Risk level information is generally available for exposures, premiums, claim counts,
losses, as well as policy and claim characteristics. Accounting information includes items such as UW expenses
and ULAE which are only available at the aggregate level (some UW expenses would be available at the risk
level, but are generally looked at in aggregate with the rest).

To perform classification ratemaking (ratemaking at the individual or segment level) we must be able to link
up premium and loss information by policy. Most companies keep loss and premium data in separate
databases: the claim database and policy database (respectively).

Policy database

The policy database contains data on the policy, such as insured characteristics and premium paid. The
database is defined by records (i.e., the rows for individual policies or some further division such as coverage
or individual on the policy) and fields (i.e., the columns containing information about each record).

Records (rows) will generally be further subdivided by any changes to the policy, such as an amendment. Fields
(columns) generally included in the policy database include a policy identifier (e.g., policy number), risk
identifier (e.g., vehicle or driver number for auto insurance), relevant dates (e.g., beginning, ending, amendment
dates), premium (either in total or by coverage), exposure (in total or by coverage), and policy characteristics
(e.g., rating variables, UW variables).

The following example illustrates transactions contained in a policy database for four policies:

Policy #1 is an annual policy written on 2/1/16 with a $2,000 premium. Three months later the policy is
canceled, resulting in a second entry with -0.75 exposure and -$1,500 premium (since 3/12 is earned, the -0.75
offsets the unearned portion, so the sum of the two entries gives the net amount).

Policy #2 is an annual policy written on 7/1/16 for $2,500. This policy stays in effect for the duration of the
term with no changes.

Policy #3 is an annual policy written 8/1/16 with a premium of $2,000. Halfway through the term the policy is
amended to change the deductible from $500 to $1,000. The second entry for the policy is the offset for the
unearned portion of the original entry. The third entry is the information for the remainder of the term at a
$1,000 deductible—both the exposure (0.5) and premium ($800) are for the partial term only—the full-term
premium with a $1,000 deductible is then $1,600, or 80% of the premium at a $500 deductible.


Policy #4 is an annual policy written 10/1/16 with a premium of $3,000. Nine months in, the policy is amended
as the insured moves from Territory 1 to Territory 3. Again, the second entry is the offset for the unearned
portion of the original entry. The third entry is the information for the remainder of the term written in
Territory 3. Again, both the exposure and premium in the 3rd entry are only for the partial term—the full-term
premium in Territory 3 is then $2,700, or 90% of the premium when written in Territory 1.

         Original      Original       Transaction
         Effective     Termination    Effective                               .....Other Rating.....     Written    Written
Policy   Date          Date           Date          Deductible   Territory    ....Characteristics....    Exposure   Premium
1        2/1/16        1/31/17        2/1/16        $500         4            .......................    1.00       $2,000
1        2/1/16        1/31/17        5/1/16        $500         4            .......................    (0.75)     ($1,500)
2        7/1/16        6/30/17        7/1/16        $1,000       1            .......................    1.00       $2,500
3        8/1/16        7/31/17        8/1/16        $500         2            .......................    1.00       $2,000
3        8/1/16        7/31/17        2/1/17        $500         2            .......................    (0.50)     ($1,000)
3        8/1/16        7/31/17        2/1/17        $1,000       2            .......................    0.50       $800
4        10/1/16       9/30/17        10/1/16       $500         1            .......................    1.00       $3,000
4        10/1/16       9/30/17        1/1/17        $500         1            .......................    (0.25)     ($750)
4        10/1/16       9/30/17        1/1/17        $500         3            .......................    0.25       $675

A more advanced data mart would aggregate the policies as follows, showing the exposure and premium net of
cancelations/amendments. Policy #1 is combined into a single record with 0.25 exposure. Policies #3 and #4
are combined into two records each—before and after the policy amendment.

         Original      Original       Last
         Effective     Termination    Transaction                              .....Other Rating.....     Net Written   Net Written
Policy   Date          Date           Date          Deductible   Territory     ....Characteristics....    Exposure      Premium
1        2/1/16        1/31/17        5/1/16        $500         4             .......................    0.25          $500
2        7/1/16        6/30/17        7/1/16        $1,000       1             .......................    1.00          $2,500
3        8/1/16        7/31/17        8/1/16        $500         2             .......................    0.50          $1,000
3        8/1/16        7/31/17        2/1/17        $1,000       2             .......................    0.50          $800
4        10/1/16       9/30/17        1/1/17        $500         1             .......................    0.75          $2,250
4        10/1/16       9/30/17        1/1/17        $500         3             .......................    0.25          $675
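
The netting step from the transaction view to the data mart view can be sketched as follows. This is a minimal
plain-Python illustration using Policy #1's two records; the field layout and names are assumptions made for
the example, not a prescribed schema.

```python
from collections import defaultdict

# Transaction-level records: (policy, effective, termination, transaction date,
# deductible, territory, written exposure, written premium) -- Policy #1 above.
transactions = [
    (1, "2/1/16", "1/31/17", "2/1/16", 500, 4,  1.00,  2_000),
    (1, "2/1/16", "1/31/17", "5/1/16", 500, 4, -0.75, -1_500),
]

# Net the offsetting entries by policy and rating characteristics,
# keeping the latest transaction date for each group.
netted = defaultdict(lambda: [None, 0.0, 0.0])   # key -> [last date, net expo, net prem]
for pol, eff, term, trans, ded, terr, expo, prem in transactions:
    key = (pol, eff, term, ded, terr)
    rec = netted[key]
    rec[0] = trans                # transactions are assumed to be listed in date order
    rec[1] += expo
    rec[2] += prem

for (pol, eff, term, ded, terr), (last, expo, prem) in netted.items():
    print(pol, eff, term, last, ded, terr, round(expo, 2), round(prem))
    # -> 1 2/1/16 1/31/17 5/1/16 500 4 0.25 500  (matches the netted record above)
```

Because the grouping key includes the rating characteristics, an amended policy (such as #3 or #4) naturally
produces one netted record before the amendment and one after, as in the data mart table above.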

Claim database

In a claims database there is generally a record (row) for each transaction, and fields (columns) containing
information about the claim. Common claim database fields include the policy identifier (e.g., policy number),
claim identifier (in case there are multiple claims on a given policy), claimant identifier (in case there are
multiple claimants), relevant dates (e.g., date of loss, date of notification, transaction date), claim status (e.g.,
open/closed), event identifier (to identify catastrophes), claim characteristics (e.g., injury type, physician,
prescription, lawyer information), paid losses, case reserves (loss and ALAE, or just loss if case reserves are not
set for ALAE, which is common), paid ALAE, and salvage/subrogation (received).

The sample below illustrates a claim database containing information on four claims occurring on three
policies:

Policy #10 incurs two claims—one on 1/1/16 and the other on 6/15/16. The 1/1/16 claim is reported to the
insurance company on 2/4/16 at which time they set a case reserve of $2,500. On 2/16/16 the insurance
company pays out $2,350 on the claim and $500 in ALAE and closes the claim. The 6/15/16 claim is reported
the next day at which time the company opens the claim and sets an initial case reserve of $5,000. On 6/29/16
the company makes a partial payment of $3,500, reduces the case reserve to $2,000, and pays ALAE of $500.
On 7/15/16 the claim is closed with a $3,000 loss payment, and the company pays an additional $500 in ALAE.
On 10/1/16 the claim is re-opened and a $10,000 case reserve is set. On 10/5/16 the claim is closed again with
a $9,500 loss payment and an additional $1,000 in paid ALAE. Over a month after the claim closed for the second
time the insurance company was able to recover $1,500 in Sub-Sal.


Policy #15 incurs a single claim on 8/5/16, which is reported to the company on 10/15/16 at which time they
open the claim and set a $25,000 case reserve. On 11/7/16 a partial loss payment of $14,000 and ALAE
payment of $1,500 are made, and the case reserve is decreased to $10,000. On 12/9/16 the claim is closed
with a $9,000 loss payment and $1,000 ALAE payment.

Policy #17 incurs a claim on 12/15/16, notifies the insurance company of the claim on the same day, at which
time they set up a case reserve of $650. On 12/22/16 the insurance company pays out $650 and closes the
claim.

Claim Accident Report Transaction Claim ….....Claim……. Loss Case Paid Salvage/
Policy Number Date Date Date Status .……Chars……. Payment Reserve ALAE Subrogation
10 1 1/1/16 2/4/16 2/4/16 Open ……………………. $0 $2,500 $0 $0
10 1 1/1/16 2/4/16 2/16/16 Closed ……………………. $2,350 $0 $500 $0
10 2 6/15/16 6/16/16 6/16/16 Open ……………………. $0 $5,000 $0 $0
10 2 6/15/16 6/16/16 6/29/16 Open ……………………. $3,500 $2,000 $500 $0
10 2 6/15/16 6/16/16 7/15/16 Closed ……………………. $3,000 $0 $500 $0
10 2 6/15/16 6/16/16 10/1/16 Open ……………………. $0 $10,000 $0 $0
10 2 6/15/16 6/16/16 10/5/16 Closed ……………………. $9,500 $0 $1,000 $0
10 2 6/15/16 6/16/16 11/9/16 Closed ……………………. $0 $0 $0 $1,500
15 1 8/5/16 10/15/16 10/15/16 Open ……………………. $0 $25,000 $0 $0
15 1 8/5/16 10/15/16 11/7/16 Open ……………………. $14,000 $10,000 $1,500 $0
15 1 8/5/16 10/15/16 12/9/16 Closed ……………………. $9,000 $0 $1,000 $0
17 1 12/15/16 12/15/16 12/15/16 Open ……………………. $0 $650 $0 $0
17 1 12/15/16 12/15/16 12/22/16 Closed ……………………. $650 $0 $0 $0

Note, in this example loss payments, ALAE payments, and salvage/subrogation are incremental values while
case reserves are the current balance as of the time of each transaction. A more advanced database may
condense the information by claim as follows.

Claim Accident Report Transaction Claim ….....Claim……. Loss Case Paid Salvage/
Policy Number Date Date Date Status .……Chars……. Payment Reserve ALAE Subrogation
10 1 1/1/16 2/4/16 2/16/16 Closed ……………………. $2,350 $0 $500 $0
10 2 6/15/16 6/16/16 11/9/16 Closed ……………………. $16,000 $0 $2,000 $1,500
15 1 8/5/16 10/15/16 12/9/16 Closed ……………………. $23,000 $0 $2,500 $0
17 1 12/15/16 12/15/16 12/22/16 Closed ……………………. $650 $0 $0 $0

Accounting data

Some data required for ratemaking cannot be allocated to individual policies or even lines of business (e.g., CEO
salary). This type of data includes underwriting expenses and unallocated loss adjustment expenses (ULAE).
Underwriting expenses include General expenses, Commissions and Brokerage, Other Acquisition, and Taxes,
Licenses, and Fees (as covered in Chapter 1). ULAE is loss adjustment expenses that cannot be allocated to
individual claims. Each is tracked at the aggregate level, and generally tracked by calendar year. The aggregate
costs may be allocated to line of business or state by approximation if needed.

Data aggregation

Calendar year (CY)

Calendar year data is transactional—data is counted by the year the transaction occurred. CY earned premium
(EP) and earned exposure include all premium and exposure earned during the period, regardless of when the
policy was issued or when it expires. As such, at the end of the CY all premium and exposures are fixed. CY claim
payments include all claims paid in a given year regardless of when the accident occurred, policy was written,
or claim was reported. Like CY premium and exposures, CY losses are also fixed at the end of the period (i.e.,
they do not develop).

The advantages of CY data are that it is readily available and is not subject to further development (since it is
based only on transactions during the year).


The disadvantage of CY data from a ratemaking perspective is that there is a mismatch in timing between CY losses
and CY premiums. CY losses can come from policies written in many different years—CY payments in long
tailed lines of business will come from policies written over several decades for example. CY premiums will
only come from policies effective during the period—as such CY premiums will only come from the current and
prior year (ignoring long term policies such as service contracts). This mismatch in timing can be rather large—
the extent of it depends how long tailed the losses are. CY aggregation could be used in ratemaking for lines of
business in which losses are reported and paid relatively quickly (e.g., short tailed lines of business such as
homeowners or auto physical damage).

Accident year (AY)

Accident year premium and exposure are the same as CY premium and exposure (the method is sometimes
called calendar-accident year). AY losses are grouped by the year of claim (accident) occurrence, regardless of
when the claim is reported, policy was written, or payments are made. Unlike CY losses, AY losses will change
(develop) after the year closes. AY premium and exposures will be locked at the end of the year (since they are
the same as CY). Accident year aggregation results in a much better (but not perfect) match between premium
and losses. This is the most common aggregation used in ratemaking.

Policy year (PY)

Policy year aggregation groups premiums, exposures, and losses by the year the policy was written, regardless
of when a claim occurs, is reported, or paid. Neither PY premium and exposure nor losses will be fixed at the
end of the year. Premium and exposure will continue to be earned into the following year since policies from
the year will still be active into the next year (e.g., PY 2016 will include policies written on 1/1/16 which will
expire by the end of the year, but also policies written 12/31/16 which won’t expire until 12 months after
PY2016 ends, assuming annual policies).

PY aggregation’s strength is it results in the best match between premiums and losses. PY aggregation’s
weakness is both premiums and losses take longer to develop.

Report year (RY)

Report year premium and exposure are again the same as CY premium and exposure. RY losses are grouped by
the year they are reported, regardless of when the accident occurred, claim is paid, or when the policy was
written. RY aggregation is commonly used in commercial lines of business written on a claims-made basis
(covered in Chapter 16).

External data

External data is needed when pricing a new line of business, and can also be helpful to supplement internal
data on existing lines of business (e.g., to use as a benchmark, or inform decisions where data may be less
credible). When using external data, we must consider its reasonableness, appropriateness,
comprehensiveness, and other factors (ASOP No. 23).

Statistical plans

U.S. P&C companies are often required to submit statistical data in a consistent format to statistical plans or
industry service organizations. Two very well-known industry service organizations are the National Council
on Compensation Insurance (NCCI) and the Insurance Services Office (ISO). Detailed data is submitted to these
organizations, who then analyze the data and make the results available to member companies.


Other insurance industry aggregated data

Examples include the U.S. personal auto “Fast Track Monitoring System” which is useful to analyze loss trends,
and the Highway Loss Data Institute (HLDI) which is useful to analyze loss information and rate of auto injury
by type of vehicle.

Competitor information

In many jurisdictions company rate filings are available to the public. From this, competitors can attempt to
glean information. The problems that arise are that companies often only file pages of the rate manual that are
changing, making piecing the whole manual together difficult, and companies often create underwriting tiers
to assign risks, but the underwriting rules for these assignments are often not required to be filed.

Even if we can gather complete information on a competitor’s rate filing we must be aware that each company
has a different set of insureds, goals, expenses, and operations which may cause the information to be irrelevant
for our uses.

Other third-party data

This is data not specific to insurance (but useful for ratemaking). Examples include economic and geo-
demographic data. The Consumer Price Index (CPI) (or components of CPI) may be useful in analyzing loss
cost trends (e.g., medical inflation). Geo-demographic data from the U.S. census may be useful in analyzing claim
frequencies (e.g., higher population density resulting in higher auto claim frequency). Other geo-demographic
data includes weather indices, theft indices, and average annual miles driven.

Additional examples include credit data for personal auto, distance to fire station for homeowners, type of soil
for earthquake, and OSHA inspection data for workers compensation.


Basic Ratemaking

Chapter 4 – Exposures
Introduction

Exposure is the basic unit measuring a policy's exposure to risk. In personal lines of business, the exposure
base used is usually rather simple, such as a vehicle year for personal auto or a house year for homeowners.
Exposure bases in commercial lines of business are often more refined, such as payroll for workers
compensation or square footage for general liability.

Criteria of a good exposure base

A good exposure base should meet these three criteria:

1) Directly proportional to expected loss.


2) Practical (objective and relatively easy to obtain and verify).
3) Consider existing exposure bases used within the industry (historical precedent).

Proportional to expected loss

The exposure base should be directly proportional to expected loss—all else equal we should expect twice the
loss for a policy with two exposures as compared to a policy with one exposure. Rates and expected policy
losses vary by many variables—what we use for the exposure base should be the factor with the most direct
relationship to expected losses.

The chosen exposure base should be responsive to changes in exposure to risk. For example, an auto policy
with two vehicles will be exposed to twice the loss of a policy with a single vehicle (all else equal), and this
difference is captured by the use of car years (one vehicle insured for one year) as the exposure base. Similarly,
an employer with $10M in payroll will be exposed to more workers compensation risk than one with $8M in
payroll (all else equal)—this difference is accounted for by using payroll as the exposure base in workers
compensation.

Practical

The exposure base should be objective (i.e., not subject to judgment) as well as reasonably easy to obtain and
verify. If the exposure base is objective it will not allow for the insured or other parties to manipulate it in their
favor through dishonesty. In addition, we don’t want an exposure base that will be expensive to obtain and
verify—the cost may not be worth the benefit.

Personal auto provides an interesting example of each. Miles driven is more directly proportional to loss in
personal auto than car years, however, historically miles driven would be susceptible to dishonesty by the
policy holder and would be expensive to verify by the insurance company (e.g., sending someone to verify
odometer readings of every vehicle insured). With advances in technology, using miles driven as the exposure
base in personal auto may become more practical in the future—some insurers are already doing so in
commercial auto.

Historical/Industry precedent

When choosing an exposure base we need to consider what has been used in the past and what is used by other
companies. Changing the exposure base currently used can lead to large premium swings, material changes to
our rating algorithms, and require significant changes to historical data for future analysis. Using a different
exposure base than the industry could potentially give a company a competitive advantage (e.g., miles driven
v. car years), but it would also result in a lot of industry data and benchmarks being no longer useful.

It is important to remember that risk characteristics are also captured in rating variables other than the
exposure base. As such, instead of changing exposure bases we could account for additional information


through a new rating variable (e.g., adding miles driven as a rating variable rather than changing the exposure
base).

Below are some common exposure bases used in common lines of business. Commercial multi-peril package
policies, such as commercial general liability, often use different exposure bases to price different aspects of
the policy. Additionally, it is common to price commercial policies based on expected exposures, then adjust
the premium after the period when actual exposures are known (e.g., we may not know exact payroll values
for the prospective policy period until it is complete).

Line of Business Typical Exposure Bases


Personal Automobile Earned Car Year
Homeowners Earned House Year
Workers Compensation Payroll (in hundreds)
Commercial General Liability Sales Revenue, Payroll, Square Footage, Number of Units
Commercial Business Property Amount of Insurance Coverage
Physician's Professional Liability Number of Physician Years
Professional Liability Number of Professionals (e.g., Lawyers or Accountants)
Personal Articles Floater Value of Item

Exposure aggregation

As we covered in Chapter 3, there are four common ways to aggregate data: calendar year (CY), accident year
(AY), policy year (PY), and report year (RY). When it comes to premium and exposure aggregation CY, AY, and
RY are all equivalent and are collectively referred to as calendar year (or calendar-accident year) data.

In addition to aggregating exposures by year (either CY or PY) we will look at four basic measures of exposure:
written, earned, unearned, and in-force.

Written exposure is the total exposure of policies written in a given timeframe (e.g., CY or FY). It is important
to note CY written exposures will be credited to the CY in which the transaction takes place—for example an
annual policy written on 7/1/16 will count as 1.00 CY16 written exposure, but if the same policy cancels
1/1/17 it will contribute -0.50 written exposures to CY17 (CY16 written exposure will be unchanged at 1.00).
The preceding example on a PY basis will contribute 0.50 written exposures to PY16 (1.00 initially, and -0.50
once canceled) since all transactions are credited to the year the policy was written.

Earned exposure is the total amount of written exposure for which coverage has been provided as of a given
time. CY earned exposure consists of all exposure earned during the year, regardless of when the policy was
written (e.g., CY16 earned exposure will be made up of policies written in CY15 and CY16). Once a PY is
complete, the PY earned exposure will be equal to the PY written exposure since any cancelations or
amendments will be credited to the PY, affecting both written and the ending earned exposure.

Unearned exposure is the unearned portion of written exposures as of a given time. When applied to a single
policy, unearned exposure is simply written exposure minus earned exposure—this also works for groups of
policies on a PY basis. For groups of policies on a CY basis it is trickier since the policies at the beginning of the
CY are different than at the end (the individual policy, and group of PY policies contain the same policy(s) at
the beginning and ending):

CY Unearned Exposure = Unearned Exposure @ Beginning of CY + [CY Written Exposure − CY Earned Exposure]
                     = CY Beginning Unearned Expo. + CY Change in Unearned Expo.
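
For example (hypothetical figures): if a block of policies has 0.40 unearned exposure at the beginning of the CY,
writes 1.00 exposure during the CY, and earns 0.90 during the CY, the unearned exposure at the end of the CY is
0.40 + (1.00 − 0.90) = 0.50.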


In-force exposure is the amount of exposure on active policies at a given time—this is a snapshot view of the
amount of exposure exposed to loss at a given time. When counting in-force exposures, we count “insured
units”—which may be defined differently by company. A personal auto policy with three insured vehicles may
be counted as one insured unit (the policy) at one company while another company may count that as three
(each vehicle).

To graphically illustrate these concepts I will now introduce a graphic that will be used extensively throughout
the text. In the picture below the x-axis is the date, while the y-axis is the percentage of the policy term expired
(or equivalently, earned). With this outline we can examine individual policies, or blocks of policies. In this
example we are going to look at five individual policies, represented by each diagonal line.
[Graph: policy terms for the five example policies; the x-axis runs from 1/1/16 to 1/1/18, the y-axis shows the % of policy term expired (0% to 100%), and each policy is a diagonal line from 0% at its effective date to 100% at expiration.]


The first policy begins on 1/1/16 (blue line) when it is written—as time progresses the percent expired
increases linearly until it reaches 100% on the expiration date on 12/31/2016. Since each policy takes 12
months to reach 100% earned we can infer each is an annual policy (we will see policies of different lengths
later). The entire exposure is earned in CY 2016.

The second policy is written on 4/1/16 (red line) and expires 12 months later on 3/31/17—as such 9/12ths
of the exposure is earned in CY16 while 3/12ths is earned in CY17.

The third policy is written on 7/1/16 (green line) and expires 12 months later on 6/30/17—as such 6/12ths
of the exposure is earned in CY16 while 6/12ths is earned in CY17.

The fourth policy is written on 10/1/16 (purple line) and expires 12 months later on 9/30/17—as such
3/12ths of the exposure is earned in CY16 while 9/12ths is earned in CY17.

The fifth policy is written on 1/1/17 (orange line) and expires 12 months later on 12/31/17—as such it is
fully earned in CY17.

The following table shows the written and earned exposures after the completion of each policy (remember,
PY written exposure (not shown) equals PY earned exposure).


         Effective    Expiration                 CY 2016        CY 2017        CY 2016       CY 2017       PY 2016       PY 2017
Policy   Date         Date          Exposure     Written Exp.   Written Exp.   Earned Exp.   Earned Exp.   Earned Exp.   Earned Exp.
1 01/01/16 12/31/16 1.00 1.00 0.00 1.00 0.00 1.00 0.00
2 04/01/16 03/31/17 1.00 1.00 0.00 0.75 0.25 1.00 0.00
3 07/01/16 06/30/17 1.00 1.00 0.00 0.50 0.50 1.00 0.00
4 10/01/16 09/30/17 1.00 1.00 0.00 0.25 0.75 1.00 0.00
5 01/01/17 12/31/17 1.00 0.00 1.00 0.00 1.00 0.00 1.00
Total 5.00 4.00 1.00 2.50 2.50 4.00 1.00
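
The CY and PY splits above follow from straight pro-rata earning. Below is a small Python sketch of that logic for
a single policy effective on the first of a month; the function name and its simplifying assumptions
(first-of-month effective dates, uniform earning) are mine, not the text's.

```python
def cy_split(effective_month, term_months=12):
    """CY written/earned exposure (in years of coverage) for one policy effective on the
    first of the given month, assuming uniform earning. Returns
    (written exposure, earned in the effective CY, earned in the following CY)."""
    written = term_months / 12                         # 1.00 for annual, 0.50 for semi-annual
    months_first_cy = min(term_months, 12 - effective_month + 1)
    earned_first = written * months_first_cy / term_months
    return written, earned_first, written - earned_first

# Annual policy written 10/1/16 (policy #4 in the table above):
print(cy_split(10))        # -> (1.0, 0.25, 0.75)
# Semi-annual policy written 10/1/16 (policy #4 in the six-month example later):
print(cy_split(10, 6))     # -> (0.5, 0.25, 0.25)
```

Once a policy has expired, its PY earned exposure equals its PY written exposure (the first value returned),
consistent with the note above the table.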

Each of these examples assumes exposures are earned evenly, implying constant probability of loss throughout
the policy, but even earning patterns do not hold for lines of business such as warranty and those affected by
seasonality of losses (e.g., boat owners). For such lines of business, the actuary must specify more appropriate
earning patterns based on historical information.

In-force exposure is a snapshot view of the amount of exposure exposed to loss at a given time. Unearned
exposure is the unearned portion of written exposures as of a given time. Below we will look at the written,
earned, unearned, and in-force exposure at three dates; 6/1/16, 3/1/17, and 10/1/17 (in the prior example
we looked at written and earned exposure by PY and CY after all policies had expired). Graphically, we can
imagine each snapshot in time as a vertical line at that date (red lines below).

[Graph: the same policy-term picture as above, with vertical red lines marking the snapshot dates 6/1/16, 3/1/17, and 10/1/17.]

As of 6/1/16 only the first two policies are in-force. Through 6/1/16 policy #1 has been in effect for five
months, so it is 5/12ths earned (all within 2016 on a CY and PY basis) and 7/12ths unearned. Similarly, policy #2
has been in effect for two months, so it is 2/12ths earned (again, all within 2016 on a CY and PY basis) and
10/12ths unearned.

As of 6/1/2016
Unearned In-Force CY 2016 CY 2017 CY 2016 CY 2017 PY 2016 PY 2017
Policy Exposure Exposure Written Exp. Written Exp. Earned Exp. Earned Exp. Earned Exp. Earned Exp.
1 0.583 1.000 1.000 0.000 0.417 0.000 0.417 0.000
2 0.833 1.000 1.000 0.000 0.167 0.000 0.167 0.000
3 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
4 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
5 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
Total 4.417 2.000 2.000 0.000 0.583 0.000 0.583 0.000


As of 3/1/17 the first policy has expired, but the remaining four are in-force. Policy #1 is fully earned, all within
2016 on a CY and PY basis. Policy #2 is 11/12ths earned—9/12ths in CY16 and 2/12ths in CY17 (all 11/12ths count
toward PY16). Policy #3 is 8/12ths earned—6/12ths in CY16 and 2/12ths in CY17 (all 8/12ths count toward PY16).
Policy #4 is 5/12ths earned—3/12ths in CY16 and 2/12ths in CY17 (all 5/12ths count towards PY16). Policy #5 is
2/12ths earned, all within 2017 on a CY and PY basis.

As of 3/1/2017
Unearned In-Force CY 2016 CY 2017 CY 2016 CY 2017 PY 2016 PY 2017
Policy Exposure Exposure Written Exp. Written Exp. Earned Exp. Earned Exp. Earned Exp. Earned Exp.
1 0.000 0.000 1.000 0.000 1.000 0.000 1.000 0.000
2 0.083 1.000 1.000 0.000 0.750 0.167 0.917 0.000
3 0.333 1.000 1.000 0.000 0.500 0.167 0.667 0.000
4 0.583 1.000 1.000 0.000 0.250 0.167 0.417 0.000
5 0.833 1.000 0.000 1.000 0.000 0.167 0.000 0.167
Total 1.833 4.000 4.000 1.000 2.500 0.667 3.000 0.167

As of 10/1/17 the first four policies have expired and are fully earned (policy #4 expiring the day before on
9/30/17). Policy #5 is in-force and 9/12ths earned, all within 2017 on a CY and PY basis.

As of 10/1/2017
Unearned In-Force CY 2016 CY 2017 CY 2016 CY 2017 PY 2016 PY 2017
Policy Exposure Exposure Written Exp. Written Exp. Earned Exp. Earned Exp. Earned Exp. Earned Exp.
1 0.000 0.000 1.000 0.000 1.000 0.000 1.000 0.000
2 0.000 0.000 1.000 0.000 0.750 0.250 1.000 0.000
3 0.000 0.000 1.000 0.000 0.500 0.500 1.000 0.000
4 0.000 0.000 1.000 0.000 0.250 0.750 1.000 0.000
5 0.250 1.000 0.000 1.000 0.000 0.750 0.000 0.750
Total 0.250 1.000 4.000 1.000 2.500 2.250 4.000 0.750

As mentioned earlier, when a policy cancels, on a CY basis, the exposure change counts toward the CY in which
the cancelation occurs. For example, if policy #3 cancels on 3/1/17(written on 7/1/16) it would count as one
written exposure in CY16 and -4/12ths written exposure in CY17 (the unearned portion at the time of
cancelation), for a total written exposure of 8/12ths. On a PY basis, each transaction counts towards the PY in
which the policy is written, so in this example policy #3 will count as one written exposure in PY16 up until
3/1/17 (the cancelation date), at which time the PY16 written exposure will be reduced by 4/12ths to 8/12ths.

Policy terms other than annual

Many insurance policies are not annual terms—personal auto policies for example are often written at six-
month terms. Revisiting the example above, assuming semi-annual policies, our graph would look like the
following:


[Graph: same format as above but with six-month policy terms; each diagonal line reaches 100% of policy term expired six months after its effective date.]

Each policy reaches 100% earned in six months rather than twelve (i.e., the line slope is twice that of the prior
example with annual policies). Like annual policies, we find the percent earned as the number of months
completed divided by the length of the policy. Assuming a full exposure is defined as one car year, a full six-month
policy would only count as half an exposure (this is common, and how the text example is presented). Defining
exposure as one full year of coverage, and assuming semi-annual policies as depicted in the graph above, our
written and earned exposures on a CY and PY basis after all policies expire are shown below.
Effective Expiration CY 2016 CY 2017 CY 2016 CY 2017 PY 2016 PY 2017
Policy Date Date Exposure Written. Exp. Written. Exp. Earned. Exp. Earned. Exp. Earned. Exp. Earned. Exp.
1 01/01/16 06/30/16 0.50 0.50 0.00 0.50 0.00 0.50 0.00
2 04/01/16 09/30/16 0.50 0.50 0.00 0.50 0.00 0.50 0.00
3 07/01/16 12/31/16 0.50 0.50 0.00 0.50 0.00 0.50 0.00
4 10/01/16 03/31/17 0.50 0.50 0.00 0.25 0.25 0.50 0.00
5 01/01/17 06/30/17 0.50 0.00 0.50 0.00 0.50 0.00 0.50
Total 2.50 2.00 0.50 1.75 0.75 2.00 0.50

When counting in-force exposures if an “insured unit” is defined as the number of policies insured, we would
have two in-force exposures as of 2/1/17 (policies #4 and #5). If insured unit is instead defined the same as
written exposures we would have one in-force exposure as of 2/1/17 (policies #4 and #5 again, counting as ½
each). Look for the definition in the exam problem, or state your assumption if the question is not explicit.

Calculation for blocks of exposure (“15th of the month” rule)

In all of the examples above we were looking at individual policies. Though computing power now makes it
possible to do the calculations above on large datasets, some companies use exposure data summarized into
blocks (e.g., monthly or quarterly). In such a scenario it is common to assume all policies are written in the
middle of the period (i.e., on the “15th of the month”). This assumption is reasonable if policies are written
uniformly during the period—the longer the period (e.g., yearly) the less likely the assumption will be
reasonable.

As an example, assume we write 400 exposures in each month of 2016, shown in the following table. Since in
this scenario we do not know the effective dates of individual policies we assume each in the block (month
here) is written at the mid-point (15th of month). Assuming each policy is annual, as of 5/1/16 we have 1,600
in-force exposures—those written from January to April— the May policies have not been written when using
the assumption that each is written on 5/15/16. Similarly, as of 1/1/17 all 4,800 exposures are in-force—the
January 2016 exposures do not expire until 1/14/17 since they are annual and assumed to be written on
1/15/16. As of 8/1/17 there are 2,000 exposures in-force.


Assumed
Written Effective In-Force In-Force In-Force
Month Exposure Date @ 5/1/16 @ 1/1/17 @ 8/1/17
Jan-2016 400 1/15/16 400 400 0
Feb-2016 400 2/15/16 400 400 0
Mar-2016 400 3/15/16 400 400 0
Apr-2016 400 4/15/16 400 400 0
May-2016 400 5/15/16 0 400 0
Jun-2016 400 6/15/16 0 400 0
Jul-2016 400 7/15/16 0 400 0
Aug-2016 400 8/15/16 0 400 400
Sep-2016 400 9/15/16 0 400 400
Oct-2016 400 10/15/16 0 400 400
Nov-2016 400 11/15/16 0 400 400
Dec-2016 400 12/15/16 0 400 400
Total 4,800 1,600 4,800 2,000

Since we assume each policy is written mid-month, ½ of a month is earned in the second half of the first
month, one month is earned in each of the next eleven months, and ½ of a month is earned in the first half of
the 13th month (assuming annual policies). As such, 1/24th is earned in the first month and 1/24th in the 13th
month (and 1/12th in the 11 months in between)—this is called the “24ths method”.

Looking at the January 2016 written exposures, 1/24th will be earned in January 2016, 1/12 will be earned in
each of February to December 2016, and 1/24th will be earned in January 2017. As such, 23/24ths of the
policies written in January 2016 will be earned in CY16 and 1/24th will be earned in CY17. Similar logic is
applied to each month to calculate the CY16 and CY17 earned exposures (of those coming from policies
written in 2016—full CY16 earned exposure will also include policies written in 2015 while full CY17 earned
exposure will include policies also written in 2017).

Assumed
Written Effective Earning Percentage Earned Exposure
Month Exposure Date CY16 CY17 CY16 CY17
Jan-2016 400 1/15/16 23/24 1/24 383.33 16.67
Feb-2016 400 2/15/16 21/24 3/24 350.00 50.00
Mar-2016 400 3/15/16 19/24 5/24 316.67 83.33
Apr-2016 400 4/15/16 17/24 7/24 283.33 116.67
May-2016 400 5/15/16 15/24 9/24 250.00 150.00
Jun-2016 400 6/15/16 13/24 11/24 216.67 183.33
Jul-2016 400 7/15/16 11/24 13/24 183.33 216.67
Aug-2016 400 8/15/16 9/24 15/24 150.00 250.00
Sep-2016 400 9/15/16 7/24 17/24 116.67 283.33
Oct-2016 400 10/15/16 5/24 19/24 83.33 316.67
Nov-2016 400 11/15/16 3/24 21/24 50.00 350.00
Dec-2016 400 12/15/16 1/24 23/24 16.67 383.33
Total 4,800 2,400.00 2,400.00
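
A short Python sketch of the 24ths calculation above, using the 400-exposures-per-month example; the function
name is illustrative, and the formula assumes annual policies written mid-month.

```python
# "24ths method": a policy written in month m of the year is assumed effective on the
# 15th, so an annual policy earns (2*(12 - m) + 1)/24 of its exposure in the writing CY
# and the remainder in the following CY.

def earning_fractions(written_month):
    first_cy = (2 * (12 - written_month) + 1) / 24
    return first_cy, 1 - first_cy

monthly_written = {m: 400 for m in range(1, 13)}   # 400 exposures written each month of 2016

cy16_earned = sum(expo * earning_fractions(m)[0] for m, expo in monthly_written.items())
cy17_earned = sum(expo * earning_fractions(m)[1] for m, expo in monthly_written.items())
# cy16_earned == cy17_earned == 2,400.00, matching the table above
```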

Exposure trend

As touched on briefly in Chapter 1, and as will be covered in great detail in the following chapters, in ratemaking
we use historical data adjusted to expected future levels to predict costs. In some lines of business exposures
are inflation sensitive (e.g., payroll in workers comp, or sales in general liability). Since we will be adjusting
losses and premiums to future levels, we also want to adjust exposures that are inflation sensitive to future


levels as well—for example, we don’t want to be comparing loss and premium information at a 2017 level to
exposure data at a 2012 level.

Non-inflation sensitive exposure bases (e.g., car year, house year, number of units) do not need to be trended.
We can use internal and/or external data to select the trend rate used. The mechanics of exposure trending are
not covered in the chapter, but are very similar to the trending mechanics used in the next chapter on premium.


Basic Ratemaking

Chapter 5 - Premium
Introduction

As introduced in Chapter 1, the goal of ratemaking is to balance the fundamental insurance equation during the
prospective period in which the rates will be in effect.

Premium = Losses + LAE + UW Expenses + UW Profit


(Here premium is the price; losses, LAE, and UW expenses are the cost; UW profit is the profit.)

The ratemaking process requires historical data for each of the above components to be restated at the
expected level of the time period the rates will be in effect. This chapter covers the premium component.

We must first adjust premium from historic levels to current levels to account for rate changes between the
historic period and current period. Next, we must develop premium to its ultimate level and trend it to the
prospective period. This results in an estimate of premium in the prospective period at current rate levels—
our indicated rate change will then be the adjustment to this premium needed to balance the equation.

Premium aggregation

This section is nearly identical to the exposure aggregation section in the prior chapter—the main difference
being now we use premium dollars rather than exposures. With exposures our earned, unearned, written, and
in-force values ranged from 0.0 to 1.0—in this chapter our earned, unearned, written, and in-force premiums
will range from 0.0 to the premium amount.

When it comes to premium and exposure aggregation CY, AY, and RY are all equivalent and are collectively
referred to as calendar year (or calendar-accident year) data1.

In addition to aggregating premium by year (either CY or PY) we will look at four basic measures of premium:
written, earned, unearned, and in-force.

Written premium is the total premium of policies written in a given timeframe (e.g., CY or PY). It is important
to note CY written premiums will be credited to the CY in which the transaction takes place—for example an
annual policy written on 7/1/16 with $500 premium will count as $500 CY16 written premium, but if the same
policy cancels 1/1/17 it will contribute -$250 written premium to CY17 (CY16 written premium will be
unchanged at $500). The preceding example on a PY basis will contribute $250 written premium to PY16 ($500
initially, and -$250 once canceled) since all transactions are credited to the year the policy was written.

Earned premium is the total amount of written premium for which coverage has been provided as of a given
time. CY earned premium consists of all premium earned during the year, regardless of when the policy was
written (e.g., CY16 earned premium will be made up of policies written in CY15 and CY16). Once a PY is
complete, the PY earned premium will be equal to the PY written premium since any cancelations or
amendments will be credited to the PY, affecting both the written and the ending earned premium.

Unearned premium is the unearned portion of written premium as of a given time. When applied to a single
policy, unearned premium is simply written premium minus earned premium—this also works for groups of
policies on a PY basis. For groups of policies on a CY basis it is trickier since the policies at the beginning of the

1 The text references a situation where premium audits can make AY and CY premiums unequal, but it does not
provide enough detail for it to be testable in my opinion. In addition, pure AY premium only includes premium on
policies with accidents during a certain year, which is a useless measure. For all intents and purposes AY premium
simply means CY premium, and is called CY premium.


CY are different than at the end (a single policy, or a group of PY policies, contains the same policies at the
beginning and the end):

CY Unearned Prem. = Unearned Prem. @ Beginning of CY + [CY Written Prem. - CY Earned Prem.]
                  = CY Beginning Unearned Prem. + CY Change in Unearned Prem.

In-force premium is the amount of premium on active policies at a given time—this is a snapshot view of the
amount of premium exposed to loss at a given time. When counting in-force premium, we count the premium
in effect at the given time. For example, if a policy is written for $500 but is amended half way through the
policy resulting in a $600 premium, the in-force premium would be $500 prior to the amendment and $600
after, not the total premium of $550.

The graphical representation of premiums is nearly identical to exposures. The line representing each policy
begins on the bottom at 0% earned on the date it is written and grows to 100% earned at the expiration date,
with the slope of the line depending on the term of the policy. I will quickly repeat some examples from the
prior chapter but use premium rather than exposure. Additionally, I will shade each CY or PY to differentiate
the time periods more clearly.

The first policy begins on 1/1/16 (blue line) when it is written—as time progresses the percent expired
increases linearly until it reaches 100% on the expiration date on 12/31/2016. Since each policy takes 12
months to reach 100% earned we can infer each is an annual policy (we will see policies of different lengths
later). All of the premium is earned in CY 2016.

The second policy is written on 4/1/16 (red line) and expires 12 months later on 3/31/17—as such 9/12ths
of the premium is earned in CY16 while 3/12ths is earned in CY17.

The third policy is written on 7/1/16 (green line) and expires 12 months later on 6/30/17—as such 6/12ths
of the premium is earned in CY16 while 6/12ths is earned in CY17.

The fourth policy is written on 10/1/16 (purple line) and expires 12 months later on 9/30/17—as such
3/12ths of the premium is earned in CY16 while 9/12ths is earned in CY17.

The fifth policy is written on 1/1/17 (orange line) and expires 12 months later on 12/31/17—as such it is
fully earned in CY17.


The table below shows the written and earned premium after the completion of each policy (remember, PY
written premium (not shown) equals PY earned premium).
        Effective    Expiration               CY 2016         CY 2017         CY 2016        CY 2017        PY 2016        PY 2017
Policy  Date         Date         Premium     Written Prem.   Written Prem.   Earned Prem.   Earned Prem.   Earned Prem.   Earned Prem.
1 01/01/16 12/31/16 $500 $500 $0 $500 $0 $500 $0
2 04/01/16 03/31/17 $700 $700 $0 $525 $175 $700 $0
3 07/01/16 06/30/17 $450 $450 $0 $225 $225 $450 $0
4 10/01/16 09/30/17 $900 $900 $0 $225 $675 $900 $0
5 01/01/17 12/31/17 $850 $0 $850 $0 $850 $0 $850
Total $3,400 $2,550 $850 $1,475 $1,925 $2,550 $850

As in the prior chapter, each of these examples assumes premiums are earned evenly, implying a constant
probability of loss throughout the policy term. Even earning patterns do not hold for lines of business such as
warranty and those affected by seasonality of losses (e.g., boat owners). For such lines of business, the actuary
must specify more appropriate earning patterns based on historical information.
In-force premium is a snapshot view of the amount of premium exposed to loss at a given time. Unearned
premium is the unearned portion of written premium as of a given time. Below we will look at the written,
earned, unearned, and in-force premium at three dates; 6/1/16, 3/1/17, and 10/1/17 (in the prior example
we looked at written and earned premium by PY and CY after all policies had expired). Graphically, we can
imagine each snapshot in time as a vertical line at that date (red lines below).

As of 6/1/16 only the first two policies are in-force. Through 6/1/16 policy #1 has been in effect for five
months, so it is 5/12ths earned (all within 2016 on a CY and PY basis) and 7/12ths unearned. Similarly, policy #2
has been in effect for two months, so it is 2/12ths earned (again, all within 2016 on a CY and PY basis) and
10/12ths unearned.

As of 6/1/2016
Unearned In-Force CY 2016 CY 2017 CY 2016 CY 2017 PY 2016 PY 2017
Policy Premium Premium Premium Written Prem. Written Prem. Earned Prem. Earned Prem. Earned Prem. Earned Prem.
1 $500.00 $291.67 $500.00 $500.00 $0.00 $208.33 $0.00 $208.33 $0.00
2 $700.00 $583.33 $700.00 $700.00 $0.00 $116.67 $0.00 $116.67 $0.00
3 $450.00 $450.00 $0.00 $0.00 $0.00 $0.00 $0.00 $0.00 $0.00
4 $900.00 $900.00 $0.00 $0.00 $0.00 $0.00 $0.00 $0.00 $0.00
5 $850.00 $850.00 $0.00 $0.00 $0.00 $0.00 $0.00 $0.00 $0.00
Total $3,400.00 $3,075.00 $1,200.00 $1,200.00 $0.00 $325.00 $0.00 $325.00 $0.00


As of 3/1/17 the first policy has expired, but the remaining four are in-force. Policy #1 is fully earned, all within
2016 on a CY and PY basis. Policy #2 is 11/12ths earned—9/12ths in CY16 and 2/12ths in CY17 (all 11/12ths count
toward PY16). Policy #3 is 8/12ths earned—6/12ths in CY16 and 2/12ths in CY17 (all 8/12ths count toward PY16).
Policy #4 is 5/12ths earned—3/12ths in CY16 and 2/12ths in CY17 (all 5/12ths count towards PY16). Policy #5 is
2/12ths earned, all within 2017 on a CY and PY basis.

As of 3/1/2017
Unearned In-Force CY 2016 CY 2017 CY 2016 CY 2017 PY 2016 PY 2017
Policy Premium Premium Premium Written Prem. Written Prem. Earned Prem. Earned Prem. Earned Prem. Earned Prem.
1 $500.00 $0.00 $0.00 $500.00 $0.00 $500.00 $0.00 $500.00 $0.00
2 $700.00 $58.33 $700.00 $700.00 $0.00 $525.00 $116.67 $641.67 $0.00
3 $450.00 $150.00 $450.00 $450.00 $0.00 $225.00 $75.00 $300.00 $0.00
4 $900.00 $525.00 $900.00 $900.00 $0.00 $225.00 $150.00 $375.00 $0.00
5 $850.00 $708.33 $850.00 $0.00 $850.00 $0.00 $141.67 $0.00 $141.67
Total $3,400.00 $1,441.67 $2,900.00 $2,550.00 $850.00 $1,475.00 $483.33 $1,816.67 $141.67

As of 10/1/17 the first four policies have expired and are fully earned (policy #4 expiring the day before on
9/30/17). Policy #5 is in-force and 9/12ths earned, all within 2017 on a CY and PY basis.

As of 10/1/2017
Unearned In-Force CY 2016 CY 2017 CY 2016 CY 2017 PY 2016 PY 2017
Policy Premium Premium Premium Written Prem. Written Prem. Earned Prem. Earned Prem. Earned Prem. Earned Prem.
1 $500.00 $0.00 $0.00 $500.00 $0.00 $500.00 $0.00 $500.00 $0.00
2 $700.00 $0.00 $0.00 $700.00 $0.00 $525.00 $175.00 $700.00 $0.00
3 $450.00 $0.00 $0.00 $450.00 $0.00 $225.00 $225.00 $450.00 $0.00
4 $900.00 $0.00 $0.00 $900.00 $0.00 $225.00 $675.00 $900.00 $0.00
5 $850.00 $212.50 $850.00 $0.00 $850.00 $0.00 $637.50 $0.00 $637.50
Total $3,400.00 $212.50 $850.00 $2,550.00 $850.00 $1,475.00 $1,712.50 $2,550.00 $637.50
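
The earned, unearned, and in-force amounts in these tables are simple pro-rata calculations. Below is a minimal sketch (the function name is mine) that reproduces policy #5's values at the 10/1/17 snapshot; note that exact day counts can differ slightly from the month-fraction arithmetic used in the text for other dates.

```python
from datetime import date

def premium_status(premium, effective, expiration, valuation):
    """Return (earned, unearned, in-force) premium at a valuation date, pro-rata."""
    term_days = (expiration - effective).days
    earned_frac = 0.0 if valuation < effective else min((valuation - effective).days / term_days, 1.0)
    earned = premium * earned_frac
    in_force = premium if effective <= valuation < expiration else 0
    return earned, premium - earned, in_force

# policy #5 above: $850 written 1/1/17, expiring 12/31/17, valued 10/1/17
print(premium_status(850, date(2017, 1, 1), date(2017, 12, 31), date(2017, 10, 1)))
# (637.5, 212.5, 850) -- 9/12ths earned, matching the table
```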

As with exposure, when a policy cancels, on a CY basis, the premium change counts toward the CY in which the
cancelation occurs. For example, if policy #3 cancels on 3/1/17 (written on 7/1/16) it would count as $450
written premium in CY16 and -$150 written premium in CY17 (-4/12*$450; the unearned portion at the time
of cancelation), for a total written premium of $300. On a PY basis, each transaction counts towards the PY in
which the policy is written, so in this example policy #3 will count as $450 written premium in PY16 up until
3/1/17 (the cancelation date), at which time the PY16 written premium will be reduced by $150 to $300.

Premium aggregation for policies with non-annual terms is handled the same way as the exposure examples covered in
Chapter 4, just using premium dollars rather than exposures. I will not repeat those examples here. One thing to
keep in mind with non-annual policies is
the effect on in-force premium—if two companies write the same amount of insurance but one company writes
semi-annual policies while another writes annual, the in-force premium will be twice as much for the company
that writes annual policies (since the in-force premium of an annual policy is twice that of the same policy on a
6-month basis).

Adjustments to premium

The prior sections covered premium aggregation—we will now cover the adjustments that need to be made to
the aggregated premium to restate the historical premium to the level expected for the time period the rates
will be in effect.

We must first adjust historic premium to current rate levels, trend them to the levels expected in the
prospective period, and in some cases develop premium to ultimate levels (most common when using an
incomplete policy year period). The result will be our estimate of premium in the prospective period at our
current rate levels—our indicated rate change will then be the adjustment to these premiums needed to
balance the fundamental insurance equation.


Premium at current rate level

The first adjustment we make to historical premiums is to restate them at the current rate level. If rates have
increased over time and we do not account for these increases, our indicated rate change will be overstated
since it does not account for the rate increases already taken. There are two standard methods for adjusting
historical rates to current rate levels: extension of exposures and the parallelogram method.

The extension of exposures method (a.k.a. re-rating) re-rates each historic individual policy with the rates
currently in effect, resulting in the historic premiums being restated at the current rate level. This is the method
most commonly used in practice, and is the most accurate. This method requires each current rating
characteristic in the current rating structure to be available for each historic policy. If there have been changes
to the rating structure, especially the addition of rating variables, this information may not be available. In such
a case we must approximate the effect of the unknown variables, or use another method such as the
parallelogram method.

The parallelogram method is applied to groups of policies (e.g., CY or PY) and is less accurate than the extension
of exposures method. The method applies an average factor to each time period to adjust historic premiums to
current levels. Assume we have the following CY earned premiums and historic rate changes (policies are
annual).

Calendar   Earned           Effective   Rate
Year       Premium          Date        Change
2015 478,718 1/1/2015 2.50%
2016 534,021 7/1/2015 5.00%
2017 559,977 3/1/2016 3.00%
9/1/2016 -6.00%
3/1/2017 4.00%

We can visually represent this information in the graph below. Each CY is represented by a grey box, and
each rate change by a blue line. Area 1 is the base level—prior to the first rate change evaluated. Area 2,
between the first and second blue lines, is the time period during which the first rate change is in effect (first policy
written 1/1/15, last policy expiring 6/30/16). The rate level represented by area 2 affects premium earned in
calendar years 2015 and 2016.

The effect of each subsequent rate change is cumulative (e.g., a 5.0% increase followed by a 10% increase
results in a cumulative 1.05*1.10-1.0 = 15.5% increase). In the table below we calculate the cumulative rate
factor at each rate level (starting with the initial level with base factor 1.0).


Cumulative
Rate Level Effective Overall Rate
Group Date Rate Change Factor
1 Initial --- 1.0000
2 1/1/2015 2.50% 1.0250
3 7/1/2015 5.00% 1.0763
4 3/1/2016 3.00% 1.1085
5 9/1/2016 -6.00% 1.0420
6 3/1/2017 4.00% 1.0837

Representing the data graphically and calculating the cumulative rate level factors for each area, as done above,
are the first two steps in the parallelogram method. Next, we must calculate the average rate level factor in
effect for each data period.

We start this by finding the area of each rate level, within each data period. Assuming the area of each CY is
1.00, the area in which rate level 1 is in effect within CY15 is 0.50 as the area of a triangle is height x width x 0.5.
Calculating the area of rate level 2 within CY15 directly is inefficient on the exam, so we will instead calculate
the area of rate level 3 within CY15 and use subtraction to find area 2. Rate level 3 goes into effect 7/1/15, so
the triangle has a width of 0.50 (6/12), and since it is an annual policy (slope = 1.0) the height is also 0.50, so
the area is 0.5 x 0.5 x 0.5 = 0.125. Since the total area of CY15 is 1.0, the area of rate level 2 is 1.0 – 0.5 – 0.125 = 0.375.

Now that we know the percentage each rate level contributed to the CY15 earned premium (the areas), we can
calculate the CY15 average rate level factor as the sum of each rate level's area times its corresponding rate level
factor (i.e., an area-weighted average). In the following table, we find the CY15 average rate level factor is 1.0189.
The current rate level is 1.0837 (the rate level after the current rates went into effect). As such, we want to increase
the average rate level in CY15 to this current level, which we can accomplish by multiplying CY15 earned premium
by the target (current) rate level factor and dividing by the average rate level factor of the period. This ratio is called
the on-level factor and equals 1.0636 in this example.

On-Level Factor = Current Cumulative Rate Level Factor / Average Rate Level Factor of the Period


Cumulative
Rate Level Overall Rate Area in
Group Rate Change Factor CY 2015
1 --- 1.0000 0.5000
2 2.50% 1.0250 0.3750
3 5.00% 1.0763 0.1250
4 3.00% 1.1085 -
5 -6.00% 1.0420 -
6 4.00% 1.0837 -
Sum: 1.0000

Wght. Avg. Rate Factor: 1.0189 (a)


Current Rate Factor: 1.0837 (b)
On-Level Factor: 1.0636 (c) = (b) / (a)

We follow the same process to find the on-level rate factors for CY16 and CY17, shown below.

Once we have found the percentage each rate level contributed to each CY's earned premium (the areas), we
find the average rate level factor for each CY as the sum of each rate level's area times its corresponding rate level
factor. The on-level factor for each CY is the current rate level factor (1.0837) divided by the average rate level factor
of each CY.

Cumulative
Rate Level Overall Rate Area in:
Group Rate Change Factor CY 2015 CY 2016 CY 2017
1 --- 1.0000 0.5000 - -
2 2.50% 1.0250 0.3750 0.1250 -
3 5.00% 1.0763 0.1250 0.5278 0.0139
4 3.00% 1.1085 - 0.2917 0.2083
5 -6.00% 1.0420 - 0.0556 0.4306
6 4.00% 1.0837 - - 0.3472
1.0000 1.0000 1.0000

Avg. Rate Factor 1.0189 1.0774 1.0708


On-Level Factor 1.0636 1.0059 1.0120
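
For readers who prefer to check the geometry numerically, below is a minimal sketch (the function name and daily discretization are my own) that computes on-level factors by integrating over writing dates under the uniform-writing, annual-policy assumptions of the parallelogram method; it reproduces the factors above to within small rounding differences rather than exactly.

```python
from datetime import date

def on_level_factor(cy_year, rate_changes, term_days=365):
    """Approximate parallelogram-method on-level factor for one CY of earned premium.

    rate_changes: chronologically ordered list of (effective_date, rate_change).
    Assumes annual policies written uniformly over time and earned evenly.
    """
    cum = [1.0]                                    # cumulative rate level factors
    for _, change in rate_changes:
        cum.append(cum[-1] * (1.0 + change))
    current_level = cum[-1]

    cy_start = date(cy_year, 1, 1).toordinal()
    cy_end = date(cy_year + 1, 1, 1).toordinal()

    weight = weighted = 0.0
    for w in range(cy_start - term_days, cy_end):          # possible writing dates
        overlap = min(w + term_days, cy_end) - max(w, cy_start)
        if overlap <= 0:
            continue                                        # earns nothing in this CY
        factor = cum[0]
        for i, (eff, _) in enumerate(rate_changes):
            if w >= eff.toordinal():
                factor = cum[i + 1]                         # rate level at the writing date
        weight += overlap
        weighted += overlap * factor

    return current_level / (weighted / weight)

changes = [(date(2015, 1, 1), 0.025), (date(2015, 7, 1), 0.05), (date(2016, 3, 1), 0.03),
           (date(2016, 9, 1), -0.06), (date(2017, 3, 1), 0.04)]
for cy in (2015, 2016, 2017):
    print(cy, round(on_level_factor(cy, changes), 4))       # ~1.0636, ~1.0059, ~1.0120
```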


The on-level earned premium is then the product of the earned premium and the on-level factor for each CY.

Calendar Earned On-Level On-Level


Year Premium Factor Earned Prem.
(1) (2) (3) (4)
2015 478,718 1.0636 509,163
2016 534,021 1.0059 537,167
2017 559,977 1.0120 566,711

(4) = (2) * (3)

If we are on-leveling PY premium, the steps are the same, but the calculation of the areas is much simpler. Since
the rate change lines are parallel to the PY boundaries, the area of each rate level is simply its width (the fraction
of the year written at that rate level). Once we have the new areas we proceed as before using PY earned premiums.
If policies were six months rather than twelve, the picture and steps would be the same, except the slopes of the
lines in the picture would be different.

Assuming 6-month policy periods and CY data, the mechanics would be the same, but the calculation of the
areas would be slightly different, as illustrated below:

Once we have calculated the area for each rate level within each time period we can calculate the on-level factor
for each period as done in the following table.


Rate Level Overall Cumulative Area in:


Group Rate Change Rate Factor CY 2015 CY 2016 CY 2017
1 --- 1.0000 0.2500 - -
2 2.50% 1.0250 0.5000 - -
3 5.00% 1.0763 0.2500 0.5031 -
4 3.00% 1.1085 - 0.3858 0.0278
5 -6.00% 1.0420 - 0.1111 0.3889
6 4.00% 1.0837 - - 0.5833
1.0000 1.0000 1.0000
Avg. Rate Factor 1.0316 1.0849 1.0682
On-Level Factor 1.0505 0.9989 1.0145
Again, the on-level earned premium is then the product of the earned premium and the on-level factor.

Calendar Earned On-Level On-Level


Year Premium Factor Earned Prem.
(1) (2) (3) (4)
2015 478,718 1.0505 502,916
2016 534,021 0.9989 533,431
2017 559,977 1.0145 568,113
(4) = (2) * (3)
As you can see, despite using the same rate changes on the same dates, the on-level earned premium is
slightly different when policies are 6-month rather than annual, since the policy term affects the amount of
time each rate level is in effect during each calendar year.

Returning to our CY example with annual policies, assume a law went into effect on 3/1/17 that decreased the
premium on all policies by 5.0%, including those currently in-force (in-force being the key—a law change that
only affects new policies can be treated just like a standard rate change). This change only affects CY 2017 and
forward—prior CYs are unchanged from before. First, we must find the new areas in CY17—the areas of rate
levels 3 and 6 are unchanged. We can calculate the area of 4b as a triangle, resulting in 0.125. We can find the
area of 4a in CY17 as the prior area of 4 (0.2083) minus the area of 4b (0.125), resulting in 0.0833. The area of 5a in
CY17 can be found as 2/12 (width) x 5/12 (avg. height) = 0.0694. The area of 5b is then the area of the old 5
(0.4306) minus 0.0694, equaling 0.3612.


Now that we have the area of each, we must find the cumulative rate level factors. Everything to the left of the
change is unaffected. The cumulative rate level of 4b is simply the rate level of 4a times 0.95 (1.0 – 5.0%), and
the rate level of 5b is the rate level of 5a times 0.95 as well. The cumulative rate level of 6, after the law change,
is the cumulative rate level of 5b times 1.04 (due to the 4.0% rate increase). This new value for rate level 6 is
the current rate level used in the numerator when finding the on-level rate factors. We can find the CY17
weighted average rate factor as the sum of each area times its rate level factor. The areas used in any exam
problem would certainly be less tedious to calculate than this example.

Prior to Law Change (as before)                  After Law Change

Rate Level   Overall        Cumulative           Rate Level   Overall        Cumulative
Group        Rate Change    Rate Factor          Group        Rate Change    Rate Factor
1            ---            1.0000
2            2.50%          1.0250
3            5.00%          1.0763
4a           3.00%          1.1085               4b           -5.00%         1.0531
5a           -6.00%         1.0420               5b           -5.00%         0.9899
                                                 6            4.00%          1.0295

The first major weakness of the parallelogram method is that it assumes policies are written evenly through
the year. This may be a reasonable assumption in some cases, though it is not reasonable if the book of business
is growing/shrinking or policy sales are skewed seasonally (e.g., boatowners insurance in the
spring/summer). In situations where policy writings are not uniform we can break the periods down into
smaller timeframes (e.g., quarters or months) to reduce the effect, or calculate the actual distribution of
writings to use in calculating the areas.

The second major weakness of the parallelogram method is that it is applied at the aggregate level. As such, it
may be a reasonable estimate in aggregate, while being a poor estimate by classification, if rate changes vary
by classification. This is a major weakness because it is not appropriate to perform any classification analysis
on such data.

Due to these weaknesses, coupled with increased computing power, most companies now use the extension of
exposures method. The parallelogram method is really only used in practice if for some reason extension of
exposures is impractical, or for use as a reasonability check.

Premium development

In some situations, we may need to develop premium further to its ultimate value. This scenario may happen when
using an incomplete year of data (generally a PY) or in a line of business with premium audits. Some commercial
lines base the initial premium on an estimate of exposure, as the actual amount will not be known until the year
is complete. Once complete, the exposure will be audited and final premium billed.

If we decide to use PY earned premium in our indication rather than CY earned premium, our latest year may
be incomplete (e.g., PY 2017 as of 12/31/2017 is only half complete). We do not know what changes or
cancelations will occur to PY17 after 12/31/17—additionally, the remainder of the premium has yet to be earned. To
estimate how PY premium will develop to ultimate we can review historic ratios of development over time (as
in the loss development method).

If there are premium audits, PY premium will develop even further, and beyond 24 months. If for example the
premium audit is complete 6 months after the policy expires, as of 12/31/17 only half of PY16 policies have
undergone their premium audit (those written in the first half of 2016 would expire in the first half of 2017 and thus
undergo their audit by 12/31/17). PY16 is at 24 months of maturity as of 12/31/17, and there is additional
development expected.


The following example shows potential PY development from 12 months (the end of the PY) to ultimate,
including premium audits. At the end of the PY (@12 months) approximately half of the premium in each PY has
been earned. At 24 months of development all premium is earned, and half of the policies have undergone their
premium audit (the example assumes premiums increase slightly at audit, which is standard in practice),
therefore there is significant development from 12-24 months—we select an age-to-age factor of 2.06. At 36
months no further premium has been earned, but the remaining half of the policies have undergone their audit,
causing a slight increase from 24-36. We can then develop each PY's earned premium to ultimate.

Policy Earned Premium


Year 12 24 36 48 60
2013 470,931 974,827 1,014,795 1,014,795 1,014,795
2014 496,361 1,017,540 1,056,207 1,056,207
2015 523,165 1,088,183 1,133,887
2016 551,939 1,136,994
2017 581,445

Policy Age-to-Age Factors


Year 12 - 24 24 - 36 36 - 48 48-60 60-Ult
2013 2.070 1.041 1.000 1.000
2014 2.050 1.038 1.000
2015 2.080 1.042
2016 2.060

Selected 2.060 1.040 1.000 1.000 1.000


Age-Ult 2.142 1.040 1.000 1.000 1.000

Depending on the number of premium audits, and their timing, we could see development beyond 36 months.
If we used an incomplete PY, but policies were not subject to premium audits we would follow the same
procedure, but would not expect any development beyond 24 months. This concept of premium development
does not apply to CY premiums since they are locked at the end of the CY.
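
As a quick illustration of the mechanics (a sketch only, using the latest diagonal and selected factors from the tables above):

```python
# Develop the latest diagonal of PY earned premium to ultimate using the
# selected age-to-ultimate factors above (2.142 at 12 months, 1.040 at 24, 1.000 after).
latest_diagonal = {2013: 1_014_795, 2014: 1_056_207, 2015: 1_133_887,
                   2016: 1_136_994, 2017: 581_445}
age_to_ult = {2013: 1.000, 2014: 1.000, 2015: 1.000, 2016: 1.040, 2017: 2.142}

ultimate_premium = {py: latest_diagonal[py] * age_to_ult[py] for py in latest_diagonal}
```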

Premium trend

In addition to adjusting historical premiums to current rate levels, and developing them to ultimate (if
necessary), we must also trend premiums from their historic levels to the levels expected in the future. The
difference between this adjustment and the rate level adjustment is a bit confusing at first since they are doing
similar things. Once premiums have been adjusted to current rate levels there will often still be other changes
that need to be accounted for via the premium trend. These items include changes in the composition of the
book of business (e.g., moving toward higher cost insureds).

We are going to measure trends in the average premium over time. Pure inflationary changes will affect our
rate level factors (i.e., our rate changes should capture any inflationary changes in losses and other costs) but
not our premiums directly (e.g., even if inflation is not captured correctly in our rate changes it won't show
up in our premium trends because premiums only increase if we change our rates, regardless of actual
inflation). The one place inflation does affect our premium trends is if we have an inflation sensitive rating
variable which is indexed to increase automatically with inflation, such as amount of insurance (AOI) purchased
in homeowners (e.g., $350,000).

In addition to any AOI style variable, premium trends will capture changes to the distribution of policies sold.
For example, has there been a trend towards higher or lower cost policies? Has there been a one-time shift in
premium levels due to the company raising deductibles, or due to the purchase or merging of books of business?
These distributional changes are primarily what premium trend will be measuring. As such, we want to
estimate not only how trends have changed in the past, but also how we anticipate them changing in the future.
If we expect the future to be the same as the past we can use one-step trending, but if we expect them to be
different (e.g., due to a one-time change) we can use two-step trending.


To measure premium trend we use average premium, as total premium would be skewed by any growth (or
shrinking) of the company. We use earned premium in most other places, but in this situation we will generally
want to use written premium to measure premium trend as any trends that show up in written premium will
later show up in earned premium (i.e., it is a leading indicator). Optimally we will want to measure annual trend
by quarter (e.g., 2016 Q3 v. 2015 Q3) to be as responsive as possible, but we can measure trend using annual
data points if sufficient quarterly data is unavailable.

It is important that we use on-level premium to measure trend; otherwise, our trend selection will pick up the effects
of rate changes and we will double count their effects (once in the trend and again in the on-level adjustment).

Given the following on-level written premium and written exposure by calendar quarter, we would calculate
the average on-level written premium, annual change (trend), and make a selection as follows:

           On-level       Written      Average On-level   Annual
Quarter    Written Prem.  Exposures    Written Prem.      Change
(1)        (2)            (3)          (4)                (5)
2015 Q1 127,786 251 509.11 -
2015 Q2 129,727 254 510.74 -
2015 Q3 131,432 257 511.41 -
2015 Q4 133,299 260 512.69 -
2016 Q1 135,167 263 513.94 0.95%
2016 Q2 137,133 266 515.54 0.94%
2016 Q3 138,958 269 516.57 1.01%
2016 Q4 140,832 272 517.76 0.99%
2017 Q1 142,790 275 519.24 1.03%
2017 Q2 144,709 278 520.54 0.97%
2017 Q3 146,638 281 521.84 1.02%
2017 Q4 148,575 284 523.15 1.04%

Selected: 1.00%
(4) = (2) / (3)
(5) = (4) / (4) Prior Yr -1.0
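
As a check of the mechanics, a minimal sketch of columns (4) and (5) for the third-quarter points (values taken from the table above):

```python
# Average on-level written premium and its annual (year-over-year) change.
q3 = {2015: (131_432, 257), 2016: (138_958, 269), 2017: (146_638, 281)}   # (OL WP, written exposures)
avg = {yr: wp / exposures for yr, (wp, exposures) in q3.items()}
annual_change = {yr: avg[yr] / avg[yr - 1] - 1.0 for yr in (2016, 2017)}   # ~1.01%, ~1.02%
```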

One-step trending

If we expect future trend to be equal to historical trend, and there haven’t been any large one-time shifts, we
can use one step trending. To do this, we trend our historic on-level premium by our selected trend rate to the
period the rates will be in effect. The trend length is often the trickiest part for students at the beginning. Here
is what the text has to say on the trend period:

“Assuming the ratemaking analysis uses written premium as the basis of the trend selection
and earned premium for the overall rate level indications (which is standard), the trend period
is typically measured as the length of time from the average written date of policies with
premium earned during the historical period to the average written date for policies that will
be in effect during the time the rates will be in effect.”

“Alternatively, some companies determine the trend period as the length of time from the
average date of premium earned in the experience period to the average date of premium
earned in the projected period.”


The text presents one example using average earned dates and uses average written dates for the rest. Based
on the text, it should be acceptable to trend from the average earned date to the average earned date, or from
the average written date to the average written date. As we will see, dates of each will vary, but the trend
lengths of each are the same. That being said, the sample solutions in recent examiner’s reports have tended to
use average earned dates for trending premium unless we are using two-step trending (covered in the next
section) or we measure the trend using written premium (or are told the given trend was measured using
written premium). So, while I feel using either is entirely valid based on the text (other than when using two-
step trending), the examiner’s reports are sometimes inconsistent, so it is likely safer to use average earned
dates unless we are two-step trending or the trend rate is measured using written premium. Also, calculating
average earned dates is easier in most cases (and aligns with average accident dates we use for trending losses
discussed in the next chapter).

Let’s look at an example where we will use CY 2016 earned premium to estimate rates effective for one year
starting 7/1/18, with annual policies. This is displayed visually in the following graph. The average earned date
of premium earned in CY16 is simply the mid-point of the year, 7/1/16. The average written date of premium earned in
CY16 is a bit trickier. We have to remember CY16 earned premium will include earnings from policies written
1/1/15 (technically 1/2/15 but use round dates for simplicity) through 12/31/16— this is represented by the
dashed green lines (showing when these policies are being earned). The average written date for policies
earned in CY16 can then be seen to be 1/1/16.

The rates will be in effect for one year starting 7/1/18 with annual policies—therefore the prospective policy
period is represented by the parallelogram in the following graph. The average written date is simply the mid-
point of the timeframe the rates are expected to be in effect (equivalently, the effective date plus one half the
effective timeframe). The average written date in this example is then 1/1/19. The average earned date in the
prospective period is halfway between the date the first policy is written and the date the last policy expires
(equivalently, the effective date plus one half of the sum of the effective period and the policy term). This average
earned date of the prospective period is then 7/1/19, which is the average of the effective date (7/1/18) and the date
the last policy in the period expires (7/1/20), or equivalently 7/1/18 plus twelve months (one half of the sum of the
twelve-month effective period and the twelve-month policy term).

Using average earned dates, we would trend from 7/1/16 to 7/1/19, or 3.0 years (represented in blue below).
Using average written dates, we would trend from 1/1/16 to 1/1/19, or 3.0 years (represented in green below).
As you can see, the trend periods are equivalent—which will always be the case (unless the length of the policy
term changes between the historic and prospective periods).

Using the same logic, the trend length for CY17 premium will be 2.0 years. Similarly, the CY15 premium trend length
is 4.0 years.

Using our 1.0% premium trend selection from before we can calculate the trended on-level earned premium
(OLEP) for each CY as follows:


Trended
On-level On-level
Calendar Earned On-Level Earned Trend Trend Earned
Year Premium Factor Premium Length Factor Premium
(1) (2) (3) (4) (5) (6) (7) (3) = Calculated Earlier
2015 478,718 1.0636 509,163 4.00 1.0406 529,837 (4) = (2) * (3)
2016 534,021 1.0059 537,167 3.00 1.0303 553,444 (6) = 1.01^(5)
2017 559,977 1.0120 566,711 2.00 1.0201 578,101 (7) = (4) x (6)

Finding the estimated trended (ultimate) on-level earned premium is the goal of this chapter as it is an integral
input into the overall rate indication. Since we are using CY data, premium development is not needed in this
example.
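
A minimal sketch of the table above, assuming the 1.0% selected trend and the average-earned-date trend lengths:

```python
# One-step trending: trended OLEP = OLEP x (1 + trend)^(trend length).
olep = {2015: 509_163, 2016: 537_167, 2017: 566_711}
trend = 0.01
trend_length = {cy: 2019 - cy for cy in olep}        # 7/1/xx to 7/1/19, in years
trended_olep = {cy: olep[cy] * (1 + trend) ** trend_length[cy] for cy in olep}
# {2015: ~529,837, 2016: ~553,444, 2017: ~578,101}
```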

If instead we wrote 6-month policies, CY16 earned premium would consist of policies written 7/1/15 –
12/31/16 represented by the dashed green lines (showing when these policies are being earned) resulting in
an average written date of 4/1/16. The “trend to” date using average written dates would still be 1/1/19 since
the average written date in the prospective period does not depend on the length of the policies written (only
the length of time the rates will be in effect). This results in a trend length of 2.75 years. Using average earned
date the “trend from” date would be 7/1/16 and “trend to” date is 4/1/19 (7/1/18 effective date plus (12+6)/2
= 9 months)—again resulting in 2.75 years.
(Graph: the 2.75-year trend period, shown using both average written dates and average earned dates.)

Similarly, the trend length for CY15 would be 3.75 years and the trend length for CY17 would be 1.75 years.

If the data groupings we are trending are PYs rather than CYs, it is easier to use average written dates since it
is simply the mid-point of the period. The trend to date using average written dates is unchanged (assuming
the length of time the rates will be in effect is unchanged). The average written date of PY15 is 7/1/15, therefore
the trend length for PY15 would be 3.5 years. If so inclined, we could use average earned dates by trending
from 1/1/16 to 7/1/19, resulting in 3.5 years as well. This is shown graphically in the following illustration.

Using the same logic, the trend length for PY16 would be 2.5 years, and the trend length for PY17 would be 1.5
years. This example assumes annual policies.


(Illustration: the PY15 trend period of 3.5 years, using either average written dates or average earned dates.)

The prospective policy period will always be shaped like a policy year since it represents the policies to be
written in the prospective period. The slope of the parallelogram will depend on whether policies are 6 months or 12
(this will affect the average earned date but not the average written date), while the width will depend on the
timeframe the rates are expected to be in effect (all examples above assumed one year starting 7/1/18, but it
doesn’t have to be one year, and can start on any date).

Two-step trending

Two-step trending is appropriate when there has been a large one-time shift in the distribution of policies sold
or we expect future trends to be different than historic trends. When using two-step trending we must
use average written dates rather than average earned dates (under one-step trending we can use either).

In the case of a large one-time shift we will first “trend” the historic on-level premiums to the mid-point of the
latest period available in the data. In our example above, we have quarterly data available, so we would first
trend to the mid-point of 2017 Q4, which is 11/15/17. I put “trend” in parentheses because we are not trending
in the traditional sense, but rather adjusting each year’s average premium to the average premium level of the
most recent period available (2017 Q4 in our example). The adjustment factor we use is:

Current Premium Trend Factor = Avg. On-Level Written Prem. @ Latest Period / Avg. On-Level Earned Prem. @ Historical Period

Note, the numerator uses on-level written premium while the denominator uses on-level earned premium. This
is because we are “trending” earned premium to the level of the latest period, which is most accurately
measured using on-level written premium (remember, written premium is a leading indicator of earned
premium).

The next step is to trend from the latest period to the prospective period—hence “two-step” trending. This
second step trends in the traditional sense (i.e., (1+i)^n) where we use a trend rate based on future expectations.
The 2nd step trend period runs from the mid-point of the latest period (11/15/17 in our example) to the
average written date in the prospective period. Assuming the rates will be effective for one year starting 7/1/
18 (as we assumed before) the 2nd step trend length would be 1.125 years (1.5 months to 1/1/18, then another
year to the average written date of 1/1/19).

If the historical average premium is volatile, or we wish to use two-step trending absent a one-time change, we
can instead choose to trend historic premium to the mid-point of the latest period in the traditional sense (i.e.,
evaluate trend rates, select a trend, and apply (1+i)^n to the historic premium) in the first step of the two-step
trending procedure.

Graphically, the trend of CY 2016 on-level earned premium would look like the following. The 1st trend length
is 1.875 years, though the length is not needed if using the ratio of average on-level premiums to adjust to the
current premium levels (if we decide to or are instructed to trend traditionally we do need the length—use the
ratio method unless otherwise instructed on the exam). The 2nd step trend length is 1.125 years.


(Graph: 1st step trend of 1.875 years to the latest period, then 2nd step trend of 1.125 years to the prospective period.)

We can calculate the trended on-level earned premium using two-step trending as follows. The current
premium trend factor for each CY earned premium is the current period average on-level written premium
divided by the historical period average on-level earned premium. Note, the 2nd step trend length for each CY
is the same because each CY has been adjusted to the same point in time during the 1st step (i.e., 11/15/17 in
our example). In this example I assume we expect 3.0% prospective premium trend, so the 2nd step trend factor
is simply 1.03^1.125. The trended on-level earned premium is then simply the product of the on-level earned
premium, the 1st step (“current premium”) trend factor, and the 2nd step trend factor.

Average Current Trended


On-level On-Level Premium 2nd Step 2nd Step On-level
Calendar Earned Earned Earned Trend Trend Trend Earned
Year Premium Exposure Premium Factor Length Factor (3.0%) Premium
(1) (2) (3) (4) (5) (6) (7) (8)
2015 509,163 1,001 508.65 1.0285 1.125 1.0338 541,381
2016 537,167 1,046 513.54 1.0187 1.125 1.0338 565,719
2017 566,711 1,093 518.49 1.0090 1.125 1.0338 591,139

2017 Q4 Avg. OL-WP: 523.15

(4) = (2) / (3)


(5) = Current Period Avg. OL-WP / (4)
(7) = 1.03^(6)
(8) = (2) x (5) x (7)
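
The same calculation as a minimal sketch (inputs are the values from the table above; variable names are mine):

```python
# Two-step trending: OLEP x (current premium trend factor) x (1 + future trend)^(2nd step length).
data = {2015: (509_163, 1_001), 2016: (537_167, 1_046), 2017: (566_711, 1_093)}  # (OLEP, earned exposure)
latest_avg_written = 523.15          # 2017 Q4 average on-level written premium
future_trend, step2_length = 0.03, 1.125

step2 = (1 + future_trend) ** step2_length
trended = {}
for cy, (olep, exposure) in data.items():
    step1 = latest_avg_written / (olep / exposure)   # ratio of averages, not (1+i)^n
    trended[cy] = olep * step1 * step2               # ~541,381 / ~565,719 / ~591,139
```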


Basic Ratemaking

Chapter 6 - Loss and LAE


Introduction

As introduced in Chapter 1, the goal of ratemaking is to balance the fundamental insurance equation during the
prospective period in which the rates will be in effect. The ratemaking process requires historical data of each
of the components of the fundamental insurance equation to be restated to the level expected in the time period
the rates will be in effect. This chapter covers the expected loss and LAE component.

Premium = Losses + LAE + UW Expenses + UW Profit

Loss definitions

Paid losses are those that have been paid to claimants. Case reserves are reserves set by the claims professional
for known claims. The sum of paid losses and case reserves are often referred to as reported losses or case
incurred losses (though technically incorrect, in practice they are also often called simply incurred losses). The
actuary estimates the development on known claims and unreported claims, known as IBNR. The sum of
reported claims and IBNR is our estimate of ultimate claims.

Loss aggregation methods

Calendar year loss aggregation includes all transactions that occur within the year regardless of when the
accident occurred, policy was written, or claim was reported. Like CY premium and exposures, CY losses are
also fixed at the end of the period (i.e., they do not develop). CY reported losses are equal to paid losses plus
the change in case reserves during the period:

CY Reported Losses = CY Paid Losses + Change in Case Reserves during the CY

Accident year (AY) losses are grouped by the year of claim occurrence (accident), regardless of when the claim
is reported, policy was written, or payments are made. Unlike CY losses, AY losses will change (develop) after
the year closes—this development is estimated by the actuary. Accident years are commonly defined as
running 1/1/xx – 12/31/xx, but other definitions are possible, such as 7/1/xx-6/30/xx+1, which we would
refer to as a fiscal accident year.

Policy year (PY) losses are grouped by the year in which the policy is written, regardless of when a claim occurs,
is reported, or is paid. Claims within a given policy year can occur over a two-year period since the first claim can
occur on the first day a policy is written while the last claim can occur on the last day of the last policy written
(i.e., written on the last day of the year, expiring one year later). PY claims are not locked at the end of the year;
additional claims from a given policy year will continue to be both paid and reported beyond the end of the
year. As such, like AY losses, we will need to develop PY losses to their ultimate values. The main weakness of
PY loss aggregation is the longer timeframe over which losses occur, resulting in a longer timeframe to get an
accurate estimate of ultimate losses.

Report year (RY) losses are aggregated by the year claims are reported, regardless of when the claim occurred,
is paid, or when the policy was written. Since the cohort of claims within a given RY will be locked at the end of
the year there is no IBNR, but there can and will be development on the case reserves of these claims. RY
aggregation is commonly used with claims-made policies such as medical malpractice, products liability, or
errors and omissions. Claims-made coverages cover claims reported during the policy period, and thus are a natural
match with RY aggregation.

The following example with three sample losses should hopefully make these distinctions more clear. Claim
number 1 will be in PY, AY, and RY 2016, but will contribute to CY16 and CY17. Claim number 2 is in PY16, but
AY and RY 2017 since the claim is not incurred and reported until 2017. Claim number 2 also contributes to


CY17 and CY18. Claim number 3 is in PY, AY, and RY 2017, and only has payments or changes in case reserves in 2017,
so it is contained within CY17 only.

Policy
Claim Effective Date of Report Transaction Incremental Case
Number Date Loss Date Date Payment Reserve*
1 07/01/16 08/15/16 08/15/16 08/15/16 $0 $5,000
09/17/16 $3,000 $3,000
01/14/17 $4,000 $0
2 11/15/16 01/02/17 01/12/17 01/12/17 $0 $25,000
09/30/17 $15,000 $14,000
02/06/18 $12,000 $0
3 01/29/17 05/12/17 05/14/17 05/14/17 $0 $8,000
06/18/17 $8,250 $0
* as of transaction date

Summarizing the data by cohort in the table below, we can see that for example PY16 as of 12/31/16 has $6,000
in reported losses (the sum of claim number 1’s $3,000 in payments and $3,000 in case at the end of the year).
As of 12/31/17, PY16 includes the $7,000 in payments from claim number 1, plus the $15,000 in payments and
$14,000 in case reserves from claim number 2. PY16 as of 12/31/18 includes the same $7,000 from claim
number 1, but now only $27,000 from claim number 2 since the final payments in CY18 were $2,000 less than
the case reserves at the beginning of the year.

CY16 includes only the reported loss transactions during the year (regardless of when the claim was originally
reported), or $6,000 (the sum of claim number 1’s $3,000 in payments and $3,000 in case)—this amount does not
change in the future. CY17 includes $1,000 from claim number 1 ($4,000 in payments plus the negative $3,000
in case reserve change), $29,000 from claim 2 ($15,000 in payments plus $14,000 change in case—relative to
the beginning of the period, not the first transaction), and $8,250 from claim 3 for a total of $38,250. Again, this
amount is transactional, so it does not change in the future—any future changes will be included in future CYs.
Finally, we can see there is only one transaction in CY18—it is a payment of $12,000 with $14,000 reduction in
case reserve, resulting in a negative $2,000 in CY18 reported losses (from these 3 sample claims only).

The remaining cohort values at the three valuation dates are shown in the following table—please ensure you
can calculate each.

Reported Losses:
Aggregation Valuation Date
Type Claims 12/31/16 12/31/17 12/31/18
PY 2016 1&2 $6,000 $36,000 $34,000
AY 2016 1 $6,000 $7,000 $7,000
RY 2016 1 $6,000 $7,000 $7,000
CY 2016 -- $6,000 $6,000 $6,000
PY 2017 3 $8,250 $8,250
AY 2017 2&3 $37,250 $35,250
RY 2017 2&3 $37,250 $35,250
CY 2017 -- $38,250 $38,250
CY 2018 -- ($2,000)
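
To make the bookkeeping explicit, here is a minimal sketch (structure and names are mine) that reproduces the CY and AY reported losses at the 12/31/17 valuation from the transaction table above; the PY and RY groupings follow the same pattern, keyed on policy effective year and report year instead of accident year.

```python
from datetime import date

# (claim #, date of loss, transaction date, incremental payment, case reserve after txn)
txns = [
    (1, date(2016, 8, 15), date(2016, 8, 15),      0,  5_000),
    (1, date(2016, 8, 15), date(2016, 9, 17),  3_000,  3_000),
    (1, date(2016, 8, 15), date(2017, 1, 14),  4_000,      0),
    (2, date(2017, 1, 2),  date(2017, 1, 12),      0, 25_000),
    (2, date(2017, 1, 2),  date(2017, 9, 30), 15_000, 14_000),
    (2, date(2017, 1, 2),  date(2018, 2, 6),  12_000,      0),
    (3, date(2017, 5, 12), date(2017, 5, 14),      0,  8_000),
    (3, date(2017, 5, 12), date(2017, 6, 18),  8_250,      0),
]

valuation = date(2017, 12, 31)
cy_reported, paid, case, accident_year = {}, {}, {}, {}

for claim, dol, txn, pay, reserve in sorted(txns, key=lambda t: t[2]):
    if txn > valuation:
        continue                                   # ignore transactions after the valuation date
    # CY reported = payments made during the CY + change in case reserves during the CY
    cy_reported[txn.year] = cy_reported.get(txn.year, 0) + pay + reserve - case.get(claim, 0)
    paid[claim] = paid.get(claim, 0) + pay
    case[claim] = reserve
    accident_year[claim] = dol.year

# AY reported = paid to date + case outstanding, grouped by year of occurrence
ay_reported = {}
for claim in paid:
    ay = accident_year[claim]
    ay_reported[ay] = ay_reported.get(ay, 0) + paid[claim] + case[claim]

print(cy_reported)   # {2016: 6000, 2017: 38250}
print(ay_reported)   # {2016: 7000, 2017: 37250}
```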


Common ratios involving losses

Below we will discuss four common loss statistics used in the ratemaking process (also covered in Chapter 1).
It is important to note the definition of each piece of data used (e.g., paid or reported, earned or written, CY, AY,
PY, or RY) as well as the valuation date.

Frequency is the measure of claims per exposure. Generally, the numerator is reported claims and the
denominator is earned exposure, but this may vary based on the need, so it is important to document the types
of claims and exposures used. Changes in frequency provide information on the incidence of claims and the
effectiveness of underwriting changes.

Frequency = # of Claims / # of Exposures

Severity is the average claim cost. Severity may be measured in several ways—paid severity (paid
losses/closed claims) and reported severity (reported losses/reported claims) are common examples. ALAE
may or may not be included. As such, it is important to document the types of loss and claims used, as well as
the inclusion or exclusion of ALAE. Changes in severity provide information on loss trends and changes in
claims handling.

Severity = Losses / # of Claims
Pure Premium (a.k.a. loss cost or burning cost) is the average loss per exposure. Generally, pure premium is
calculated using reported (or ultimate) losses and earned exposures, and may or may not include ALAE.

Pure Premium = Losses / # of Exposures

Pure premium is also equal to frequency times severity:

Pure Premium = Losses / # of Exposures = (# of Claims / # of Exposures) x (Losses / # of Claims) = Frequency x Severity

Changes in pure premium provide information on trends in overall loss costs, which may be due to either
frequency or severity, or a combination of the two.

Loss Ratio is the ratio of loss to premium. As with the other ratios, the definition of the values in the numerator
and denominator may vary by situation, so it is important to document and understand each. Loss ratio also
may or may not include ALAE and ULAE. The most common definition uses reported (or ultimate) losses and
earned premium. The loss ratio is also equal to the pure premium divided by the average premium.

Loss Ratio = Losses / Premium = Pure Premium / Average Premium

Adjustments to losses

The goal of ratemaking is to estimate costs during the prospective policy period. We find this expected value
by adjusting historic data to the prospective level. If our historic data includes losses that are not representative
of the expected value we should remove these losses and replace them with their expected values. The two
most common situations in which this occurs are the presence of individual shock losses and catastrophic losses.

Individual shock losses

Shock losses are extraordinarily large individual claims. Examples of shock losses might include a multi-
claimant liability claim or permanent total disability of a young worker. Shock losses are generally not an issue
for large insurers since any given loss is such a tiny piece of total losses it doesn’t affect the aggregate, but shock


losses can be an issue for smaller insurers or smaller lines of business. Shock losses are expected to happen
from time to time, but we do not want their presence (or absence) to skew our rate indication, so we essentially
smooth out the random fluctuations by removing the amount of actual loss over a certain level, and replacing
it with the expected value of losses over that same level (e.g., remove losses excess $1M, but replace them with
the expected losses excess $1M).

The threshold we use to cap large losses is dependent on our desire to minimize the volatility of rates due to
large losses (e.g., larger books of business will have larger thresholds) and the desire to include as many actual
losses as possible. The provision we add back in represents the expected value of the losses in the layer
removed, and will be based on data over a longer time period.

Below is a sample excess loss factor calculation. We are given Cols (1) – (3), the total reported ground-up losses,
the number of losses excess our threshold, and the ground-up reported losses on those individual claims that
breach our threshold. From this, we can calculate the amount of loss excess the threshold, shown in Col (4), as
the ground-up losses of those excess the threshold minus the first $2M of each claim. The non-excess losses in
Col (5) are then the total reported losses minus the excess losses. Our excess ratio is then the ratio of excess
losses to non-excess losses, shown in Col (6). This ratio is volatile year over year as we would expect, but we
smooth this out by selecting a long-term weighted average (all years’ excess losses divided by all years’ non-
excess losses in this example). Our excess loss factor is then 1.0 plus the excess ratio, or 1.030. The excess loss
factor is then applied to the non-excess losses in each of the historical periods—as shown in Col (8).

Smoothed
Accident Reported # Excess Ground-Up Losses Excess Non-Excess Excess Reported
Year Losses Claims Excess Losses $2,000,000 Losses Ratio Losses
(1) (2) (3) (4) (5) (6) (8)
2003 $257,175,295 9 $21,287,403 $3,287,403 $253,887,892 1.29% $261,514,863
2004 $262,104,202 11 $38,468,540 $16,468,540 $245,635,662 6.70% $253,014,730
2005 $261,064,846 12 $27,030,900 $3,030,900 $258,033,946 1.17% $265,785,467
2006 $269,170,683 12 $27,399,012 $3,399,012 $265,771,671 1.28% $273,755,639
2007 $276,290,533 6 $21,508,860 $9,508,860 $266,781,673 3.56% $274,795,982
2008 $283,876,393 9 $26,088,534 $8,088,534 $275,787,859 2.93% $284,072,720
2009 $295,300,926 14 $33,344,010 $5,344,010 $289,956,916 1.84% $298,667,426
2010 $291,753,982 12 $42,831,756 $18,831,756 $272,922,226 6.90% $281,121,002
2011 $295,612,303 5 $16,726,495 $6,726,495 $288,885,808 2.33% $297,564,141
2012 $309,851,761 13 $33,881,211 $7,881,211 $301,970,550 2.61% $311,041,958
2013 $309,553,250 13 $37,329,864 $11,329,864 $298,223,386 3.80% $307,182,226
2014 $314,221,597 8 $19,565,336 $3,565,336 $310,656,261 1.15% $319,988,594
2015 $323,205,757 6 $20,984,166 $8,984,166 $314,221,591 2.86% $323,661,029
2016 $329,624,009 4 $13,240,432 $5,240,432 $324,383,577 1.62% $334,128,288
2017 $345,071,778 9 $35,334,000 $17,334,000 $327,737,778 5.29% $337,583,251
Total $4,423,877,315 143 $415,020,519 $129,020,519 $4,294,856,796 3.00% $4,423,877,315
(7) Excess Loss Factor: 1.0300
(3) Ground-up losses of those individual claims above $2M
(4) = (3) - (2) * $2M
(5) = (1) - (4)
(6) = (4) / (5)
(7) = 1.0 + (6) Total
(8) = (5) * (7)
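
A minimal sketch of Cols (4)–(8) for the last two accident years, using the table’s inputs and its all-year totals for the long-term excess ratio:

```python
THRESHOLD = 2_000_000
# year: (total reported losses, # claims excess the threshold, ground-up losses on those claims)
data = {2016: (329_624_009, 4, 13_240_432),
        2017: (345_071_778, 9, 35_334_000)}

excess = {yr: big - n * THRESHOLD for yr, (total, n, big) in data.items()}       # Col (4)
non_excess = {yr: total - excess[yr] for yr, (total, n, big) in data.items()}    # Col (5)

# long-term excess ratio and excess loss factor, from the all-year totals in the table
TOTAL_EXCESS, TOTAL_NON_EXCESS = 129_020_519, 4_294_856_796
elf = 1.0 + TOTAL_EXCESS / TOTAL_NON_EXCESS                                      # ~1.030
smoothed = {yr: non_excess[yr] * elf for yr in data}                             # Col (8)
```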

As we will see later, trend has a greater effect on higher layers of loss than lower layers, so this procedure
should be performed on losses that have been trended to the prospective period (alternatively, we could de-
trend the limit applied to each year). Another option simply mentioned in the text is to fit losses to a distribution
and simulate claims experience in order to calculate the expected excess losses.


Catastrophe losses

Catastrophe (CAT) losses are man-made or natural disasters that result in a large number of claims, whose total
loss is very large. Examples include hurricanes, tornadoes, hail storms, earthquakes, explosions, oil spills, etc.
As we did with shock losses, we want to remove CAT losses, above our chosen threshold (the two thresholds
do not need to be the same), and replace them with the expected CAT losses above that threshold.

Non-modeled catastrophes are CATs that we want to smooth, but that occur regularly enough that we can
estimate them using long term averages (e.g., 20-30 years of hail experience, trended to current levels). So, for
example, our rate indication may be based on the last 5 years of experience, but we may decide to measure the
average non-modeled CAT losses over 30 years instead of 5. These losses will generally be measured relative
to non-CAT losses, so we will apply the selected ratio to non-CAT projected losses. Alternatively, we can develop
a pure premium for non-modeled CAT losses separately (as done in appendix B). Appendix B uses Amount of
Insurance Years (AIY) as the exposure base since it is inflation sensitive (otherwise the ratio would increase
overtime due to inflation). AIY is the sum of the total amount of insurance for all policies in-force during the
calendar year.

Modeled catastrophes are events that are so low frequency and high severity that using a long-term average
is inappropriate. CAT models are sophisticated stochastic models built by professionals from various
disciplines (e.g., meteorologists, statisticians, insurance professionals) that estimate the frequency and severity
of the modeled CAT’s effect on an insurer’s book of business. The modeled CAT provision is then added to the
non-modeled CAT and non-CAT losses to determine the expected aggregate losses. The actual CAT losses will
be lower than the provision in most years, but significantly higher in years the given CAT actually occurs.

Additionally, companies use non-pricing actions to control their CAT risk and minimize the effect of any one
event. Examples include placing restrictions on new business or requiring higher deductibles in risky CAT
areas, or purchasing reinsurance to cover CAT prone areas. Companies may also charge a higher profit and
contingency provision in CAT prone areas to compensate for the additional risk undertaken.

Reinsurance costs

Reinsurance is purchased by primary insurers to manage the risk they retain on their books. Reinsurance
contracts can either be proportional (e.g., reinsurer covers 25% of losses in exchange for 25% of the premiums)
or non-proportional (e.g., the reinsurer covers 50% of the first $10M excess of $25M total loss). Since the same
proportion of premium and loss is ceded (transferred to the reinsurer), proportional reinsurance may not
necessarily need to be explicitly accounted for in the rate indication. Non-proportional reinsurance on the other
hand will. To account for non-proportional reinsurance in our rate indication we can reduce the expected losses
by the expected amount of losses ceded to the reinsurers, and account for the costs of the reinsurance by
reducing our premiums by the premium ceded. Alternatively, we could account for the net cost of reinsurance
(ceded premium – expected ceded losses) as an expense item in the overall rate.

Changes in coverage or benefit levels

As we are estimating losses in the prospective period, we need to adjust all aspects of the historical losses to
this expected level. This certainly includes changes in coverage or benefit levels. For example, if a company
expands or contracts the amount of coverage on certain losses, or law changes/court rulings affect benefit
levels, we will need to adjust the historic losses (which are at historic coverage and benefit levels) to the
coverage and benefit levels expected in the prospective period.

The following simple example calculates the effects of a reduction in loss limit from $10,000 to $5,000 (i.e., a
reduction in coverage). Changes in coverage or benefit level can also have an indirect effect on insured
behavior, which can be difficult to measure, but should be considered to the extent possible by the actuary (e.g.,
if a sublimit is reduced, the insured may purchase a policy floater, further reducing losses if the sublimit is
secondary to the floater).


Losses Losses
Claim Capped Capped Effect of
Number at $10,000 at $5,000 Change
(1) (2) (3)
1 $3,250 $3,250 0%
2 $4,900 $4,900 0%
3 $5,875 $5,000 -15%
4 $7,500 $5,000 -33%
5 $10,000 $5,000 -50%
Total $31,525 $23,150 -27%

(1) = Given
(2) = Min[(1), $5,000]
(3) = (2) / (1) - 1.0
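
A quick Python check of the capping arithmetic above (purely illustrative):

```python
# Cap each claim at the new $5,000 limit and compare to losses at the old $10,000 limit.
losses_at_10k = [3_250, 4_900, 5_875, 7_500, 10_000]     # column (1)
losses_at_5k = [min(x, 5_000) for x in losses_at_10k]    # column (2)
effect = sum(losses_at_5k) / sum(losses_at_10k) - 1.0    # total effect of change
print(f"{effect:+.1%}")                                  # about -27%, as in the table
```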

Workers compensation benefits are set by state law; changes in these laws, or court interpretations of these
laws, can cause both direct and indirect changes to expected losses. For example, increasing benefit levels will
lead to a quantifiable increase in expected loss, but the increase may also make some covered workers more
likely to file a claim or to stay out of work longer when injured.

Let’s look at an example that quantifies the effect of a law change on workers compensation benefits with the
following assumptions:

· The statewide average weekly wage (SAWW) is currently $1,500.


· The benefit compensation rate is 75.0% of the worker’s pre-injury wage.
· The current minimum indemnity benefit is 56.25% of SAWW, and the current maximum indemnity benefit
is 131.25% of SAWW.
· The minimum indemnity benefit is increasing to 75.0% of SAWW while the maximum indemnity
benefit is decreasing to 100.0% of SAWW.
· The distribution of workers and their wages (relative to SAWW) are given in the following table:

Ratio to          Number      Total
Average           of          Weekly
Weekly Wage       Workers     Wages
    (1)             (2)         (3)
<50% 14 $8,750
50-75% 48 $45,216
75-100% 54 $71,280
100-133.3% 38 $67,450
133.3-175% 24 $55,800
>175% 22 $64,350
Total 200 $312,846

In order to calculate the benefit levels before and after the change we must calculate the minimum and
maximum benefits in dollars, as well as the wages (relative to SAWW) where the minimum and maximum
apply. From the information given, we know the minimum and maximum benefits relative to SAWW—shown
in Col (1) in the following table. From that we can calculate the minimum and maximum benefit dollars, Col
(2), as the ratio of the benefit to SAWW multiplied by SAWW. We also need to know at what wages these
minimum and maximums will apply. In dollars this is the benefit divided by the compensation rate, shown in
Col (3). For the wage at which the limits apply, relative to SAWW, we divide the dollar wage by SAWW, as
done in Col (4). Now we know what the current and proposed minimum and maximum indemnity benefits
are, as well as the wage levels at which they will apply.


                 Benefit/    Corresponding    Corresponding    Wage/
                 SAWW        Benefit          Wage             SAWW
                   (1)          (2)               (3)            (4)
Current Min. 0.5625 $843.75 $1,125.00 75.00%
Current Max 1.3125 $1,968.75 $2,625.00 175.00%
Proposed Min. 0.7500 $1,125.00 $1,500.00 100.00%
Proposed Max 1.0000 $1,500.00 $2,000.00 133.33%

(1) = Given
(2) = (1) * SAWW
(3) = (2) / Compensation Rate
(4) = (3) / SAWW

Now that we know the minimum and maximum at the current and proposed levels, and the wage ranges to which
they apply, we can calculate the total indemnity benefits at the current and proposed levels. The current
minimum applies to anyone making 75.0% or less of SAWW, so we apply the current minimum benefit to those
ranges in Col (4) below. Similarly, the current maximum applies to anyone making 175.0% or more of SAWW,
so we apply the current maximum benefit to the top range in Col (4). The total current benefits for those ranges
are then the minimum or maximum benefit multiplied by the number of workers in each range. Between those
two values there is no minimum or maximum applied, so the total benefit is simply the total weekly wage in
Col (3) multiplied by the compensation rate (75% in this example). We apply the same logic using the proposed
minimum and maximum to find the proposed total benefits. The benefit change is then the proposed total
benefit divided by the current total benefit, minus 1.0, or a 3.55% increase in our example.

Ratio to       Number    Total      Current     Current     Proposed    Proposed
Average        of        Weekly     Minimum/    Total       Minimum/    Total
Weekly Wage    Workers   Wages      Maximum     Benefits    Maximum     Benefits
   (1)           (2)       (3)        (4)         (5)         (6)         (7)
<50% 14 $8,750 $843.75 $11,813 $1,125.00 $15,750
50-75% 48 $45,216 $843.75 $40,500 $1,125.00 $54,000
75-100% 54 $71,280 -- $53,460 $1,125.00 $60,750
100-133.3% 38 $67,450 -- $50,588 -- $50,588
133.3-175% 24 $55,800 -- $41,850 $1,500.00 $36,000
>175% 22 $64,350 $1,968.75 $43,313 $1,500.00 $33,000
Total 200 $312,846 $241,523 $250,088

(4) = Current Benefit Floor and Ceiling in Range
(5) = (2) * (4); else (3) * Compensation Rate
(6) = Proposed Benefit Floor and Ceiling in Range
(7) = (2) * (6); else (3) * Compensation Rate
Benefit Change: Total (7) / Total (5) - 1.0 = 3.55%
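
The same benefit change can be scripted. The sketch below (Python, illustrative only) uses the wage bands, 75% compensation rate, and SAWW from the example; it assumes the band boundaries line up exactly with the wages at which the minimum and maximum apply, as they do here.

```python
# Workers compensation benefit level change, following the two tables above.
SAWW, COMP_RATE = 1_500.0, 0.75
bands = [  # (low, high) as a ratio to SAWW, number of workers, total weekly wages
    (0.00, 0.50, 14, 8_750), (0.50, 0.75, 48, 45_216), (0.75, 1.00, 54, 71_280),
    (1.00, 4 / 3, 38, 67_450), (4 / 3, 1.75, 24, 55_800), (1.75, 99.0, 22, 64_350),
]

def total_benefits(min_ratio, max_ratio):
    min_ben, max_ben = min_ratio * SAWW, max_ratio * SAWW    # benefit dollars
    min_wage = min_ben / COMP_RATE / SAWW                    # wage/SAWW where the minimum applies
    max_wage = max_ben / COMP_RATE / SAWW                    # wage/SAWW where the maximum applies
    total = 0.0
    for low, high, n, wages in bands:
        if high <= min_wage + 1e-9:    total += n * min_ben        # minimum benefit binds
        elif low >= max_wage - 1e-9:   total += n * max_ben        # maximum benefit binds
        else:                          total += wages * COMP_RATE  # unconstrained benefit
    return total

current = total_benefits(0.5625, 1.3125)     # ~241,523
proposed = total_benefits(0.7500, 1.0000)    # ~250,088
print(f"benefit change: {proposed / current - 1.0:+.2%}")  # about +3.55%
```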

Appendix D gives the example of Medical Benefit level adjustments. The example finds the medical benefit level
change as the Medical Fee Schedule Change * % of medical losses subject to the Fee Schedules + Other Medical
Level Change * % of medical losses not subject to the Fee Schedules.

Once we have estimated the effect of the benefit changes we must apply it to historical losses to adjust them to
future levels. The mechanics to do this are nearly identical to the parallelogram method with benefit level
changes akin to rate changes. Remember, the parallelogram method assumes a uniform distribution within
each segment (AY, CY, etc.), which may not be appropriate if losses are affected by seasonality, which is common
in loss data. As such, when adjusting losses for benefit level changes we will generally use quarterly losses to
minimize this effect.


Using similar logic as when on-leveling premiums, the adjustment factor to adjust losses to the current benefit
level is the current loss level divided by the average loss level of the historical period.

Benefit Level Adjustment = Current Loss Level / Average Loss Level in Historical Period

In this example we will assume there was only one benefit level change during our experience period and it
had a +10.0% effect on benefits. Therefore, prior to the change the loss level is 1.00 while after the change the
loss level is 1.10 (also the current loss level). Next, we will proceed exactly as we would when using the
parallelogram method with premiums, except here we are using quarterly loss data (annual data is still
sometimes used on the exam).

Assuming we are evaluating accident quarter loss data and the benefit level change affects losses on policies
that went into effect after the change was implemented, our graph would look like the following. We already
know the loss levels before and after the change, so we must now calculate the amount each level contributes
to each accident period (represented by the areas in each quarter). Looking at AY 16 Q3 we can calculate the
area after the change within the period as the average of the heights 6/12 and 3/12 multiplied by the width
3/12, equaling 0.09375. Since we know the total area in the quarter is 0.25 we can simply find the area prior
to the change in benefit level as 0.25 – 0.09375 = 0.15625. The average loss level in AY Q3 is then:
AY16 Q3 Loss Level = 1.00 * (0.15625 / 0.25) + 1.10 * (0.09375 / 0.25) = 1.0375

The Benefit Level Adjustment Factor for AY 16 Q3 is then the current factor of 1.1 divided by 1.0375 (the
average factor in the period) = 1.0602. This means AY16 Q3 losses must be multiplied by a factor of 1.0602 to
bring them up to the current benefit level. Other quarters would be calculated similarly.
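
A small Python sketch of that area calculation follows; the 4/1/16 change date is an assumption made here because it reproduces the 3/12 and 6/12 heights quoted above (annual policies, +10% change applying to policies written after the change).

```python
# Benefit level adjustment for an accident quarter when the change applies to policies
# written on or after the change date (the diagonal-line case).
def quarter_adjustment(q_start, q_end, policy_term=1.0, change_factor=1.10):
    # q_start / q_end are measured in years after the change date. The height of the
    # "new level" wedge at time t is min(t / policy_term, 1): the share of currently
    # earning exposure that comes from policies written since the change.
    h0 = min(max(q_start, 0.0) / policy_term, 1.0)
    h1 = min(max(q_end, 0.0) / policy_term, 1.0)
    width = q_end - q_start
    area_new = 0.5 * (h0 + h1) * width                 # 0.09375 for AY16 Q3
    area_old = width - area_new                        # 0.15625 for AY16 Q3
    avg_level = (1.0 * area_old + change_factor * area_new) / width
    return change_factor / avg_level                   # factor to current benefit level

# AY16 Q3 runs from 0.25 to 0.50 years after an assumed 4/1/16 change:
print(round(quarter_adjustment(0.25, 0.50), 4))        # 1.0602, as above
```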

If instead we had the same +10.0% change, but the change affected all losses going forward regardless of
when the policy was written (e.g., a court case or new law), the change would be represented by a vertical line.
This actually makes calculating the areas easier when evaluating accident quarter losses. In the following
graph we assume the change went into effect on 6/15/16. The average loss level for AY16 Q1 would be 1.0, so
the adjustment factor would be 1.10 for that period. For AY16 Q3 and subsequent quarters the change would be
fully in effect, so the adjustment factor would be 1.0. For AY16 Q2 one half of the area is before the change while
one half is after the change, so the average loss level is 1.05, resulting in an adjustment factor of 1.0476
(1.10/1.05).


If we are evaluating policy years the mechanics are the same, the shapes are just different. The following
graph depicts a benefit level increase that affects policies written after the change. We would calculate the
areas, average loss levels, and benefit level adjustment factors as before.

Assuming we are evaluating policy years and there is a benefit level change that affects all losses after it goes
into effect, the geometry is slightly trickier, but the steps are the same as before. An example is shown visually
in the following graph.


Companies can supplement their own analysis of benefit level changes with NCCI estimates of benefit level
changes at the state level.

Loss development

Our rate indication needs to be based on ultimate claim amounts (rather than simply reported or paid levels).
As such we need to develop losses to their ultimate value. This topic alone makes up the other half of the
syllabus. The ratemaking text simply covers the chain ladder (development) method as an example. Please do
not take this to mean you can always simply use the chain ladder method for ratemaking purposes (except on the
exam, unless otherwise asked); you should understand what is happening in your data and use the appropriate
methods to accurately estimate ultimate claims for ratemaking purposes as well. As this is covered extensively
in the reserving text, and there is nothing new in the ratemaking text, I will not cover this here.

Loss trend

After developing losses to their ultimate level, we also need to trend them to the prospective period. As an
example, the ultimate loss estimate for AY 2016 is our estimate of ultimate losses on claims occurring in 2016—
for the rate indication we want an estimate of ultimate losses occurring in the prospective period. As such, we
need to trend the AY 2016 ultimate loss estimate (losses occurring in 2016) to the level of losses occurring in
the prospective period.

Loss trend will account for both the change in costs (e.g., monetary inflation (+), increasing medical costs (+),
and improved safety (-)) and in distributional changes in the book of business (e.g., the effect on loss trend will
be positive if the book of business has moved toward higher risk insureds). To better understand the underlying
causes of loss trend we often want to analyze frequency trend and severity trend separately—total loss (pure
premium) trend would then be [(1+frequency trend) x (1+severity trend)-1.0].

Historically some actuaries have argued that developing and trending historical losses are overlapping
adjustments. This is incorrect and called the overlap fallacy. As we have seen, loss development simply takes
losses to their ultimate value (at historic levels) while loss trend projects losses from historic levels to
prospective levels, therefore the adjustments do not overlap.

Loss trend selection

The number of years used, and the type of loss (e.g., AY v CY, Paid v. Reported) is based on actuarial judgement.
For very short tailed and stable lines of business, such as auto physical damage, we may use CY paid losses (CY
losses are not subject to development) since they are readily available and paid losses are not subject to
changes in case reserving practices. For long tailed volatile lines of business we may want to examine AY
ultimate losses adjusted for benefit level changes.

When choosing the number of years to use we need to balance responsiveness (using recent years) with
stability (using more years). We also need to be aware of the cyclical nature of insurance, the random noise
present in the data, as well as considering seasonality, catastrophe/shock losses, and changes in benefit levels
when making our selections.

We can measure trend using linear or exponential fits. Linear trend results in a fixed change per period (e.g.,
+$50/yr) and has the weakness that projected values can fall below zero if the trend is negative. Exponential trend
is more traditional and is expressed as a percent change per period (e.g., +3.0%/yr).

The following is an example trend calculation using quarterly data (or more specifically, 12-month periods
ending each successive quarter). We will compare trend rates for frequency, severity, and pure premium year
over year. For example, the first observed frequency change of -2.6% is comparing the frequency of the 12-
month period ending March 2015 to the 12-month period ending March 2014. The exponential trend fit over
various periods is shown for each. Generally, we will select a frequency and severity trend, then find the pure
premium trend as the product of 1.0 plus each, minus 1.0. We calculate the pure premium trend below to ensure
our selected pure premium trend found by combining our frequency and severity trends is reasonable, though


there may be situations where we want to select the pure premium trend directly. Using the data below
we may select a -1.5% frequency trend and a +4.5% severity trend, resulting in a 2.9% pure premium trend, which
certainly seems reasonable given the observed pure premium trends.

12-Month Closed
Period Earned Claim Paid Annual % Annual % Pure Annual %
Ending Exposure Counts Losses Frequency Change Severity Change Premium Change
(1) (2) (3) (4) (5) (6) (7) (8) (9)
Mar-2014 151,245 7,633 $12,954,739 0.0505 $1,697.20 $85.65
Jun-2014 151,996 7,569 $13,090,656 0.0498 $1,729.51 $86.13
Sep-2014 152,750 7,616 $13,066,999 0.0499 $1,715.73 $85.55
Dec-2014 153,508 7,596 $13,379,143 0.0495 $1,761.34 $87.16
Mar-2015 154,270 7,585 $13,372,432 0.0492 -2.6% $1,763.01 3.9% $86.68 1.2%
Jun-2015 155,036 7,663 $13,654,021 0.0494 -0.7% $1,781.81 3.0% $88.07 2.3%
Sep-2015 155,805 7,720 $13,989,264 0.0495 -0.6% $1,812.08 5.6% $89.79 5.0%
Dec-2015 156,578 7,638 $14,054,911 0.0488 -1.4% $1,840.13 4.5% $89.76 3.0%
Mar-2016 157,355 7,723 $14,087,364 0.0491 -0.2% $1,824.08 3.5% $89.53 3.3%
Jun-2016 158,136 7,757 $14,481,620 0.0491 -0.8% $1,866.91 4.8% $91.58 4.0%
Sep-2016 158,921 7,774 $14,591,491 0.0489 -1.3% $1,876.96 3.6% $91.82 2.3%
Dec-2016 159,710 7,845 $14,812,304 0.0491 0.7% $1,888.12 2.6% $92.75 3.3%
Mar-2017 160,503 7,860 $14,770,449 0.0490 -0.2% $1,879.19 3.0% $92.03 2.8%
Jun-2017 161,300 7,728 $15,213,655 0.0479 -2.3% $1,968.64 5.4% $94.32 3.0%
Sep-2017 162,101 7,776 $15,282,396 0.0480 -1.9% $1,965.33 4.7% $94.28 2.7%
Dec-2017 162,905 7,865 $15,648,491 0.0483 -1.7% $1,989.64 5.4% $96.06 3.6%
Mar-2018 163,713 7,861 $15,566,159 0.0480 -1.9% $1,980.18 5.4% $95.08 3.3%
Jun-2018 164,525 7,810 $15,883,244 0.0475 -0.9% $2,033.71 3.3% $96.54 2.4%

Column Notes:
(4) = (2) / (1)
(5) = (4) / (4) Prior Year
(6) = (3) / (2)
(7) = (6) / (6) Prior Year
(8) = (3) / (1)
(9) = (8) / (8) Prior Year

              Number       Frequency          Severity           Pure Premium
              of Points    Exponential Fit    Exponential Fit    Exponential Fit
18 Point          18          -1.1%               4.2%               3.0%
12 Point          12          -1.3%               4.2%               2.9%
 8 Point           8          -1.7%               4.7%               2.9%
 6 Point           6          -1.6%               5.0%               3.3%
 4 Point           4          -1.5%               4.0%               2.5%
Selected                      -1.5%               4.5%               2.9%

Selected Pure Premium Trend: (1.0 - 0.015) * (1 + 0.045) - 1.0 = 2.9%
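
The exponential fits in the table can be reproduced with a simple log-linear regression. The sketch below (Python/numpy, illustrative only) fits the frequency column; results are approximate because the table shows rounded frequencies.

```python
import numpy as np

def annual_exponential_trend(values, years_between_points=0.25):
    # Fit ln(y) = a + b*t and convert the slope to an annual percent change;
    # successive 12-month-ending periods are 0.25 years apart.
    t = np.arange(len(values)) * years_between_points
    slope = np.polyfit(t, np.log(values), 1)[0]
    return np.exp(slope) - 1.0

freq = [0.0505, 0.0498, 0.0499, 0.0495, 0.0492, 0.0494, 0.0495, 0.0488, 0.0491,
        0.0491, 0.0489, 0.0491, 0.0490, 0.0479, 0.0480, 0.0483, 0.0480, 0.0475]
print(f"18-point: {annual_exponential_trend(freq):+.1%}")       # about -1.1%
print(f" 6-point: {annual_exponential_trend(freq[-6:]):+.1%}")  # about -1.6%

# Pure premium trend implied by the frequency and severity selections:
print(f"{(1 - 0.015) * (1 + 0.045) - 1:+.1%}")                  # about +2.9%
```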

When selecting loss trend rates we will want to use losses excluding catastrophe/shock losses and adjusted to
current benefit levels. If CATs are included in the data they will distort our trend analysis (overstating trend
while the CAT period sits in the recent half of the data and understating it once the CAT period ages into the
older half). If we cannot remove CATs we will want to
judgmentally account for them when making our trend selection.

Similarly, if changes to benefit levels are not adjusted for in our data they will distort trend rates (i.e., the trend
will also pick up the change in benefit level). Ideally, we want our data to be adjusted to current benefit levels,
and our trend selections based on this adjusted data. If we select trend rates based on data unadjusted for
benefit level changes, but apply them to data adjusted for benefit level changes we will double count for benefit
changes since both our trend selection and the data that it is applied to will account for the change in benefit
level (similarly we will not account for changes at all if selecting trend based on adjusted data but apply to
unadjusted data).

Trend selections can also be supplemented with multi-state, countrywide, or industry insurance data, or non-
insurance indices (e.g., CPI or average wage levels).


When using calendar year loss data to measure trend, as we did above, the underlying assumption is that the
book is not significantly growing or shrinking. That is because the losses paid in a given period will come from
losses incurred over a long period—for example losses paid in the 12 months preceding December 2015 could
potentially include payments on claims incurred decades ago for long tailed lines of business. The earned
exposures in the same period will come from policies written over the prior 24 months (i.e., a policy written 24
months ago is the boundary for policies contributing earned exposure to the last 12-month calendar year). If the size of the book of
business is steady, or losses are very short tailed, this mismatch is immaterial, but if the book is growing or
shrinking significantly there can be big issues with the prior calculation. For example, if the book is shrinking,
the exposures in older prior periods producing many of the losses now being paid will be larger than the
exposures in the last 24 months that we are using as the denominator in our calculation (and vice versa when
growing).

If the book of business is rapidly growing or shrinking we could use econometric techniques or GLMs to
measure trend. Alternatively, we could use accident year data to provide a better match, but if using accident
year data we must develop losses to ultimate before measuring trend, thus introducing some subjectivity into
the calculation. Another alternative is to split each calendar year’s payment by accident year and match them
to the exposures that produced them. For example, the CY16 pure premium would be the CY16 losses from
AY16 divided by CY16 exposures plus CY16 losses from AY15 divided by CY15 exposures plus CY16 losses from
AY14 divided by CY14 exposures, and so on.

Loss trend period

Now that we have a selected trend rate we must apply it to our historical data. This process is very similar to
applying premium trend to historic premium. When trending historic premium we trended from either the
average written date of the historic period to the average written date of the prospective period or the average
earned date of the historic period to the average earned date of the prospective period. When trending historic
losses we want to trend from the average loss occurrence date in the historic period to the average occurrence
date in the prospective period. The average loss dates correspond to the average earned dates. The average
occurrence date within the historic period will vary depending on the type of data we are using in the historic
period (accident year, or policy year) while the average occurrence date of the prospective period will depend
on the length of time the rates are expected to be effective and the term of the policy.

For historical loss data on an accident year basis, the average occurrence date is simply the mid-point of the
period since we assume accidents occur uniformly throughout the period (i.e., the average accident date of
accidents that occurred in the period is assumed to be the mid-point). So, for example, the average occurrence
date of AY 2016 is 7/1/16.

For historical data on a policy year basis, the average accident date is assumed to be the average of the date the
first policy is written and the date the last policy expires, since we again assume accidents occur uniformly throughout
the period, or equivalently the beginning of the period plus one half of the sum of the effective (writing) period and
the policy term.

The average accident date of the prospective period (which will be shaped like a policy year, though it is not
necessarily a 12-month period and will start on the effective date of the rate revision) is assumed to be the
average of the date the first policy is written and the date the last policy expires or, equivalently, the beginning of
the period plus one half of the sum of the effective period and the policy term (again, assuming accidents occur
uniformly throughout the period).

Assuming we are using accident year data, annual policies, and our prospective rates will go into effect 7/1/18
with the expectation they will be active for 12 months, our trend period for AY16 is represented in the following
graph. The trend from date is the mid-point of the accident year, or 7/1/16. The trend to date is the average
accident date of the prospective period, which is the average of the first written and last expiring policy in the
period. This is 7/1/19 in this example, using 12-month policies. Equivalently, the trend to date is the beginning
of the policy period plus one half of the expected effective period and the policy term—which would be 7/1/18
plus (12+12)/2 = 12 months in this example, which is 7/1/19 as before.


If instead we have 6-month policies, and everything else is the same, we have a slightly shorter trend period
because the average accident date of the prospective period is earlier since policies are not in-force as long. In
this case the trend to date would be 4/1/19 which is the average of the first written date of 7/1/18 and the
date the last policy expires, 12/31/19. The length of the policies does not affect the trend from date when using
accident year data because the cohort of claims is still all claims occurring in the year, so the mid-point would
still be the average accident date (it would affect the average written date, but we are not interested in that
when looking at losses). Equivalently, the trend to date is the beginning of the policy period plus one half of the
expected effective period and the policy term—that would be 7/1/18 plus (12+6)/2 = 9 months in this example,
which is 4/1/19 as before.
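
The trend period arithmetic for accident year data can be summarized in a few lines. The sketch below (Python, illustrative only) works in fractional years and uses the 7/1/18 effective date and 12-month effective period from the examples above.

```python
# Trend period for accident year losses: from the AY mid-point to the average accident
# date of the prospective period, i.e. the start of the prospective period plus one half
# of (effective period + policy term).
def trend_length_years(ay_start, effective, months_in_effect, policy_term_months):
    trend_from = ay_start + 0.5                                     # AY mid-point
    trend_to = effective + (months_in_effect + policy_term_months) / 24.0
    return trend_to - trend_from

# AY16 (mid-point 7/1/16), rates effective 7/1/2018 (= 2018.5) for 12 months:
print(trend_length_years(2016, 2018.5, 12, 12))   # 3.0 years (trend to 7/1/19)
print(trend_length_years(2016, 2018.5, 12, 6))    # 2.75 years (trend to 4/1/19)
```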

If instead of using accident year losses we were using policy year losses the trend period for PY16 would look
like the following graph, assuming annual policies. With the same effective date and assumption that the new
rates will be in effect for 12-months the prospective policy period would look the same as the first example,
having the same 7/1/19 average accident date (trend to date). The average accident date of PY16 will be the
average of the first written (1/1/16) and last expiring dates (12/31/17), or 1/1/17—also equal to the start
date (1/1/16) plus one half of the effective period (12 months) and policy term (12 months).


Using the same dates, but instead assuming policies are 6-months, the trend period would look like the
following graph. The average accident date in PY16, assuming 6-month policies, would be the average of the
first written (1/1/16) and last expiring (7/1/17) dates, which is 10/1/16. The average accident date in the
prospective period, assuming a 7/1/18 effective date, 12-month effective period, and 6-month policies would
be the average of the first written date (7/1/18) and last expiring date (12/31/19), or 4/1/19. The trend period
would then be 2.5 years which is the same length of time found assuming 12-month policies. This is because
the average accident date in each period is 0.25 years earlier when using 6-month policies than 12-month
policies. Both trend dates can also be found as the beginning of the period plus one half the effective period and
policy term.

As with premium trend, there are situations where it is appropriate to use two step trending with losses.
Examples include legislative or macroeconomic changes that cause trend rates to change, or significant sudden
changes in the book of business. The two-step trending mechanics for losses are similar to those for premiums (using
traditional trending in the first step)—we first trend to the mid-point of the latest data point in the trend data
using the selected trend rate, then trend to the prospective period using the prospective trend rate. The latest
data in the text example is the 12 months ending in Q4 (rather than a true quarter as in the premium example),
so we trend to 7/1. If the data had been true quarterly data (like the premium example), we would have trended
to 11/15. In this example the latest block of data is AY17, so when two-step trending AY16 we would first trend
1.0 year from the average accident date of AY16 to the average accident date of AY17, using the selected historic
trend rate. We would then trend the remaining 2.0 years to the average accident date of the prospective period
(assuming a 7/1/18 effective date, 12-month effective period, and 12-month policies) using the selected
prospective trend rate.

To the extent losses do not occur uniformly throughout the period we will want to measure this effect and
change our average accident dates accordingly.

It is also important to realize that calendar year loss data does not have a standard assumed average accident
date like accident year or policy year loss data. This is because the losses paid or incurred during a CY will come
from many prior accident and policy years—the extent of the lag depends on how long tailed the line of business
being examined is. For long tailed lines of business CY losses can come from accidents that occurred decades
ago, while CY losses in short tailed lines will generally come from recent accident dates.


Leveraging effect of limits on severity trend

If the losses we are evaluating are subject to a limit we must be aware of the effects trend has on various layers
of loss. If for example, severity trend is 5.0% overall (on unlimited ground up losses) and losses are limited, the
loss trend in this limited layer will be less than 5.0% while the trend in the excess layer will be greater than
5.0%. This is because only the portion of the trend increase below the limit affects the primary layer, while the
entire increase from trend above the limit goes into the excess layer, in addition to the increase from the
primary layer that has now pierced into the excess layer. Therefore, if we select trend based on unlimited losses,
but want to apply it to limited losses we must first adjust it down. Alternatively, we can select trend rates based
on limited data.

The following example illustrates this leveraging effect of limits on severity trend, using a 10% trend rate and
a $25,000 limit. We are given 6 sample claims along with their loss amounts in Col (1). From that we can
calculate the losses capped at the limit, and losses excess the limit. In Col (4) we calculate the trended losses—
applied to ground-up losses since trend affects the full amount of loss. Again, we can calculate the losses capped
at the limit, and losses excess the limit after trend. In Cols (7) – (9) we calculate the actual trend experienced
for ground-up losses, limited losses, and excess losses. As you can see, the ground up losses experience the 10%
overall trend, but the limited losses experience a much lower trend while the excess losses experience a much
higher trend. This is because much of the experienced trend breached the limited layer into the excess layer—
of the $2,450 trend increase to claim number 3 only $500 went to the primary layer while $1,950 went to the
excess layer, while the entire trend amount for claims 4-6 went to the excess layer. Only claims 1, 2, and a little
bit of 3 contributed to the trend in the primary (limited) layer.

                                                         Trended                               Trend in Layer
Claim    Total Limits  Capped at   Excess    Total Limits  Capped at   Excess    Total Limits  Capped at   Excess
Number       Loss       $25,000    Losses        Loss       $25,000    Losses        Loss       $25,000    Losses
              (1)         (2)        (3)          (4)          (5)       (6)          (7)          (8)       (9)
1 $10,000 $10,000 $0 $11,000 $11,000 $0 10.0% 10.0% --
2 $15,000 $15,000 $0 $16,500 $16,500 $0 10.0% 10.0% --
3 $24,500 $24,500 $0 $26,950 $25,000 $1,950 10.0% 2.0% ∞
4 $28,000 $25,000 $3,000 $30,800 $25,000 $5,800 10.0% 0.0% 93.3%
5 $32,500 $25,000 $7,500 $35,750 $25,000 $10,750 10.0% 0.0% 43.3%
6 $60,000 $25,000 $35,000 $66,000 $25,000 $41,000 10.0% 0.0% 17.1%
Total $170,000 $124,500 $45,500 $187,000 $127,500 $59,500 10.0% 2.4% 30.8%

Column Notes:
(1) = Given (6) = (4) - (5)
(2) = Min[(1), $25,000] (7) = (4) / (1) - 1.0
(3) = (1) - (2) (8) = (5) / (2) - 1.0
(4) = (1) *1.10 (9) = (6) / (3) - 1.0
(5) = Min[(4), $25,000]
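
A compact Python sketch of the leveraging calculation, using the six claims, $25,000 limit, and 10% trend from the table (illustrative only):

```python
LIMIT, TREND = 25_000, 1.10
claims = [10_000, 15_000, 24_500, 28_000, 32_500, 60_000]

def layer_totals(losses):
    capped = sum(min(x, LIMIT) for x in losses)
    return sum(losses), capped, sum(losses) - capped   # ground-up, limited, excess

g0, c0, e0 = layer_totals(claims)
g1, c1, e1 = layer_totals([x * TREND for x in claims])
print(f"ground-up {g1/g0 - 1:+.1%}, limited {c1/c0 - 1:+.1%}, excess {e1/e0 - 1:+.1%}")
# ground-up +10.0%, limited +2.4%, excess +30.8%, matching the totals above
```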

Above we assumed severity trend was positive; the opposite relationship holds if severity trend is negative
(Primary layer trend ≥ Unlimited trend ≥ Excess layer trend).

Deductibles have a similar effect on severity trend, but note that deductibles eliminate the “primary” layer, so
we are left with the “excess” layer that will incur higher trend rates than the unlimited trend (assuming positive
trend rates).


Coordinating exposure, premium, and loss trends

It is important to make sure all components of the rate indication are trended consistently—this is generally
rather simple as we trend each to the prospective period. Where this becomes slightly tricky is when exposures
are inflation sensitive (e.g., payroll for workers compensation or sales for general liability). When exposures
are inflation sensitive they can distort our measure of frequency trends (e.g., an actuary will not necessarily get
injured on the job more often today than 10 years ago simply because the average wage is higher now than 10
years ago). If exposures are inflation sensitive we want to measure frequency relative to exposures that have
been trended to current levels.

Occasionally, rather than evaluating premium, exposure, and loss trend separately, the actuary will examine
the net loss ratio (or pure premium) trend. Net loss ratio trend measures the trend in adjusted losses
(developed to ultimate adjusted for benefit level changes and extraordinary losses but not trended) to adjusted
premium (on-level but not trended). This trend selection will capture the net effect of loss trend over premium
trend. The weakness of this method is the ratios will often be volatile and we lose the understanding of the
underlying drivers of change (frequency v. severity v. premium).

Loss adjustment expenses (LAE)

Claims related expenses (loss adjustment expenses) are the sum of allocated loss adjustment expense (ALAE)
and unallocated loss adjustment expense (ULAE). ALAE includes expenses that can be allocated to individual
claims, such as legal fees paid to outside counsel hired to defend a specific claim. ULAE includes expenses that
cannot be allocated to individual claims, such as the salaries of claims department personnel.

Unfortunately, the definitions of what specifically is included in ALAE and ULAE can, and does, vary by
company. In financial reporting, there are hard definitions for expense allocation—LAE is categorized into
either Defense and Cost Containment (DCC) or Adjusting and Other (A&O), which have uniform accounting
standards for what is included in each. Very roughly speaking, ALAE and DCC are similar, while ULAE and A&O
are similar.

ALAE may be developed to ultimate together with loss, or separately (generally separately in commercial lines).
The ratemaking text does not go into depth on estimating ultimate ALAE and ULAE, but the methods from
the reserving text apply here the same as they did when estimating ultimate losses. As with ultimate losses, we
also want to trend ultimate ALAE and ULAE to the prospective period if we estimate a dollar amount (if
estimating LAE as a percentage of loss, make sure to apply that percentage to the prospective losses).

The ultimate amounts of LAE will be included with ultimate losses in our rate indication.


Basic Ratemaking

Chapter 7 – Other Expenses and Profit


Introduction

As introduced in Chapter 1, the goal of ratemaking is to balance the fundamental insurance equation during the
prospective period in which the rates will be in effect. The ratemaking process requires the historical data of
each of the components of the fundamental insurance equation to be restated to the level expected during the
time period the rates will be in effect. This chapter covers the underwriting expense and underwriting profit
components.

Premium = Losses + LAE + UW Expenses + UW Profit

Specifically, this chapter covers:

· How to estimate projected underwriting expenses


· How to account for the costs of reinsurance
· How to incorporate underwriting profit in the rates

Simple Example

Using the notation defined previously, and values adjusted to the prospective period, we can solve for the
average premium needed to balance the fundamental insurance equation during the period the new rates will
be in effect:

P = L + E_L + E_F + (V * P) + (Q_T * P)

P − (V * P) − (Q_T * P) = L + E_L + E_F

P * (1 − V − Q_T) = L + E_L + E_F

P = (L + E_L + E_F) / (1 − V − Q_T)

P / X = [(L + E_L + E_F) / X] / (1 − V − Q_T)

P/X = (L/X + E_L/X + E_F/X) / (1 − V − Q_T)

where P = premium, L = losses, E_L = LAE, E_F = fixed UW expenses, V = variable UW expense provision,
Q_T = target UW profit provision, and X = exposures.

Congratulations, we have just derived the pure premium rate indication formula that we will see in the next
chapter! The above formulation assumes there are both fixed dollar amount (E_F) and variable percentage of
premium (V) underwriting expenses. Assuming we are told the average expected loss and LAE per exposure
(L + E_L) is $250, the fixed UW expense (E_F) is $25, the variable UW expense (V) is 15%, and the target profit
provision (Q_T) is 5%, we can calculate the indicated average premium in the prospective period as:

Average Premium = (L + E_L + E_F) / (1 − V − Q_T) = ($250 + $25) / (1 − 0.15 − 0.05) = $343.75
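
The worked example reduces to a one-line calculation; a Python sketch (illustrative only):

```python
def indicated_avg_premium(pure_premium_incl_lae, fixed_expense_per_exposure,
                          variable_expense_pct, target_profit_pct):
    # Pure premium indication: (pure premium + fixed expense) / (1 - V - Q_T)
    return (pure_premium_incl_lae + fixed_expense_per_exposure) / (
        1.0 - variable_expense_pct - target_profit_pct)

print(round(indicated_avg_premium(250, 25, 0.15, 0.05), 2))  # 343.75
```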

This chapter will focus on determining the fixed UW expense provision (E_F), the variable UW expense provision
(V), and the target profit provision (Q_T; the T subscript is for “target”).


Underwriting expense categories

Commissions and Brokerage: are the amounts paid to agents or brokers as compensation for generating
business. They are generally a percentage of premium and often vary between new and renewal business, by the
volume of business generated, and/or by the quality of the business written.

Other Acquisition: are the costs of acquiring business (other than commissions and brokerage fees) including
media advertising, mailings, and salaried sales employees (non-commissioned).

Taxes, Licenses, and Fees: includes all taxes and miscellaneous fees paid by the insurer (excluding federal
income tax). Examples include premium taxes and licensing fees.

General: are the remaining costs of the insurance operations and other miscellaneous costs. General expenses
include items such as the home office and personnel salaries.

Further, we will generally want to divide expenses between those that vary with the amount of premium
(variable expenses) and those that do not (fixed expenses). As such, fixed expenses will be measured per
exposure and variable expenses will be measured per premium dollar. Typically, a portion of both General
expenses and Other Acquisition is considered fixed, as well as the full amount of Licenses and Fees.
Taxes and Commissions and Brokerage are considered variable expenses.

We can measure these fixed and variable expenses using state specific data or countrywide data for the line of
business being examined. Taxes, Licenses, and Fees will generally vary by state, so we will want to use state
specific expenses in these categories. General and Other Acquisition are often assumed to be uniform across all
locations, so we often use countrywide data. If a company’s Commission and Brokerage fees vary by state we
should use statewide data, otherwise we can use countrywide data (this is company specific). The Insurance
Expense Exhibit (IEE) contained in the Annual Statement includes a significant amount of expense information
but is often not granular enough to use for most lines of business rate indications (e.g., homeowners, renters,
and mobile home data is combined). In such situations we want to get more specific data, but must also consider
the time and cost associated with obtaining more granular expense data.

Total expenses, and expenses by category, can vary significantly by line of business (e.g., commercial v. personal
LOBs) and market channel (direct v. agency).

Expenses will generally be measured against premiums (or exposures) on a calendar year basis. If the expense
is incurred at the beginning of the policy period we want to measure it relative to written premium as written
premium reflects the premium at the beginning of the policy. Likewise, if the expense is incurred throughout
the life of the policy period we want to measure it relative to earned premium as earned premium reflects the
premium earned throughout the period.

Commissions and Brokerage, Other Acquisition, and Taxes, Licenses, and Fees are incurred when the policy is
written, therefore they are measured against written premium (or exposure). General expenses are assumed
to be incurred evenly throughout the policy, thus are measured against earned premium (or exposure). If the
company’s volume is steady, the choice of written or earned premium isn’t very material since they will be
approximately equal. If the line of business is growing (or shrinking) it is important to use an appropriate
premium measure to best match the type of expense.

The following table summarizes the discussion thus far in this section:

Expense Category              Type                                   Data Used     Measured Against
Commissions and brokerage     Variable                               CW or State   Written Prem/Exposure
Other acquisition             Mix of Variable & Fixed                Countrywide   Written Prem/Exposure
Taxes, licenses, and fees     Tax = Variable; Licenses & Fees = Fixed   Statewide     Written Prem/Exposure
General                       Mix of Variable & Fixed                Countrywide   Earned Prem/Exposure


Expense projection methods

In this section we will examine three methods for measuring and projecting prospective expenses. These
methods are:

· All variable expense method


· Premium based projection method
· Exposure/Policy based projection method

All variable expense method

Historically actuaries have treated all expenses as variable (i.e., a constant percentage of premium) and
assumed prospective expense ratios would be equal to historic ratios. Many commercial lines of business still
use this method since expenses are dominated by variable expenses, so differentiating between fixed and
variable is immaterial.

To calculate expense ratios, we look at calendar year expenses divided by calendar year premium. In this
example we are looking at Commission and Brokerage expenses, so we use state specific data, and written
premium. The type of calendar year data (state v. countrywide, written v. earned) used will depend on the
expense category being evaluated (see previous table).

CY CY CY 3-Yr Wght.
Commission and Brokerage 2014 2015 2016 Average Selected
a State Expenses $251,819 $264,410 $274,630
b State Written Premium $2,098,712 $2,253,648 $2,250,830
c Expense Ratio[(a)/(b)] 12.0% 11.7% 12.2% 12.0% 12.0%

Note:
Blue = Given
Red = Selected

Since we are projecting prospective expenses, we want to select the ratio we expect to occur in the future. We
will take into account management input, judgement, and knowledge of any changes occurring in the book of
business (e.g., change in commission structure, gains in productivity), so our selection may differ from the
historic ratio.

For all three expense projection methods we will discuss, judgement must be used when deciding whether to
include or exclude extraordinary one-time expenses. We must decide if the expense should be charged for in
rates, and if so, should we smooth it over several years. Excluding the expense from the rate calculation
essentially funds the expense through current surplus. There are also state laws restricting items that can be
included in the expense calculation (e.g., charitable contributions, lobbying expenses).

We repeat this process for each of the other expense categories (i.e., General, Other Acquisition, and Taxes,
Licenses and Fees) then sum up the selected ratios to find the total expense ratio.

Since this method assumes all expenses are variable, and there are certainly some fixed expenses in reality, this
method will overstate the premium needed for large risks and understate the premium needed for small risks
(since charged expenses will vary directly with premium). Companies can account for this shortcoming by
implementing a premium discounting structure that reduces the expense loads based on amount of premium
(common in workers compensation and covered more in Chapter 11).


Premium based projection method

The premium based projection method is a simple extension of the all variable expense method which allows
us to capture the fixed and variable expenses separately so they can be handled more appropriately in the rate
indication. The first steps are identical to the all variable expense method, but we then allocate a portion of the
selected expense ratio to fixed expenses and a portion to variable expenses, rather than assuming it is all variable.

In this example we will estimate the fixed and variable expense ratios for the General expenses. Remember, the
type of data we use will depend on the type of expense. Since General expenses are assumed to be incurred
throughout the life of the policy, we will measure it against earned premium (rather than written). Also, General
expenses are commonly assumed to be uniform by state, so we use countrywide data to find the ratios. Once
we have our data set up, the first three steps (a-c) are identical to the all variable expense method. The
enhancement to this method is that we split this selected expense ratio into fixed and variable buckets. The
split should be based on actual expense data, or informed judgement with input from management and other
departments. In this example we assume 75% of General expenses are fixed.

Once we have our assumed % fixed, we find the fixed expense ratio as the product of the selected expense ratio
and the % fixed. Similarly, the variable expense ratio is the selected expense ratio multiplied by one minus the
% fixed.

CY CY CY 3-Yr Wght.
General Expenses 2014 2015 2016 Average Selected
a Countrywide Expenses $50,587,317 $51,328,768 $53,231,642
b Countrywide Earned Premium $758,444,712 $782,125,252 $786,463,847
c Expense Ratio[(a)/(b)] 6.7% 6.6% 6.8% 6.7% 6.7%
d % Assumed Fixed 75.0%
e Fixed Expense % [(c )x(d)] 5.0%
f Variable Expense % [(c )x(1.0-(d))] 1.7%
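
A short Python sketch of the premium based projection for General expenses, using the countrywide figures and the 75% fixed assumption from the table (illustrative only):

```python
cw_expenses = [50_587_317, 51_328_768, 53_231_642]           # row (a)
cw_earned_premium = [758_444_712, 782_125_252, 786_463_847]  # row (b)
pct_fixed = 0.75                                             # row (d), assumed split

selected_ratio = sum(cw_expenses) / sum(cw_earned_premium)   # 3-yr weighted, ~6.7%
fixed_pct = selected_ratio * pct_fixed                       # ~5.0% of premium
variable_pct = selected_ratio * (1.0 - pct_fixed)            # ~1.7% of premium
print(f"{fixed_pct:.1%} fixed, {variable_pct:.1%} variable")
```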

Again, we repeat this process for each of the other expense categories (i.e., Commission and Brokerage, Other
Acquisition, and Taxes, Licenses and Fees) then sum up the selected ratios to find the total ratio. A full example
of this method can be seen in the expense ratio exhibit in Appendix A—I would highly recommend reviewing
that tab in the Excel files at this time.

If we need fixed expense dollars (rather than %) per exposure, as we do when using the pure premium rate
indication formula, we can multiply the total fixed expense ratio (summed across each expense categories) by
the projected average premium, resulting in a dollar amount of fixed expenses per exposure.

A weakness of this method is that it measures fixed expense dollars relative to premium. Actual fixed dollars
per exposure do not vary with premium, but this method uses a selected percentage of premium to estimate
this dollar amount, so if prospective premium levels are materially different than historical periods this
method’s estimate of fixed expense dollars in the prospective period will be inaccurate. Prospective rates may
be materially different than historical rates due to recent rate changes (the above premiums are not on-level,
though we could choose to instead use on-level premium), or changes in premium levels (i.e., shifting toward
higher or lower cost insureds).

An additional weakness of this method is if we use countrywide data, as we did above, we implicitly assume
the ratio of fixed expense to premium is the same by state, which likely isn’t true. Higher average premium level
states will be allocated more fixed costs per policy than lower average premium level states when they should
be the same. Each of these will lead to inaccurate estimates of prospective fixed expenses, distorting our rate
indication. Rather than making extensive adjustments to account for the shortcomings of estimating fixed
expenses using this method, we can estimate fixed expenses per exposure (or number of policies) as we will in
the next method.


Exposure/Policy based projection method

In this method, instead of using the selected % fixed to allocate the selected expense ratio as we did in the
previous method, we use the selected % fixed to first allocate expense dollars between fixed expense dollars
and variable expense dollars. Fixed expense dollars are then measured relative to exposures (or policies) while
variable expense dollars are measured relative to premium (as before). If fixed expenses are assumed to be
constant for each exposure then we use exposures, while if fixed expenses are assumed to be constant for each
policy we then use policies (e.g., the fixed costs of writing an auto policy may be the same regardless of whether
there is one exposure or six). Below we use exposures, but the mechanics are identical if we were to use policies
instead.

Below we will examine the Other Acquisition expense. Again, the type of data we use will depend on the
category of expense. Since Other Acquisition expenses are assumed to be incurred at the beginning of the policy,
we will measure it against written premium (rather than earned). Also, Other Acquisition expenses are
commonly assumed to be uniform by state, so we use countrywide data.

As stated above, this method first allocates expense dollars between fixed and variable amounts (rows c and f
respectively). Fixed dollars (c) are measured against written exposures (d) to find the fixed expense dollars per
exposure (e). Variable expenses are measured as a percentage of premium (same as in the prior methods)—
this calculation is shown in rows f-h below.

CY CY CY 3-Yr Wght.
Other Acquisition 2014 2015 2016 Average Selected
a Countrywide Expenses $4,483,223 $4,655,664 $5,038,930
b % Assumed Fixed 75.0%
c Fixed Expense $ [(a)x(b)] $3,362,417 $3,491,748 $3,779,198
d Countrywide Written Exposures 92,123 94,285 93,238
e Fixed Expense Per Exposure [(c)/(d)] $36.50 $37.03 $40.53 $38.02 $38.02
f Variable Expense $ [(a)x(1.0-(b))] $1,120,806 $1,163,916 $1,259,733
g Countrywide Written Premium $84,483,286 $87,450,346 $88,310,767
h Variable Expense % [(f)/(g)] 1.3% 1.3% 1.4% 1.4% 1.4%
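
The exposure based version of the same split, as a short Python sketch using the Other Acquisition figures above (illustrative only):

```python
cw_expenses = [4_483_223, 4_655_664, 5_038_930]              # row (a)
cw_written_exposures = [92_123, 94_285, 93_238]              # row (d)
cw_written_premium = [84_483_286, 87_450_346, 88_310_767]    # row (g)
pct_fixed = 0.75                                             # row (b), assumed split

fixed_per_exposure = sum(cw_expenses) * pct_fixed / sum(cw_written_exposures)  # ~$38.02
variable_pct = sum(cw_expenses) * (1 - pct_fixed) / sum(cw_written_premium)    # ~1.4%
print(f"${fixed_per_exposure:.2f} fixed per exposure, {variable_pct:.1%} variable")
```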

As always, we repeat this process for each expense category (i.e., Commission and Brokerage, General
Expenses, and Taxes, Licenses and Fees) then sum up the selected fixed dollars and variable ratios separately. A
full example of this method can be seen in the first expense ratio exhibit in Appendix B—I would highly
recommend reviewing that tab in the Excel files at this time (the trend piece is discussed next).

If we need fixed expenses expressed as a percentage of premium (as we will in the loss ratio indication method
introduced in the next chapter) we can divide the expected fixed expense per exposure (as calculated above)
by the prospective average premium.

Three weaknesses of this method are that (1) we still must judgmentally allocate expense dollars between fixed
and variable dollars, (2) when using countrywide data we allocate to each state based on the calculated countrywide
ratio while the actual ratio may vary by state, and (3) some expenses will be considered fixed but will actually vary
by characteristics not captured (e.g., new v. renewal business).

Expense Trend

In the examples above we used historical unadjusted/untrended data, so how does expense trend come into
play? We will expect expense dollars to increase with inflation, but using the first two methods and the variable
piece of the third method, we measure expense dollars as a constant percentage of premium, so our projected
expenses will automatically change as our premiums change, thus no trend is needed (this assumes expenses
change at the same rate as premiums).

Fixed expenses using the third method (exposure/policy based projection method) result in a dollar amount.
If the fixed dollar amount is relative to an inflation sensitive exposure base, we do not need to trend (this


assumes expenses trend at the same rate as the exposure base). If the exposure base is not inflation sensitive,
or we measure fixed expenses relative to policies, we will want to trend the fixed expense dollars into the
prospective period.

We can measure trend using internal expense data or using governmental agency data (e.g., CPI or other
indexes). We want to trend from the mid-point of the historical CY to either the average written date of the
prospective period (if we assume the fixed expenses are incurred when the policy is written) or to the average
earned date of the prospective period (if we assume the fixed expenses are incurred throughout the policy
period). In practice, and in Appendix B, we will often be trending fixed expenses that are a blend of both, so we
will make a simplifying assumption that expenses are incurred at the beginning of the period and trend to the
average written date. In Appendix B we simply trend from the mid-point of the historical CYs to the average
written date of the prospective period, resulting in round trend periods. For enhanced precision we could trend
each expense type separately, with trend length depending on when we expect the fixed expenses to be
incurred.

There is a good example of both the fixed expense trend selection and trend length in Appendix B. I highly
recommend you review that at this time. The expense trend selection is made as the Employment Cost Index *
% of expense used for Salaries and Employment Relations & Welfare + Consumer Price Index (All Items) * (1.0
- % of expense used for Salaries and Employment Relations & Welfare).

Reinsurance costs

As discussed in Chapter 6, reinsurance is purchased by primary insurers to manage the risk they retain on their
books. In proportional reinsurance contracts the same proportion of premium and loss is ceded (transferred to
the reinsurer), so we may not need to explicitly account for proportional reinsurance in our rate indication.

Non-proportional reinsurance on the other hand will need to be accounted for. We can do this by reducing the
expected losses by the expected amount of losses ceded to the reinsurers and reducing premiums by the amount
ceded. Alternatively, we could account for the net cost of reinsurance (ceded premium – expected ceded losses)
as an expense item in the overall rate indication.

Underwriting profit provision

Insurance companies undertake risk when they write insurance policies and they must hold capital to support
this risk. As such, they are entitled to earn a reasonable rate of return on this capital. Insurers earn profit in two
ways: investment income and underwriting profit.

Investment income

There are two main sources of investment income: earnings on invested capital, and earnings on invested
policyholder supplied funds.

Capital belongs to the owners of the insurance company (a.k.a. equity or policyholder surplus). These funds
are invested and earn investment income—whether these returns should be reflected in the ratemaking
process is subject to much disagreement (i.e., should policyholders be given credit for earnings on money that
isn’t theirs?).

Policyholder supplied funds include unearned premiums and loss reserves. Until premiums are earned they
still belong to the policyholders, but they can be invested by the insurance company to earn investment income.
Additionally, the insurer holds loss reserves (case + IBNR) that belong to the policyholders, but can also be
invested until they are paid out at a future date. For short tailed lines of business investment income on loss
reserves may be very small (due to the small timeframe), but for long tailed lines of business the investment
income on loss reserves may be large (due to the long timeframe they are invested prior to being paid out).

The projection of expected investment income is beyond the scope of the text and exam.


Underwriting profit

Underwriting profit is the profits generated from policies (excluding investment income). Re-arranging the
fundamental insurance equation we can see that underwriting profits are equal to premiums minus loss, LAE,
and underwriting expenses.

UW Profit = Premium − Losses − LAE − UW Expenses

Actual profit at the time a policy is sold is not known since we do not know what the actual loss, LAE, and
expenses will be at the time of sale.

Underwriting profit plus investment income represent the total profit earned by the insurance company.
Generally, the actuary will determine the underwriting profit needed to achieve the total profit goals after
investment income (e.g., if we want to earn 10% overall and expect 6% investment income we should aim to
earn 4% underwriting profit). Since investment income is larger for long tailed lines, we can accept a lower
underwriting profit (and in some cases an underwriting loss) than we can in short tailed lines of business while
still hitting our overall profit target.

Permissible loss ratios

There are two permissible loss ratios; the variable permissible loss ratio and the total permissible loss ratio.

The variable permissible loss ratio (VPLR) is the percentage of each premium dollar that is intended to pay for
projected loss, LAE, and fixed expenses (with the remaining portion intended to pay for variable expenses and
profit). Said another way, it is the Loss & LAE & Fixed Expense Ratio that balances our premium.

VPLR = 1.0 − Variable Expense % − Target Profit % = 1.0 − V − Q_T

The total permissible loss ratio (PLR) is the percentage of each premium dollar that is intended to pay for
projected loss and LAE (with the remaining portion intended to pay for variable expenses, fixed expenses, and
profit). Said another way, it is the Loss & LAE Ratio that balances our premium.

PLR = 1.0 − Total Expense % − Target Profit % = 1.0 − F − V − Q_T

The variable permissible loss ratio is of particular interest since it is the denominator of both of our rate
indication formulas (the first is on page 1 of this chapter while the other will be covered in the next chapter).

If all expenses are treated as variable the VPLR and PLR are equal. If ULAE is treated as an underwriting expense
rather than included with loss, the VPLR and PLR will include ULAE in V (as in Appendix C).
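
A minimal Python sketch (with assumed expense and profit provisions, not values from the text) computing both permissible loss ratios:

```python
# Assumed provisions for illustration
V   = 0.12   # variable expense provision (% of premium)
F   = 0.08   # fixed expense ratio (% of premium)
Q_T = 0.06   # target underwriting profit provision

VPLR = 1.0 - V - Q_T        # pays for loss, LAE, and fixed expenses
PLR  = 1.0 - F - V - Q_T    # pays for loss and LAE only

print(f"VPLR = {VPLR:.2%}")   # 82.00%
print(f"PLR  = {PLR:.2%}")    # 74.00%
```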


Basic Ratemaking

Chapter 8 – Overall Indication


Introduction

As discussed extensively thus far, the goal of ratemaking is to balance the fundamental insurance equation in
the prospective period that the new rates will be in effect.

Premium = Losses + LAE + UW Expenses + UW Profit

The prior chapters have discussed the adjustments needed to project each piece of the fundamental insurance
equation to the levels of the projected period (the one exception being premium which is at current rate levels
but otherwise adjusted to the prospective period). This chapter covers how to calculate the rates needed to
balance the fundamental insurance equation (pure premium method) or the rate change needed to be in
balance (loss ratio method).

It is important to note that we are setting the overall average rate level which will be balanced in aggregate. In
Chapters 9-11 we will discuss how to balance individual classifications, while Chapter 14 covers how to
calculate final rates given the overall rate indication and individual classification adjustments.

Pure premium method

Good news! We have already calculated the pure premium method rate indication in Chapter 7, so this section
will be mostly review. The pure premium method is a bit simpler than the loss ratio method, and it provides an indicated average rate, not an indicated rate change like the loss ratio method.

Using the notation defined previously, and values adjusted to the prospective period, we can solve for the
average premium needed to balance the fundamental insurance equation during the period the new rates will
be in effect:

P_I = L + E_L + E_U + Q_T × P_I

P_I = L + E_L + E_F + V × P_I + Q_T × P_I

P_I − (V × P_I + Q_T × P_I) = L + E_L + E_F

P_I × (1 − V − Q_T) = L + E_L + E_F

P_I = (L + E_L + E_F) / (1 − V − Q_T)

P_I / X = [(L + E_L + E_F) / X] / (1 − V − Q_T)

P̄_I = (L̄ + Ē_L + Ē_F) / (1 − V − Q_T)

In words:

Indicated Average Rate = [Pure Premium (including LAE) + Fixed UW Expense $ per Exposure] / [1.0 − Variable Expense % − Target UW Profit %]

The above assumes there are both fixed dollar underwriting expenses (E_F) and variable underwriting expenses (V); if all expenses are assumed to be variable, just set E_F = 0.


Assuming we are told the average expected loss and LAE per exposure ((L + E_L)/X) is $325, the fixed expense per exposure (E_F/X) is $30, the variable expense provision (V) is 12%, and the profit provision (Q_T) is 6%, we can calculate the indicated average premium in the prospective period as:

Indicated Average Rate = ($325 + $30) / (1 − 0.12 − 0.06) = $355 / 0.82 = $432.93
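
The arithmetic can be checked with a quick Python sketch (values as given in the example above):

```python
# Reproduces the worked example above
pure_premium_incl_lae = 325.0    # (L + E_L) / X
fixed_expense_per_exp = 30.0     # E_F / X
V, Q_T = 0.12, 0.06

indicated_avg_rate = (pure_premium_incl_lae + fixed_expense_per_exp) / (1.0 - V - Q_T)
print(round(indicated_avg_rate, 2))   # 432.93
```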

The pure premium method is particularly useful for a new company (or new line of business) where there is
no historical data or existing rates to change since the actuary can still estimate expected pure premium,
expenses, and target profit provision using external data and actuarial judgement.

Loss ratio method

The loss ratio method is more commonly used than the pure premium method. The output of the loss ratio
method is an indicated rate change factor, not a rate itself like the pure premium method. The derivation of the loss ratio method formula is only slightly more difficult than the pure premium derivation, and shouldn't be much harder
to comprehend once you understand the pure premium method.

The loss ratio method derivation in the text is a bit odd and arbitrary in my opinion. I will present it the way I
find most intuitive, but I do recommend you review the text’s calculation as well (it should be a little easier to
follow after seeing my explanation). The key difference is we want an indicated rate change, not rate. We can
calculate this indicated rate change by first recognizing that the indicated premium level P_I is equal to the current premium level P_C times the rate adjustment factor (1 + Δ); that is, P_I = P_C × (1 + Δ). We will start the derivation exactly the same as the pure premium method. We could substitute P_C × (1 + Δ) for P_I initially, but let's do some algebraic rearrangement first to keep things tidy.

P_I = L + E_L + E_U + Q_T × P_I

P_I = L + E_L + E_F + V × P_I + Q_T × P_I

P_I − (V × P_I + Q_T × P_I) = L + E_L + E_F

P_I × (1 − V − Q_T) = L + E_L + E_F

P_I = (L + E_L + E_F) / (1 − V − Q_T)

To this point the calculation is exactly the same as the pure premium method calculation. Now that we have isolated P_I we will substitute P_I = P_C × (1 + Δ):

P_C × (1 + Δ) = (L + E_L + E_F) / (1 − V − Q_T)

Dividing by P_C on both sides and remembering E_F / P_C = F, we find the final loss ratio method indication formula:

(1 + Δ) = [(L + E_L + E_F) / P_C] / (1 − V − Q_T) = [(L + E_L)/P_C + F] / (1 − V − Q_T)

In words:

Indicated Rate Change (factor) = [Loss & LAE Ratio + Fixed UW Expense Ratio] / [1.0 − Variable Expense % − Target UW Profit %]

where both ratios in the numerator are at current rate levels.

The output is in factor form. The indicated rate change is the factor minus 1.0, or Δ (be sure to give your answer in the form asked for in the question on the exam).


Assuming we are told the expected loss and LAE ratio ((L + E_L)/P_C) is 72.0%, the expected fixed expense ratio (F) is 8.0%, the variable expense provision (V) is 10%, and the profit provision (Q_T) is 5%, we can calculate the indicated rate change for the prospective period as:

Indicated Rate Change = [(L + E_L)/P_C + F] / (1 − V − Q_T) − 1.0 = (0.72 + 0.08) / (1 − 0.10 − 0.05) − 1.0 = −5.88%
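
Again, a quick Python sketch (values as given above) reproduces the result:

```python
# Reproduces the loss ratio method example above
loss_lae_ratio      = 0.72   # (L + E_L) / P_C at current rate level
fixed_expense_ratio = 0.08   # F, at current rate level
V, Q_T = 0.10, 0.05

indicated_change_factor = (loss_lae_ratio + fixed_expense_ratio) / (1.0 - V - Q_T)
print(f"{indicated_change_factor - 1.0:+.2%}")   # -5.88%
```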

Since the loss ratio method requires current rates it is inappropriate for new companies or lines of business.

Loss ratio versus pure premium method

Comparison of approaches

The underlying loss measure used (dollars or loss ratio), and the formula output (rate v. rate change) are the
two major differences between the methods.

The loss ratio method measures losses and expenses relative to premium at current rate levels while the pure
premium method measures loss, LAE, and fixed expenses to exposures. This means the loss ratio approach
requires on-level premium while the pure premium method requires clearly defined exposures. If premiums
are unavailable, or significant changes make calculating on-level premium impractical or inaccurate, we will
want to use the pure premium method. Otherwise the loss ratio approach is the default method used by most
companies. Additionally, if total exposure data is hard to accurately measure (e.g., commercial general liability
uses differing exposure bases for each sub coverage) than using the loss ratio method is the clear choice.

Equivalency of methods

As I have derived the equations it should be obvious they are mathematically equivalent (this isn't as obvious the way the text derives them). As such, the text shows they are equivalent by starting with the loss ratio method and deriving the pure premium method. I will show this calculation to enhance your understanding. Starting
with the loss ratio method formula and working backwards:

(1 + Δ) = [(L + E_L)/P_C + F] / (1 − V − Q_T) = [(L + E_L + E_F) / P_C] / (1 − V − Q_T)

Remembering P_C × (1 + Δ) = P_I:

P_I = (L + E_L + E_F) / (1 − V − Q_T)

Multiplying 1/X through each side:

P_I / X = [(L + E_L + E_F) / X] / (1 − V − Q_T)

P̄_I = (L̄ + Ē_L + Ē_F) / (1 − V − Q_T)

which is the pure premium rate indication formula (refer to the notation if that last step confuses you, we are
just changing notation for those divided by X). Therefore, the two are equivalent.
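
The equivalence can also be verified numerically. Below is a minimal Python sketch with assumed (made-up) aggregate figures showing that both methods produce the same indicated average rate when fed consistent inputs:

```python
# Assumed figures for illustration only
X   = 1_000          # exposures
P_C = 400_000.0      # premium at current rate level
L_E = 325_000.0      # projected loss and LAE
E_F = 30_000.0       # projected fixed underwriting expenses
V, Q_T = 0.12, 0.06

# Pure premium method: indicated average rate
pp_rate = (L_E / X + E_F / X) / (1.0 - V - Q_T)

# Loss ratio method: indicated change factor applied to the current average premium
lr_rate = ((L_E / P_C + E_F / P_C) / (1.0 - V - Q_T)) * (P_C / X)

print(round(pp_rate, 2), round(lr_rate, 2))   # 432.93 432.93
```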

Appendices A-D show overall rate indications calculated in various fashions. Reviewing these in detail at this
time would be a good idea. Each of the previous chapters discussed adjusting each piece of the fundamental
insurance equation to the projected period—selecting the expected amount of each piece can vary by company
or situation. Sometimes we will want to select an average of the years adjusted to the prospective period, other
times we may want to take a weighted average, possibly giving more weight to recent years. The appendices
cover various examples of this.


Basic Ratemaking

Chapter 9 – Traditional Risk Classification


Introduction

Until this point we have been focused on balancing the fundamental insurance equation in aggregate. Now we
will focus on balancing individual risks or segments (classes).

Some very large commercial risks have enough individual experience that we can use their own data in
estimating premium, using the techniques we will see in Chapter 15. For most risks, especially in personal lines,
we cannot set rates using solely the individual risk's historical experience. In these situations we want to group individual risks with similar loss potential and set rates for each group. This process is called classification ratemaking (i.e., setting rates based on each classification's expected losses rather than for each individual risk).

To determine the appropriate groupings, we must determine what factors (rating variables) cause expected
losses to vary, determine the appropriate levels/splits within these rating variables, and then determine the
amount by which expected losses vary by each level (relativities). For example, we may determine that location
(territory) causes losses to vary, and we determine 5 territories optimize our rating structure with the
following variation in expected loss (relative to the base territory 2):

Loss
Territory Relativity
1 0.800
2 1.000
3 1.100
4 1.050
5 0.900

Each step of this process will be covered in more depth later in this chapter.

The first portion of this chapter covers characteristics of classification ratemaking in general, while the end
covers univariate classification techniques (Chapter 10 covers multivariate techniques).

Importance of equitable rates

Balancing the fundamental insurance equation is necessary, but not sufficient to ensure fairness and financial
soundness. If a company charges each risk the same overall average rate, then lower cost risks will be
subsidizing higher cost risks, and competitors could reflect differing risk levels appropriately leaving the
company subject to adverse selection.

Adverse selection

If a company does not properly reflect cost differential between classes, and competitors do, they may be
subject to adverse selection— losing favorable risks while attracting unfavorable risks. I’ll illustrate this by
looking at an extreme example where there are 5 classes (A-E) that have varying expected losses. In this
example the “simple” company does not reflect these differences and charges each class the overall average
rate, while the “refined” company is able to perfectly differentiate between the costs of each class (for this
example we will assume we know the true costs of each class). Assuming each company starts with 1,000
policies from each class, the initial distribution, rates, and excess/shortfall profit is as follows:


Beginning Distribution
Refined Company Simple Company
True Excess/ Excess/
Class Rate Policies Rate (Shortfall) Policies Rate (Shortfall)
A $150 1,000 $150 $0 1,000 $250 $100,000
B $200 1,000 $200 $0 1,000 $250 $50,000
C $250 1,000 $250 $0 1,000 $250 $0
D $300 1,000 $300 $0 1,000 $250 ($50,000)
E $350 1,000 $350 $0 1,000 $250 ($100,000)
Total/Average 5,000 $250 $0 5,000 $250 $0

The excess/shortfall profit is the number of policies by class multiplied by the difference between the charged
rate and true rate (this is the amount excess or short of target profit, not absolute profit). For example, the
Simple Company excess income from Class A is 1,000(250-150) = $100,000. Each company is in balance overall
at the beginning of the period, but this is not stable because insureds will switch companies over time. Assuming
20% of policyholders shop around after each period, and they select their insurer based on cost, the
distribution after the first year will look like the following:

After First Year Distribution


Refined Company Simple Company
True Excess/ Excess/
Class Rate Policies Rate (Shortfall) Policies Rate (Shortfall)
A $150 1,200 $150 $0 800 $250 $80,000
B $200 1,200 $200 $0 800 $250 $40,000
C $250 1,000 $250 $0 1,000 $250 $0
D $300 800 $300 $0 1,200 $250 ($60,000)
E $350 800 $350 $0 1,200 $250 ($120,000)
Total/Average 5,000 $238 $0 5,000 $250 ($60,000)

New Rate $262

Since the simple company overpriced classes A-B and underpriced D-E to begin with, they will lose profitable policies from classes A-B but attract unprofitable policies from classes D-E. Due to being adversely selected against, the simple company will have an income shortfall. Further, their current average rate will now be
inadequate due to the change in distribution towards higher risk insureds. They will need to increase their rate
going into the next period (to the weighted average true cost, using the new policy distribution).
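
The rebalanced rate can be reproduced with a short Python sketch using the after-first-year distribution shown in the table above:

```python
# Simple company's rebalanced rate after the first year's distribution shift
true_rate = {"A": 150, "B": 200, "C": 250, "D": 300, "E": 350}
policies  = {"A": 800, "B": 800, "C": 1_000, "D": 1_200, "E": 1_200}

total_cost = sum(true_rate[c] * policies[c] for c in true_rate)
new_rate = total_cost / sum(policies.values())
print(new_rate)   # 262.0, the "New Rate" shown above
```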

When the simple company sets their rates to $262 they will now lose insureds from classes A-C, while attracting
insureds from classes D-E, again resulting in an income shortfall, and inadequate overall average rate due to
further distribution shifts.

After Second Year Distribution


Refined Company Simple Company
True Excess/ Excess/
Class Rate Policies Rate (Shortfall) Policies Rate (Shortfall)
A $150 1,360 $150 $0 640 $262 $71,680
B $200 1,360 $200 $0 640 $262 $39,680
C $250 1,200 $250 $0 800 $262 $9,600
D $300 640 $300 $0 1,360 $262 ($51,680)
E $350 640 $350 $0 1,360 $262 ($119,680)
Total/Average 5,200 $229 $0 4,800 $262 ($50,400)

New Rate $273


This process will continue until the simple company refines their rating segmentation and improves pricing
accuracy, they become insolvent, or choose to write only high-risk insureds while charging an appropriate rate.
The speed with which this process will occur depends on how much price influences insureds' purchasing decisions, and on how complete and accurate customers' knowledge of competitors' rates is.

Favorable selection

In the prior example, the refined company is experiencing favorable selection since they were able to more
accurately price their policies than their competition. Since they had an appropriate rate for each class they
were able to write each profitably (remember, we showed the excess profit, not the profit priced into the rates).
In practice this can occur if we identify a predictive rating variable that others are not using, or are able to more
accurately price each class (i.e., more accurately estimate each relativity).

Alternatively, if we cannot (or choose not to) implement the new or refined rating variable, we can use it to
select risks to underwrite from the existing population of risks (i.e., write better risks, reject worse risks). This
is called “skimming the cream” and will also increase profitability.

Criteria for evaluating rating variables

Prior to determining rate differentials, we must first decide what rating variables we will use to segment
insureds into classes with similar expected losses. To do this we will examine statistical, operational, social,
and legal criteria.

Statistical criteria

Rating variables should capture differences in expected losses between groups of insureds. We would like our
rating variables to be:

1) Statistically significant
2) Homogenous
3) Credible

Statistically significant rating variables should identify differences in expected losses between groups of risks
while exceeding an acceptable level of statistical confidence, and being stable year over year. If groups within
a rating variable have no variation in expected losses, or the results are very volatile over time, the rating
variable likely is not useful.

Our rating variables should separate risks into homogeneous (similar) groups with similar expected losses. We want to create groupings that are homogeneous within groups, but heterogeneous (different) between groups with
respect to expected losses. Remember, we are targeting expected losses— individual actual losses can be
subject to a large degree of randomness/noise in the short run. If groups contain significantly different
expected losses we should further subdivide groups to create more homogeneity within groups.

The number of risks within each group should be large enough to credibly and accurately estimate expected losses.
There is a balancing act between homogeneity and credibility of groups—creating larger groups will increase
credibility while reducing homogeneity. If groups lack the credibility to accurately estimate expected loss we
should combine groups (while balancing homogeneity concerns).

Operational criteria

In addition to being statistically significant, we also want our variables to be practical, so we will examine the
following operational criteria:

1) Objective
2) Inexpensive
3) Verifiable


Our rating variables should have objective definitions. For example, for personal auto, driver skill would be a
nice variable to have, but it is very subjective (who decides skill level, and using what criteria?). Instead we use
proxies for driver skill (e.g., age, accident history, etc.).

We also need the cost of obtaining information on our rating variables to be low—at least relative to the benefit. For example, if inspecting insured items for hard-to-verify characteristics is very costly, it doesn't make sense to include those characteristics as rating variables (unless the benefit of doing so is greater than the cost).

Rating variables should be verifiable (i.e., not subject to untruthful insureds or distribution channels). For
example, miles driven may be very predictive in auto insurance, but is not easily verified (historically; this is
obviously changing with technological advances).

Social criteria

Selection of rating variables should take into account public perception (i.e., these are desirable but not
necessarily required characteristics). The social criteria we should consider include:

1) Affordability
2) Causality
3) Controllability
4) Privacy

It is desirable that insurance is affordable for all risks—especially if the insurance is required by law (e.g.,
workers compensation in the US), required by a third party (e.g., lenders requiring homeowners insurance), or
very desirable (e.g., commercial general liability for a small business owner). In situations where insurance is unaffordable for a certain group (e.g., coastal property, or 16-year-old male drivers) the insurance company may
decide to combine groups (resulting in the lower cost group subsidizing the higher cost group), or offer less
coverage to lower rates.

If there is a causal nature between the rating variable and expected loss the rating variable will be more socially
acceptable. There has been consumer backlash against using credit data to rate insurance policies because
despite the well documented predictive power, in the minds of many people there is not an obvious causal
connection between credit data and expected insurance losses.

Ideally, rating variables should be controllable by the insured (this does not mean manipulatable, or able to be
lied about). For example, adding a security system and fire alarm is controllable and has an effect on property
expected losses and rates. Some very commonly used rating variables are not controllable though, most notably
age and gender.

Rating variables should not invade customers' privacy. For example, when technology is standard on all vehicles that could give the insurance company information on driving patterns and location history, the insurance company should only use it with the customer's consent.

Legal criteria

Above any actuarial consideration, our rating structure must first and foremost follow any legal requirements.
Easy to follow and common requirements are that rates are “not excessive, not inadequate, nor unfairly
discriminatory” and are “actuarially sound”. In other cases, laws may dictate what can be used as a rating variable
(e.g., not allowing the use of gender), or place other restrictions on the use of certain items (e.g., some
jurisdictions don’t allow credit data for rating purposes, but do for underwriting/risk selection/placement
decisions). The actuary must be familiar with the laws and regulations of the jurisdictions he/she is working
(generally through working with lawyers or compliance experts).


ASOP 12: Risk Classifications

This actuarial standard of practice (ASOP) provides guidance to actuaries when performing professional
services with respect to designing, reviewing, or changing risk classification systems. Approaches to risk
classification can vary significantly and it is appropriate for the actuary to exercise considerable professional
judgment when providing such services, including making appropriate use of statistical tools. This ASOP is
intended to provide guidance to assist the actuary in exercising professional judgment when applying various
acceptable approaches.

In addition to the topics covered above, this ASOP adds, when selecting which risk characteristics to use in a
risk classification system, the actuary should also consider:

1) Industry Practices— the actuary should consider usual and customary risk classification practices for
the type of financial or personal security system under consideration.
2) Business Practices— the actuary should consider limitations created by business practices related to
the financial or personal security system.
3) Intended Use— the actuary should select a risk classification system that is appropriate for the
intended use. Different sets of risk classes may be appropriate for different purposes. For example,
when setting reserves for an insurance coverage, the actuary may choose to subdivide or combine
some of the risk classes used as a basis for rates.
4) Reasonableness of Results— when establishing risk classes, the actuary should consider the
reasonableness of the results that proceed from the intended use of the risk classes (for example, the
consistency of the patterns of rates, values, or factors among risk classes).

Testing the Risk Classification System⎯ Upon the establishment of the risk classification system the actuary
should, if appropriate, test the long-term viability of the financial or personal security system:

1) Effect of Adverse Selection— the actuary should assess the potential effects of adverse selection that
may result or have resulted from the design or implementation of the risk classification system.
2) Risk Classes Used for Testing—the actuary should consider using a different set of risk classes for
testing long-term viability than was used as the basis for determining the assigned values if this is
likely to improve the meaningfulness of the tests.
3) Effect of Changes⎯ if the risk classification system has changed, or if business or industry practices
have changed, the actuary should consider testing the effects of such changes.

The actuary should consider performing quantitative analyses of the impact of the following, to the extent
they are generally known and reasonably available to the actuary, as well as disclose them in the actuarial
communications:

1) significant limitations due to compliance with applicable law;


2) significant departures from industry practices;
3) significant limitations created by business practices;
4) a determination by the actuary that experience indicates a significant need for change; and
5) expected material effects of adverse selection;


Determination of indicated rate differentials (univariate classification)

Now that we have decided on which rating variables to use we must estimate the variation of expected loss
between each level within the rating variables. This chapter will cover three univariate rating methods (i.e.,
examining each variable in isolation). As we will see, univariate methods suffer from some large biases—
multivariate rating techniques (covered in Chapter 10) are now the industry standard.

To illustrate these methods, we will examine an example using the data below. In our simplified example there
are only two rating variables (marital status and prior accidents) each with three levels.

The exposure distribution by rating variable is given as follows:

Marital Prior Accidents Marital Prior Accidents


Status 0 1 2+ Total Status 0 1 2+ Total
Single 500 175 120 795 Single 20% 7% 5% 31%
Married 950 150 90 1,190 Married 37% 6% 4% 47%
Divorced 375 125 50 550 Divorced 15% 5% 2% 22%
Total 1,825 450 260 2,535 Total 72% 18% 10% 100%

The (unknown) true underlying relativities (what we are trying to estimate) and current relativities are as
follow:

True Currently True Currently


Prior (Unknown) Charged Marital (Unknown) Charged
Accidents Relativity Relativity Status Relativity Relativity
0 0.7500 0.8250 Single 1.1500 1.0950
1 1.0000 1.0000 Married 1.0000 1.0000
2+ 1.2500 1.1900 Divorced 1.0500 1.1050

The exposure, premium, and loss data by segment is summarized below:

Premium @
Marital Prior Loss & Current Rate
Status Accidents Exposure ALAE Level
Single 0 500 $94,897.54 $146,697.24
Married 0 950 $156,787.24 $254,543.15
Divorced 0 375 $64,984.18 $111,027.71
Single 1 175 $44,285.52 $62,235.19
Married 1 150 $33,007.83 $48,716.39
Divorced 1 125 $28,881.86 $44,859.68
Single 2+ 120 $37,959.02 $50,783.92
Married 2+ 90 $24,755.88 $34,783.51
Divorced 2+ 50 $14,440.93 $21,353.21
Total 2,535 $500,000.00 $775,000.00


Pure premium approach

The pure premium approach for classification ratemaking is nearly identical to the pure premium method for
overall rate indication. The difference being, for classification ratemaking we apply the pure premium approach
separately by rating variable level.

First some notation; let R1_i represent the relativity for the ith level of rating variable R1 (the 1 is to differentiate between different rating variables, but we are only going to examine one at a time here). The rate for each level i is then:

Rate_i = R1_i × Base Rate

Adding the subscript I to represent the indicated rates, the indicated rate differentials can be calculated as:

R1_{I,i} = Rate_{I,i} / Rate_{I,B}

Now we will simply use the pure premium rate indication formula separately, for each level (i) of the rating
variable being examined, and the base level. The following formula may look a bit intimidating, but it is just the
pure premium rate formula for level i, divided by the same for the base level.

R1_{I,i} = P̄_{I,i} / P̄_{I,B} = [ (L̄ + Ē_L + Ē_F)_i / (1 − V_i − Q_{T,i}) ] / [ (L̄ + Ē_L + Ē_F)_B / (1 − V_B − Q_{T,B}) ]

This method commonly assumes all underwriting expenses are variable (i.e., Ē_F = 0), and that variable underwriting expenses (V) and target profit (Q_T) do not vary by level (thus the denominators cancel each other), which simplifies the above equation to:

R1_{I,i} = P̄_{I,i} / P̄_{I,B} = (L̄ + Ē_L)_i / (L̄ + Ē_L)_B

It is worth learning and remembering each (the first is easy once you know the pure premium formula). The
default will be to use the second equation, unless you are told that there are fixed expenses (Ē_F > 0) or that variable underwriting expenses (V) and/or target profit (Q_T) differ by level (in fact you should hope they do ask this since you will get it right while many others wouldn't!).

It is important to note that you can calculate the relativities relative to the base class (as the formulas do above,
resulting in a base relativity of 1.0) or to the overall rate level (resulting in an exposure weighted total relativity
of 1.0—called “normalized” relativities as they are defined on a consistent basis rather than to the base level).
Normalized relativities are useful for making comparisons between current, indicated, and/or competitors
relativities as they will be on the same basis. Additionally, if we are to credibility weight indicated relativities
with some complement of credibility we must use normalized relativities. We will calculate them both ways in
the following example.

In practice it can be difficult to allocate ULAE to class, so we often use loss & ALAE for classification analysis
(rather than loss & LAE as the formulas above call for). For stable short tailed lines of business it may not be
necessary to trend and develop loss and ALAE, but for long tailed lines of business or when the portfolio is
growing or shrinking we will want to apply the trend and development factors from the aggregate analysis (not
class specific, unless there is reason to believe each does vary by class, and the benefit of such refinement is
worth the time and expense of doing so). We will also want to adjust our loss and ALAE for individual shock
and CAT losses as we do in the overall rate indication.

Using the data given previously, we can apply the pure premium method to the (number of) Prior Accidents
rating variable as follows:


Indicated Indicated Indicated True


Prior Loss & Pure Relativity Relativity to (Unknown)
Accidents Exposures ALAE Premium (to overall) Base Relativity
(i ) (1) (2) (3) (4) (5)
0 1,825 $316,668.96 $173.52 0.8797 0.7354 0.7500
1 450 $106,175.21 $235.94 1.1962 1.0000 1.0000
2+ 260 $77,155.83 $296.75 1.5045 1.2577 1.2500
Total 2,535 $500,000.00 $197.24 1.0000 0.8360

(1) & (2) are given


(3) = (2) / (1)
(4) = (3) / (3) Total
(5) = (3) / (3) Base or (4) / (4) Base
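
The calculation in this table is easy to script. The following Python sketch (not from the text) reproduces the indicated relativities for the Prior Accidents variable from the given exposures and loss & ALAE:

```python
# Pure premium classification relativities for Prior Accidents
data = {            # level: (exposures, loss & ALAE)
    "0":  (1_825, 316_668.96),
    "1":  (  450, 106_175.21),
    "2+": (  260,  77_155.83),
}
base = "1"

pure_prem = {lvl: loss / exp for lvl, (exp, loss) in data.items()}
overall = sum(loss for _, loss in data.values()) / sum(exp for exp, _ in data.values())

for lvl, pp in pure_prem.items():
    rel_overall = pp / overall          # normalized (to overall) relativity
    rel_base = pp / pure_prem[base]     # relativity to the base level
    print(lvl, round(rel_overall, 4), round(rel_base, 4))
# 0: 0.8797 0.7354 | 1: 1.1962 1.0 | 2+: 1.5045 1.2577
```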

Comparing our indicated relativities to the true (unknown) relativities we can see level 0 is too low while level
2+ is too high. Examining the exposure distribution, we can see this is because level 0 exposure is skewed
towards lower cost married drivers, while level 2+ is skewed towards higher cost single drivers and away from
low cost divorced drivers. The following table is the distribution across marital status by prior accident,
calculated from the given exposure data, showing these characteristics (you wouldn’t need to show this on the
exam).

Marital Prior Accidents


Status 0 1 2+
Single 27% 39% 46%
Married 52% 33% 35%
Divorced 21% 28% 19%
Sum 100% 100% 100%

Applying the same pure premium methodology to the Marital Status rating variable, we calculate the following
rate level relativities:

Indicated Indicated Indicated True


Marital Loss & Pure Relativity Relativity to (Unknown)
Status Exposures ALAE Premium (to overall) Base Relativity
(i ) (1) (2) (3) (4) (5)
Single 795 $177,142.08 $222.82 1.1297 1.2359 1.1500
Married 1,190 $214,550.95 $180.29 0.9141 1.0000 1.0000
Divorced 550 $108,306.97 $196.92 0.9984 1.0922 1.0500
Total 2,535 $500,000.00 $197.24 1.0000 1.0940

(1) & (2) are given


(3) = (2) / (1)
(4) = (3) / (3) Total
(5) = (3) / (3) Base or (4) / (4) Base

Both the Single and Divorced relativities are overstated compared to their true values. Looking at the exposure
distribution we can see that the base level, Married drivers, skews towards lower cost 0 Prior Accidents, while
both Single and Divorces skew more towards higher cost 1 and 2+ Prior Accidents. The distribution across
Prior Accident levels by Marital Status is calculated as follows (again, for illustrative purposes only):


Marital Prior Accidents


Status 0 1 2+ Sum
Single 62.9% 22.0% 15.1% 100%
Married 79.8% 12.6% 7.6% 100%
Divorced 68.2% 22.7% 9.1% 100%

The pure premium method assumes the distribution of exposures across all other rating variables is equal. To
the extent exposures are correlated, the method will be distorted and will double count the effects of higher or
lower losses as the effects will be picked up by both variables. Note, the number of exposures does not need to
be equal, just their distributions (e.g., 100, 50, and 10 exposures across a given level has the same distribution
as 10, 5, and 1 and wouldn’t be correlated).

Loss ratio approach

The pure premium classification approach results in a rate relativity (similarly to how the pure premium
approach results in a rate at the aggregate level). The loss ratio classification approach will result in an
indicated change to the current relativity (similarly to how the loss ratio approach results in a change factor at
the aggregate level). The loss ratio approach also incorporates premium data that has been brought on-level
for each class—ideally via extension of exposures so as to capture the different rate changes by class rather than
an aggregate adjustment like the parallelogram method (though you could use the parallelogram method by
class, which could be tedious). On the exam this on-level premium by class would be given (or the calculation
would be simple).

Like the pure premium method, the loss ratio method is simply the loss ratio rate indication formula applied to each level (i) divided by the same applied to the base level.

Indicated Change_i = (1 + Δ_i) / (1 + Δ_B) = [ ((L + E_L)/P_C + F)_i / (1 − V_i − Q_{T,i}) ] / [ ((L + E_L)/P_C + F)_B / (1 − V_B − Q_{T,B}) ]

Again, assuming all underwriting expenses are variable (i.e., F=0) and do not vary by class (Vi = VB), and target
profit margin does not vary by class, the indicated change simplifies to the ratio of the loss ratios at level i and
base level—this will be the default assumption on the exam, unless told otherwise.

Indicated Change_i = [(L + E_L)/P_C]_i / [(L + E_L)/P_C]_B = Loss & ALAE Ratio_i / Loss & ALAE Ratio_B

If there are fixed expenses, or variable expenses and/or target profit margin vary by class, use the longer formula above (if there are fixed expenses but V and Q_T don't vary by class, the two denominators still cancel but F is left in both numerators, even if it is the same).

Remembering the result above is an indicated change, we can find the indicated relativity as the product of the current relativity (R1_{C,i}) and the indicated change:

R1_{I,i} = Indicated Change_i × R1_{C,i}


Continuing our example, using the loss ratio approach for prior accidents we find the following indicated rate
relativities:

Indicated
Loss & Relativity Indicated True
Prior Loss & ALAE Change Current Relativity to (Unknown)
Accidents OLEP ALAE Ratio (to base) Relativity Base Relativity
(i ) (1) (2) (3) (4) (5) (6)
0 $512,268.10 $316,668.96 61.8% 0.9072 0.8250 0.7484 0.7500
1 $155,811.26 $106,175.21 68.1% 1.0000 1.0000 1.0000 1.0000
2+ $106,920.64 $77,155.83 72.2% 1.0590 1.1900 1.2602 1.2500
Total $775,000.00 $500,000.00 64.5%

(1), (2), & (5) are given


(3) = (2) / (1)
(4) = (3) / (3) Base
(6) = (4) * (5)
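
As with the pure premium approach, a short Python sketch (not from the text) reproduces these loss ratio method relativities from the given on-level premium, loss & ALAE, and current relativities:

```python
# Loss ratio classification relativities for Prior Accidents
data = {            # level: (on-level earned premium, loss & ALAE, current relativity)
    "0":  (512_268.10, 316_668.96, 0.8250),
    "1":  (155_811.26, 106_175.21, 1.0000),
    "2+": (106_920.64,  77_155.83, 1.1900),
}
base = "1"

loss_ratio = {lvl: loss / olep for lvl, (olep, loss, _) in data.items()}

for lvl in data:
    change = loss_ratio[lvl] / loss_ratio[base]   # indicated change to the relativity
    indicated = change * data[lvl][2]             # indicated relativity to base
    print(lvl, round(change, 4), round(indicated, 4))
# 0: 0.9072 0.7484 | 1: 1.0 1.0 | 2+: 1.059 1.2602
```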

The text example includes an additional step in the calculation that is only needed if you are going to be
credibility weighting the relativities—they calculate the indicated change relative to the overall Loss & ALAE
ratio. This results in indicated relativities that need to be re-based. This additional step is not necessary if we
are not credibility weighting the relativities as applying the indicated change relative to the base level is
mathematically equivalent. An example of credibility weighting the relativities is presented in Appendix E.

As compared to the true relativities, level 0 is too low, and 2+ is too high.

Examining Marital Status, we can see that both the Single and Divorced relativities are closer to the true
relativities than the pure premium method but are both still too high.

Indicated
Loss & Relativity Indicated True
Marital Loss & ALAE Change Current Relativity to (Unknown)
Status OLEP ALAE Ratio (to base) Relativity Base Relativity
(i ) (1) (2) (3) (4) (5) (6)
Single $259,716.35 $177,142.08 68.2% 1.0746 1.0950 1.1767 1.1500
Married $338,043.05 $214,550.95 63.5% 1.0000 1.0000 1.0000 1.0000
Divorced $177,240.60 $108,306.97 61.1% 0.9628 1.1050 1.0639 1.0500
Total $775,000.00 $500,000.00 64.5%

(1), (2), & (5) are given


(3) = (2) / (1)
(4) = (3) / (3) Base
(6) = (4) * (5)

The loss ratio approach measures loss relative to premium, rather than simply exposure as the pure premium
method does. To the extent premium captures differences in expected losses, our loss ratio method estimates
will be accurate. Unfortunately, our premiums are not going to capture all of the differences in expected losses
between classes. For example, our Single insureds are underpriced and also correlated with 2+ accidents. As a
result, the indicated 2+ relativity is overstated because it is compensating for the underpriced variable it is
correlated with.


Adjusted pure premium approach

The adjusted pure premium approach is the same as the pure premium approach except we use exposures that
have been adjusted by the exposure weighted average relativity across all the other variables. This method uses
the current relativities to adjust the exposure base to reflect the differences in expected losses, to the extent
the current relativities capture the true relativities (similar to the loss ratio method).

We can calculate the weighted average relativity across each variable as done in the following table. For
example, the weighted average current relativity across 0 Prior Accidents equals (500*1.0950 + 950*1.0000 +
375*1.1050)/(500+950+375) = 1.0476.

Prior Accidents
Marital 0 1 2+ Wght
Status Curr. Rel. 0.8250 1.0000 1.1900 Avg Rel.
Single 1.0950 500 175 120 0.9186
Married 1.0000 950 150 90 0.8747
Divorced 1.1050 375 125 50 0.8980
Wght Avg Rel. 1.0476 1.0661 1.0640
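
These weighted average relativities follow directly from the exposure grid and the current marital status relativities, as in the following Python sketch (not from the text):

```python
# Exposure-weighted average marital status relativity within each Prior Accidents level
marital_rel = {"Single": 1.0950, "Married": 1.0000, "Divorced": 1.1050}
exposures = {       # marital status -> exposures by prior accidents level
    "Single":   {"0": 500, "1": 175, "2+": 120},
    "Married":  {"0": 950, "1": 150, "2+":  90},
    "Divorced": {"0": 375, "1": 125, "2+":  50},
}

for acc in ["0", "1", "2+"]:
    wtd = sum(marital_rel[m] * exposures[m][acc] for m in exposures)
    tot = sum(exposures[m][acc] for m in exposures)
    print(acc, round(wtd / tot, 4))
# 0: 1.0476, 1: 1.0661, 2+: 1.0640
```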

We can then use these weighted average relativities to adjust our exposures and proceed with the pure
premium approach as normal. Looking at both variables separately we can see this method results in the same
relativities as the loss ratio method (since we used the same relativities to adjust both premium and exposure).
Each of the two methods adjusts for distributional bias/correlations but only to the extent these
biases/correlations are properly reflected in current relativities.

Wtd Avg Indicated Indicated Indicated


Prior Marital Stat. Adjusted Loss & Pure Relativity Relativity to
Accidents Exposures Relativity Exposures ALAE Premium (to overall) Base
(i ) (1) (2) (3) (4) (5) (6) (7)
0 1825 1.0476 1911.88 $316,668.96 $165.63 0.8839 0.7484
1 450 1.0661 479.75 $106,175.21 $221.31 1.1811 1.0000
2+ 260 1.0640 276.65 $77,155.83 $278.89 1.4883 1.2602
Total 2535 2668.28 $500,000.00 $187.39 1.0000 0.8467

Wtd Avg Indicated Indicated Indicated


Marital Prior Acc. Adjusted Loss & Pure Relativity Relativity to
Status Exposures Relativity Exposures ALAE Premium (to overall) Base
(i ) (1) (2) (3) (4) (5) (6) (7)
Single 795 0.9186 730.30 $177,142.08 $242.56 1.0988 1.1767
Married 1190 0.8747 1040.85 $214,550.95 $206.13 0.9338 1.0000
Divorced 550 0.8980 493.88 $108,306.97 $219.30 0.9934 1.0639
Total 2535 2265.03 $500,000.00 $220.75 1.0000 1.0709

(1) & (4)= Given


(2) = Calculated Separately
(3) = (1) * (2)
(5) = (4) / (3)
(6) = (5) / (5) Total
(7) = (6) / (6) Base


If there are many rating variables, calculating a weighted average relativity may not be practical. In such cases
we may want to focus only on the rating variables we believe have a distributional bias across the levels of the
rating variable being analyzed.

Appendix E shows additional univariate relativity calculations. You should review it at this point. It does include
credibility weighting, which is covered more in Chapter 12, but the credibility calculation should be easy
enough to follow at this point. As mentioned earlier, we must use normalized relativities when credibility
weighting relativities, as done in the Appendix. The Appendix E examples also calculated the revenue neutral
change (i.e., percent change with off-balance)—this concept will be covered in Chapter 14.

The primary weakness of these univariate methods is that they do not accurately account for the effects other
variables have on the variable being evaluated. If the evaluated variable is correlated with other variable(s)
then the results will be distorted. If, for example, younger drivers tend to drive older cars, the younger drivers' high frequency will distort the indicated relativity for older cars. The loss ratio and adjusted pure
premium methods partially adjust for these correlations, but only to the extent the true differences are
reflected in the current relativities.

If we make changes to the rate relativities the total premium collected may change. In this case we need to
adjust our base rate to account for this change in premium. This is referred to as base rate offsetting and is
covered in Chapter 14.


Basic Ratemaking

Chapter 10 – Multivariate Classification


Introduction

To more equitably price risks and avoid adverse selection, insurance companies implement classification
ratemaking. If they are able to more accurately estimate expected loss differentials they may experience
favorable selection. Additionally, accurate classification rates allow the insurer to write more risks profitably.

This chapter will cover some advancements to univariate methods (minimum bias procedures and other
sequential approaches) before covering multivariate rating techniques, of which generalized linear models
(GLMs) are a subset.

Advancements to univariate techniques

Minimum bias procedures

Minimum bias procedures were popular in the latter half of the prior century. They are essentially iterative
univariate approaches. There are different forms of the model, but each defines the style of rating structure
(e.g., additive, multiplicative, or both) and the statistic, or bias function, used to measure the accuracy of the
model (e.g., least squares, chi-square, maximum likelihood, or balance principle).

We then iteratively solve for the parameters that minimize the bias function. The text gives an example using
the balance principle bias function which sets the weighted loss cost of the observed data equal to the product
of the exposures, parameters, and base rate by iteratively solving for the parameters. I am not going to cover
the mechanics here since a computational question is unlikely to be asked (and a poor choice of topic to test in
my opinion), but there is a good example in the text if you wish to review (it would be very far down my list of
things to know, and would only be concerned with it once you are very comfortable with everything else,
otherwise your time is likely better spent elsewhere). One thing perhaps worth noting for the exam is if we re-
base our final parameters into relativities, we must correspondently adjust our base rate in the opposite
direction or else the equations will no longer be in balance.

It should be noted that while the minimum bias procedures are improvements over the univariate techniques,
they are not a true multivariate analysis. They also lack many of the desirable traits of multivariate analysis
that are covered later in this chapter. In some specific forms, the minimum bias method converges to the GLM
result though.

Sequential analysis

In sequential analyses we perform the standard one-way univariate analysis on the first variable selected then
use the resulting indicated relativities to adjust the exposures used for the next parameter (using the adjusted
pure premium approach), whose indicated relativities are used to adjust the exposures of the next variable
selected, and so on until all variables have been evaluated. Unlike the minimum bias procedures (which iterates
until convergence), we only perform one iteration per variable when using sequential analysis. This is the
rating methodology currently mandated for use in personal automobile insurance in the State of California.

The largest weakness of sequential analysis is that your solution will vary depending on the order you choose
to evaluate the rating variables (there is no closed form solution).

Multivariate analysis

As the name implies, multivariate analysis analyzes multiple variables at the same time, therefore accounting
for exposure correlations overlooked by univariate methods.


Adoption of multivariate methods

The three largest changes leading to the adoption of multivariate rating methods in recent decades were the
vast increase in computing power, improved data availability/warehouses, and most importantly competitive
pressure to keep up with early adopters (to avoid adverse selection). These revolutions took place roughly in
the 1990’s in the U.K. personal lines markets and 2000’s in the U.S. personal lines markets (many commercial
lines of business do not lend themselves well to multivariate analysis and thus use alternative methods as we
will see in Chapter 15).

Benefits of multivariate methods

The four largest benefits of multivariate methods (general, not necessarily GLM specific) are:

1) The primary benefit of multivariate methods is they consider all variables at the same time, thus
properly adjusting for any exposure correlations present.
2) Multivariate methods also attempt to separate the signal (systematic effects) from the noise
(unsystematic effects)—univariate methods pick up both the signal and the noise.
3) Multivariate methods produce model diagnostics which can be used to assess the fit and
appropriateness of the model.
4) Multivariate methods can model interactions—when one variable varies by another (this is different
than exposure correlations). An example interaction is age and gender in personal auto—at 45 years
of age males and females may be similar risks, but at age 16 males are significantly more risky than
females, so the gender variable is different by age (an exposure correlation would be that younger
drivers tend to drive older cars—note one has to do with exposures while the other has to do with
expected losses).

GLMs are a subset of multivariate models, so each of the above applies. In addition, GLMs have the benefit that
they are more transparent than many other multivariate methods. GLMs produce parameters and statistical
diagnostics for each explanatory variable modeled.

Univariate methods do not adjust for exposure correlations, they pick up both the signal and the noise, and do
not provide any statistical diagnostics. While minimum bias procedures are transparent and more accurate
than univariate methods, they are computationally intensive, do not provide statistical diagnostics, and are less
accurate than multivariate methods.

GLMs

Generalized linear models are now the industry standard for classification ratemaking in many countries. GLMs
are a special case of standard linear models. Standard linear models are multivariate, but have several
restraints that make them inappropriate for modeling insurance losses. Linear models are fit to predict the
response variable (Y) using explanatory/predictor variables (X) and parameters (B). Linear models take the
following form:

Y = B_0 + B_1 × X_1 + B_2 × X_2 + ⋯ + B_n × X_n + ε

where ε is the error term, which is assumed to be normally distributed with a mean of 0 and constant variance.

The model is solved by finding the parameters (B) that produce the observed data with the highest probability. This is solved using maximum likelihood (in the specific case of linear models, the maximum likelihood solution is equivalent to the solution found by minimizing the sum of squared errors). Due to the large volume of data, in practice this is done using Newton-Raphson algorithms that find the maximum of a function by finding a zero in the function's first derivative.

GLMs make adjustments to the standard linear model to remove the restriction that the error term is normally
distributed with a constant variance (the specifics and mechanics of these adjustments are beyond the scope
of this exam, but are covered in detail on Exam 8). GLMs also allow for a link function which defines the
relationship between the response variable (Y) and predictors (X). When modeling insurance loss data, a log


link function is commonly used as it results in multiplicative rating variables—as are standard for most
insurance products.

In addition to supplying a sufficient data set and defining the link function, we must also specify the distribution
underlying the random process in order to solve a GLM. We will want to use a distribution from the exponential
family (since the variance is a function of the mean)—Poisson is commonly used for frequency while Gamma
is commonly used for severity.

When using GLMs we will typically model loss data directly rather than loss ratios as is sometimes done when
using univariate models. This is because to model loss ratios we must on-level premium at the class level which
may be impractical. In addition, loss ratio models become obsolete once rating structures are changed, and
there is no commonly accepted distribution for modeling loss ratios. Due to loss ratios being dependent on
current rates, the actuary will not have a prior expectation of resulting relativities, which helps to decipher the
signal from the noise, as they do when modeling frequencies and severities. For those reasons we will almost
always model losses, rather than loss ratios, and in most cases frequency and severity separately.
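
As a rough illustration of the modeling step described above (this is not the text's example; the data, variable names, and levels below are invented), a Poisson GLM with a log link can be fit in Python with statsmodels, with exposure entering as a log offset so that exponentiated coefficients are multiplicative relativities:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Invented frequency data: one row per age group / gender cell
df = pd.DataFrame({
    "age_group":   ["16-21", "22-25", "26-35", "36-45", "46-55", "56+"] * 2,
    "gender":      ["M"] * 6 + ["F"] * 6,
    "exposure":    [120, 300, 900, 1500, 2000, 800] * 2,
    "claim_count": [30, 45, 80, 110, 120, 50,
                    22, 35, 70, 100, 115, 45],
})

# Dummy-encode the rating variables, dropping the chosen base levels (46-55, F)
X = pd.get_dummies(df[["age_group", "gender"]])
X = X.drop(columns=["age_group_46-55", "gender_F"]).astype(float)
X = sm.add_constant(X)

# Poisson family defaults to a log link; exposure is handled as an offset
model = sm.GLM(df["claim_count"], X,
               family=sm.families.Poisson(), exposure=df["exposure"])
result = model.fit()

relativities = np.exp(result.params)   # base levels are implicitly 1.0
print(relativities)
print(result.summary())                # standard errors, deviance, etc.
```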

The following is (made up sample) GLM output from a multiplicative model examining the Age factor’s effect
on personal auto collision coverage frequency. The result is a set of relativities—in this case the selected base
class is 46-55 (factor of 1.0). We want to select a base class that has a large exposure base so our results will be
more stable. We can see that the age group 22-25 is estimated to have a frequency that is 2.5 times as high as
the base class, all other variables considered (we must include all other variables of interest in the same GLM
model). The univariate one-way estimate is also shown—the large difference between the two suggests that
Age is correlated with another variable and the univariate results are distorted due to this correlation. The
graph also shows exposure on the secondary (right) axis, so we can visually see where the exposure lies within
each level of the variable.


Statistical diagnostics, practical tests, and model validation

Statistical significance was one of the important criteria for evaluating rating variables. One of the benefits of
GLMs are the statistical diagnostics available. We can use these statistics, as well as other practical tests, to
evaluate if each rating variable has a systematic effect on losses and should be included in our model (we will
examine each of these tests for each potential variable considered to determine if we should retain them in our
model or remove them).

In the prior section we saw a sample GLM output of the Age variable. Here I will take a step back and review
what we would have done prior to the modeling process to determine if that variable should have been included
in our model in the first place. There are three primary tests we will use to make this decision (along with judgment):

1) Evaluating standard errors (horizontal line test)


2) Deviance testing
3) Consistency of estimate

A commonly used statistical diagnostic for evaluating the systemic effects of a variable is the standard errors
calculation. The graph below shows the GLM estimate as well as plus and minus two standard errors—which
can be thought of as roughly a 95% confidence interval. In the Age variable example, our range is relatively
narrow and our factors have a strong slope to them, suggesting the Age variable is statistically significant.

If our standard errors are wide and largely straddling 1.0, our variable is likely mostly picking up noise, thus
not statistically significant.

The next graph shows the GLM estimate, along with plus and minus two standard errors, of the Number of Pets
variable which was originally included in my (imaginary) model. This test on the Number of Pets variable would
suggest there is no systemic effect from this variable on personal auto collision frequency, and it should be
excluded from the model. This should also be obvious judgmentally since intuitively the number of pets in the
home should not affect personal auto collision frequency.


Looking back at our Age graph, this test suggests that the variable estimates are not significantly different
starting at age 55 since the same value can be included within each standard error range (i.e., you can draw a
horizontal line that does not cross the standard errors). This would suggest while the variable is clearly very
predictive, we may want to further group some of the older age groups.

The second test measures deviance—a measure of goodness of fit that can be expressed as a single value. This
test will select a deviance measure (e.g., chi-square, F-test, AIC, BIC) which will measure goodness of fit while
also penalizing for additional variables since, all else equal, we want less variables in our model (i.e., an added
variable must add to the predictive power sufficiently to justify being included). We will then measure the
deviance of the model including and excluding the variable being evaluated—if the deviance measure is better
including the variable, then the test suggests we should keep it in the model. When using Chi-Square as the
measure, values between 5% and 30% are often thought to be inconclusive, while values below 5% indicate
the additional variable should be retained in the model, while values above 30% indicate the additional variable
should be excluded from the model.

The third test evaluates the variable's consistency. If we have multiple years we may choose to evaluate the
consistency of the estimate over time, otherwise we may select random subsets to compare. In our example we
have several years of data, so we will compare the Age estimates from each year 2013-2016. As we can see in
the following graph, the estimates are reasonably consistent by year despite a little bit of noise. This would
suggest keeping the variable in our model. Results that are not consistent would suggest too much variability
and exclusion from the model.


Additionally, we should use judgment in variable selection. In this example it is intuitive that personal auto
collision frequency would vary by driver age, so it passes the judgment test as well.

The final test discussed in the text is done after the modeling process is complete and used to validate the
results of the final model (remember, the first three tested if we should include a variable in our model). Prior
to the modeling process we will want to randomly exclude some of the available data from the modeled data
set so we can use it later to test and validate the model—this is called the holdout dataset. We will build our
model using the modeled data set, then compare the model’s results to the results of the holdout sample. The
more closely our model’s estimates match the holdout data set, the better our model validates.

The next graph shows the results of our model by variable, as compared to the observed values in the holdout
dataset. The results are very good, giving us confidence our model performs well (measuring and defining “very
good” is beyond the scope of the exam). Additionally, we could instead order each observation in the holdout
data set by predicted value from low to high (rather than by variable) and compare predicted values to
observed values (there is an example of this in Appendix F).


Large differences between predicted and holdout results may indicate that the model is over or underfitted.
If the model retains too many variables, thus modeling the unsystematic effects (noise), or over-specifies the
model, the result will be overfit—it will be good at predicting the modeled dataset itself (because it has the
same noise), but not at predicting future outcomes or the holdout set well. If underfit, the model is missing
important statistical effects (the signal, or systematic effects). An underfit model may predict the overall mean
reasonably well but will not give us much additional useful information about differences between levels.

The following are rough visual representations of over and underfitting.

Practical Considerations

In practice, GLMs are developed using statistical software such as R, SAS, or other commercially available
packages. Additional areas the actuary should focus attention include obtaining data with sufficient detail and
accuracy (to avoid GIGO: garbage in, garbage out), identifying anomalies worth exploring further, reviewing
results both statistically and with a business sense, and creating appropriate communication of results.


Other multivariate techniques

Other multivariate techniques (the text calls them “data mining” techniques) that can be used to supplement
our GLM are discussed very briefly. The primary enhancements these models can add to our modeling process
are:

1) Selecting the variables we want to include in our model.


2) Selecting the segments within rating variables (e.g., should we group ages 50-53 or 49-54, etc.).
3) Reducing the number of variables used by creating combined variables.
4) Identifying potential interactions between rating variables.
5) Picking up additional “signal” missed by the GLM.

Factor analysis (e.g., principal components)

Factor analysis reduces the number of variables needed in a classification analysis by using a linear
combination of the original variables. One example is the use of vehicle symbol, which is a combination of
vehicle characteristics such as vehicle weight, height, cylinders, horsepower, cost, etc. Another example would
be combining geo-demographic data such as population density, average age of home, homeownership rates,
etc. into a single variable.

Cluster analysis

Cluster analysis is used to combine segments into optimal groupings, or clusters. For example, we may want to
cluster postal (zip) codes into larger territories. The goal is to create groups that minimize within group
variance, while maximizing between group variance (i.e., homogeneous within groups, heterogeneous between
groups). The grouped segments can then be an input into our model.

CART (classification and regression trees)

CART models create if-then like decision trees that can help improve classification through identifying the
strongest list of initial variables, helping to determine levels within each variable, and in identifying potential
interactions between variables.

MARS (multivariate adaptive regression spline)

MARS models are used to select breakpoints for categorizing continuous variables into classification buckets,
and in identifying potential interactions between variables.

Neural networks

Neural networks are advanced models that use training algorithms to automatically learn the structure of the
data. In the insurance world they are often used to pick up additional “signal” in the data that is missed by the
primary GLM.

Supplementing the modeling process with external data

Valuable external data that can be combined with the company’s insurance data for loss modeling purposes
includes:

1) Geo-demographic data such as population density or crime information.


2) Weather data such as average rainfall or number of days below freezing.
3) Property characteristics such as square footage, fire department response time/quality.
4) Information about the insured or business, such as credit information or occupation.


Basic Ratemaking

Chapter 11 – Special Classification


Introduction

There are certain classifications which have unique characteristics and qualities that need special
consideration. Actuaries have developed special rating processes to accurately estimate these variables. The
classifications requiring special attention discussed in this chapter are:

1) Territorial ratemaking
2) Increased limit factors (ILFs)
3) Deductibles
4) Workers compensation size of risk
5) Insurance to value (ITV)

Territorial ratemaking

Geography is one of the most well established and widely used rating variables in insurance. Challenges in
territorial ratemaking include deciding on the territorial bounds, and estimating territorial relativities since
territory tends to be highly correlated with other variables (e.g., high-value homes tend to be located in the
same areas).

The first step in defining territorial bounds is to decide on a basic geographical unit (e.g., zip/postal code, or
census blocks) which will be grouped into larger territories. Once the basic geographical units are defined we
need to estimate the geographic effects of each on expected losses. This can be difficult since losses by basic
unit tend to be volatile and highly correlated with other variables. As such, univariate methods do not work
with geography due to the correlated exposures and inability to separate the signal from the noise.

We only want to pick up the effects on loss due to geography, so we should use a multivariate model as it will
account for exposure correlations and do a better job of separating the signal from the noise in the volatile loss
data. The multivariate results for the geographical variables will be picking up only geographical differences,
as we want (assuming all other variables are included in the model). We can supplement our geographical
analysis by including third-party non-insurance data such as average rainfall or temperature data—the various
geographical characteristic estimates can be combined into one composite risk level for each geographical unit.

At this point we will have an estimate for the geographical effects on loss for each of our basic geographical
units (e.g., zip codes or census blocks). We know geographic risks tend to be similar for units that are close to
each other, and the resulting estimates from the prior steps may still be volatile, so we may want to use special
smoothing techniques to improve our estimates by using information from neighboring units. One option is
distance-based smoothing, which smooths between units by assigning credibility to other units' data based on
the distance between the units. Distance-based smoothing has the advantage of being easy to understand and
implement, but has the disadvantage of assuming distance has the same effect in rural and urban areas, and it
ignores any natural or artificial boundaries (e.g., rivers or highways). Adjacency-based smoothing is similar,
but it assigns credibility based on the adjacency of units, regardless of distance. Adjacency-based smoothing
better handles urban and rural differences, and better accounts for natural or artificial boundaries. We need to
take care not to over smooth (flatten out differences too much) or under smooth (leave too much variation in
estimates). These smoothing techniques can be applied either to the results (output) of our multivariate
geographical model, or to the geographical residuals within our multivariate analysis, thus picking up any
signal missed by the model.

Now we have a smoothed estimate for each geographic unit. We then want to group these small units into
territories (done historically; some companies now rate geographically at a very refined level). The goal is to
group units into territories that have similar risks within each territory (homogeneity within groups) while
maximizing differences between groups (heterogeneity between groups). There are clustering techniques that
achieve these goals by grouping units into territories with similar estimates, or with equal numbers of
observations (or exposures). The resulting territories will not necessarily be contiguous (adjacent to each
other)—if we desire contiguity in our territories we need to add constraints to our clustering routine. Absent
a natural or artificial boundary, we would expect gradual changes between adjacent territories. If we create too
few territories (clusters) there may be significant jumps between our territories—we want to select the
number of territories (clusters) that minimize noise in our estimate without creating large discontinuities.

At this point we have created optimal territories. Now we can include the final territories in our final rating
model. In practice this will likely be a GLM, which will naturally account for the exposure correlations between
our final territories and other rating variables (again, these correlations make using univariate methods
inappropriate).

Increased limit factors (ILFs)

Lines of business that provide third party liability coverage are usually offered with different amounts of
coverage—called limits. The lowest limit is generally called the basic limit, while higher limits are called
increased limits. Estimating rate differentials for increased limits is becoming more important because:

1) As individual wealth grows, individuals need more insurance coverage.


2) General monetary inflation increases costs, and inflation has a leveraged effect on higher limits (as
discussed in Chapter 6).
3) Litigation and jury awards have increased (social inflation), which like general inflation, has a
leveraged effect on higher limits.

Lines of business where increased limit pricing is of particular importance include:

1) Automobile insurance (both personal and commercial)


2) Umbrella
3) Commercial liability (e.g., contractor’s liability, professional liability, etc.)

Limits can either be single limit, or split limit. An example of a split limit is 100,000/300,000, where the limit
for a single claim is 100,000 while the limit for the entire policy is 300,000 (in the case there are multiple
claims)—this is called an occurrence/aggregate limit. Split limits can also be claimant/claim, where the lower
limit applies to each claimant, while the higher limit applies to the entire claim. Pricing split limits is beyond
the scope of the exam—we will focus on single limits.

We can estimate ILFs within a multivariate rating model, but losses in higher layers are typically less common
and very volatile, leading to volatile estimates. As such, actuaries have developed special techniques to estimate
ILFs. The following is the univariate pure premium method with some additional assumptions and
enhancements (i.e., adding losses by layer so as to use all available data).

An increased limits factor is a rating relativity like any other rating variable. Assuming a base rate (at the
basic/lowest limit) we can define the rate at limit H as:

Rate(H) = Base Rate × ILF(H)

Assuming all underwriting expenses are variable, and that variable expenses and profit provisions do not vary
by limit, we can derive the ILF rate differentials as we did in Chapter 9 using the pure premium approach:

ILF(H) = Rate(H) / Rate(B) = E[Loss + ALAE at limit H] / E[Loss + ALAE at limit B]

We may also assume that frequency and severity are independent, so:


ILF(H) = (Frequency(H) × Severity(H)) / (Frequency(B) × Severity(B))


Finally, we make the assumption that frequency is the same for each limit, causing it to cancel out of our
equation. Also, severity limited at H is often referred to as the limited average severity (LAS), resulting in the
final simplified ILF formula:

ILF(H) = Severity(H) / Severity(B) = LAS(H) / LAS(B)

In reality, frequency often does vary by limit due to self-selection. For example, in personal auto, higher
limits tend to have lower frequencies. This is possibly due to more risk averse individuals selecting higher
limits, with this risk aversion also present in their driving habits (I would theorize that those who select higher
limits will generally be individuals with more wealth to protect, who, as compared to the whole population, will
be older and safer drivers). Additionally, we will often want to vary the profit provision by limit, pricing to a
higher profit level for higher limits to account for the additional volatility and risk we are undertaking by
writing these higher limit policies (to reflect this difference we would need to go back to the long form of the
pure premium classification method—but that isn’t discussed in the text).

If we know the ground-up uncensored (unlimited) losses, calculating the LAS at each level is relatively
straightforward—we limit each claim at the appropriate level and calculate the average severity. The following
table gives loss information grouped by size of loss, assuming $50,000 is the basic limit. When calculating the
$50,000 LAS, we know each claim in the $0-$50,000 bucket is less than $50,000 so no limiting is necessary,
while each claim in the higher buckets is greater than $50,000, so we limit each to $50,000 by taking $50,000
multiplied by the counts. The $50,000 LAS is then the sum of the limited losses divided by counts, or $39,448.
Calculating the $250,000 LAS is similar, for the first three buckets we know each claim is less than $250,000 so
no limiting is necessary, while each claim in the two highest buckets is greater than $250,000 so we take
$250,000 times the number of counts. The sum of the limited severities is divided by the number of counts to
find an LAS of $83,498. The $250,000 ILF is then $83,498/$39,448=2.117 (relative to the base limit of $50,000).
The other LAS and ILFs are calculated similarly.

                       Reported     Ground-Up Reported    Limited to     Limited to     Limited to     Limited to
Size of Loss           Claims       Losses                $50,000        $100,000       $250,000       $500,000
(1)                    (2)          (3)
$0 - $50,000 3,511 $93,445,209 $93,445,209 $93,445,209 $93,445,209 $93,445,209
$50,000 - $100,000 2,469 $192,479,194 $123,450,000 $192,479,194 $192,479,194 $192,479,194
$100,000 - $250,000 1,195 $212,276,837 $59,750,000 $119,500,000 $212,276,837 $212,276,837
$250,000 - $500,000 502 $190,605,068 $25,100,000 $50,200,000 $125,500,000 $190,605,068
$500,000 Above 104 $102,530,792 $5,200,000 $10,400,000 $26,000,000 $52,000,000
Total 7,781 $791,337,100 $306,945,209 $466,024,403 $649,701,241 $740,806,308

Counts 7,781 7,781 7,781 7,781


LAS $39,448 $59,893 $83,498 $95,207

ILF ($100,000) = $59,893 / $39,448 = 1.518


ILF ($250,000) = $83,498 / $39,448 = 2.117
ILF ($500,000) = $95,207 / $39,448 = 2.413
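
For those who like to check these mechanics numerically, the following is a minimal Python sketch of the ground-up LAS/ILF calculation above (the bucket data is copied from the table; the function name and layout are illustrative only, and the sketch assumes, as here, that each evaluation limit lines up with a bucket boundary):

# Size-of-loss buckets: (lower bound, upper bound, reported claims, reported losses)
buckets = [
    (0,       50_000,        3_511,  93_445_209),
    (50_000,  100_000,       2_469, 192_479_194),
    (100_000, 250_000,       1_195, 212_276_837),
    (250_000, 500_000,         502, 190_605_068),
    (500_000, float("inf"),    104, 102_530_792),
]

def las(limit):
    # Limited average severity: buckets entirely below the limit keep their full losses,
    # buckets above the limit contribute the limit times their claim count.
    limited = sum(loss if hi <= limit else n * limit for lo, hi, n, loss in buckets)
    claims = sum(n for _, _, n, _ in buckets)
    return limited / claims

base = las(50_000)                       # ~39,448
for lim in (100_000, 250_000, 500_000):
    print(f"LAS({lim:,}) = {las(lim):,.0f}   ILF = {las(lim) / base:.3f}")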

If losses are censored above the policy limits (e.g., the claim database only includes losses up to the limit thus
the full amount of loss is not captured) the calculation isn’t as straightforward. We cannot use claim data from
policies with limits lower than the limit we are evaluating (or the top of the layer being evaluated, as you will
see in a moment). Assume we are given the following loss data split by policy limit (i.e., losses incurred in each
bucket, censored by the limit of the underlying policy).


                        $50,000 Limit             $100,000 Limit             $250,000 Limit
Size of Loss            Claims    Losses          Claims    Losses           Claims    Losses
$0 - $50,000            149       $4,864,999      331       $8,336,235       375       $9,804,375
$50,000 - $100,000      -         -               221       $17,270,045      273       $20,515,404
$100,000 - $250,000     -         -               -         -                151       $29,468,254
Total                   149       $4,864,999      552       $25,606,280      799       $59,788,033

To calculate the $50,000 LAS we can use all of the data given. We sum the amounts less than $50,000 and add
$50,000 times the number of counts greater than $50,000, divided by the total counts.

LAS($50k) = [4,864,999 + 8,336,235 + 9,804,375 + (221 + 273 + 151) × 50,000] / (149 + 552 + 799) = 55,255,609 / 1,500 = $36,837

To calculate the $100,000 LAS we cannot use data from policies with a $50,000 limit since we cannot see losses
that occurred between $50,000 and $100,000. Your first instinct might be to use only data censored at $100,000
and above, but in doing so we would be missing out on a lot of information from the losses at lower layers. To
maximize the information gained from the data, we will calculate the LAS in layers, starting with the lowest,
and add them together to find higher LAS. For example, to calculate the $100,000 LAS using the data above, we
would first calculate the $50,000 LAS (as done above), then calculate the LAS in the layer $50,000-$100,000
and add it to the $50,000 LAS— thus retaining all of the info from policies with $50,000 limits, while developing
the additional layer’s estimate using all available data for that layer.

When calculating the LAS in the layer between $50,000 and $100,000 we have 1,351 (552+799) claims in our
usable data set, since we cannot use the 149 claims from the policies with a $50,000 limit. We want the LAS for
the layer between $50,000 and $100,000 (not ground-up), so we must remove the first $50,000 from every
claim in the layer. That means the 706 (331+375) claims in the first row equal $0. The $50,000 to $100,000
group obviously includes losses above $50,000, but we must remove the first $50,000 from each claim, so we
subtract $50,000 times the counts in that layer. The 151 losses in the $100,000 to $250,000 size group also
contributed to the $50,000 to $100,000 layer, but each has pierced the top of the layer, thus each contributes
$50,000 (the width of the layer) to the layer—this is the last term below. We divide by the total counts
(excluding the policies with a $50,000 limit, since they are censored and therefore not in our usable data set
for this layer) to find the LAS, because we want the unconditional average contribution to the layer from all claims:

LAS($50k − $100k) = [17,270,045 + 20,515,404 − (221 + 273) × 50,000 + 151 × 50,000] / (552 + 799) = 20,635,449 / 1,351 = $15,274
If we had divided by only the counts in the layer, we would have had a conditional expected value (conditional
on being in the layer), but we need an unconditional overall expected value for claims in the layer (as we
calculated above). Calculating the layers LAS is easiest as done above, but if we had calculated the conditional
expected value (or are given it on the exam) we can find the unconditional expected value by multiplying the
conditional expected value by the probability of being in that layer:

(221+273+151) / (331+375+221+273 +151) = 645/1,351 = 47.7%

As can be seen from the original solution, for this calculation we want 1,351 in the denominator, but the
conditional LAS would have had 645 in the denominator (dividing by just the claims that are above $50,000),
so we would multiply the conditional expected value by 645/1,351 (the probability above)—the 645 cancels out
the conditional denominator while the 1,351 gives us the denominator we want, resulting in the unconditional
expected value.

Now that we have the LAS of the basic level ($50,000) and the LAS of the next layer ($50,000-$100,000)— we
can find the $100,000 LAS as the sum of the two:

LAS($100k) = LAS($50k) + LAS($50k − $100k) = $36,837 + $15,274 = $52,111

Similarly, to find the $250,000 LAS we must first find the LAS in the layer between $100,000 and $250,000.
Only data with a $250,000 policy limit can be used since we cannot use claims that are censored below the top
of the layer—therefore we have 799 claims in our useable dataset. Of those claims, only 151 are in the layer—
none of which pierce the top of the layer, so we include the full amount, minus the first $100,000 of each claim.
The LAS in this layer is then:

LAS($100k − $250k) = (29,468,254 − 151 × 100,000) / 799 = 14,368,254 / 799 = $17,983
The same comments apply as before with regards to the conditional versus unconditional LAS. Since we divided
by all of the claims in our usable dataset we have the unconditional LAS, as we want. We can then find the
$250,000 LAS as:

LAS($250k) = LAS($100k) + LAS($100k − $250k) = $52,111 + $17,983 = $70,094

The ILFs for $100,000 and $250,000, with a $50,000 basic limit, are then:

ILF($100,000) = LAS($100k)/LAS($50k) = 52,111/36,837 = 1.415

ILF($250,000) = LAS($250k)/LAS($50k) = 70,094/36,837 = 1.903
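
A minimal Python sketch of the same layered calculation, using the censored data above (the data layout and function name are illustrative; the key points are that a layer can only use policies whose limit is at least the top of that layer, and that we divide by all usable claims to get the unconditional layer LAS):

# Claims and losses by size-of-loss band, keyed by policy limit:
data = {
     50_000: [(0, 50_000, 149,  4_864_999)],
    100_000: [(0, 50_000, 331,  8_336_235), (50_000, 100_000, 221, 17_270_045)],
    250_000: [(0, 50_000, 375,  9_804_375), (50_000, 100_000, 273, 20_515_404),
              (100_000, 250_000, 151, 29_468_254)],
}

def layer_las(layer_lo, layer_hi):
    contrib, claims = 0.0, 0
    for pol_limit, bands in data.items():
        if pol_limit < layer_hi:
            continue                                   # censored below the layer top -> unusable
        for lo, hi, n, loss in bands:
            claims += n                                # divide by ALL usable claims (unconditional)
            if hi <= layer_lo:
                continue                               # claims entirely below the layer contribute $0
            elif hi <= layer_hi:
                contrib += loss - n * layer_lo         # in the layer: strip the first layer_lo from each claim
            else:
                contrib += n * (layer_hi - layer_lo)   # pierces the top: contributes the full layer width
    return contrib / claims

las_50k  = layer_las(0, 50_000)                        # ~36,837
las_100k = las_50k + layer_las(50_000, 100_000)        # ~52,111
las_250k = las_100k + layer_las(100_000, 250_000)      # ~70,094
print(round(las_100k / las_50k, 3), round(las_250k / las_50k, 3))   # ~1.415, ~1.903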

We have now seen how to calculate LAS and ILFs when we have ground-up uncensored data and data censored
by the policy limits.

We should note that we are estimating ILFs for the prospective policy period, so we want to use historical losses
developed and trended to the prospective period. As discussed in Chapter 6, the leveraged effect of trend at
higher limits makes using trended ultimate losses especially important when calculating ILFs. Assuming a
positive trend rate, the trend experienced in higher limits will be more than in lower limits.

Since losses in higher limits may be volatile, we may instead choose to fit a severity distribution (e.g., Pareto,
Lognormal, Truncated Pareto) to our losses. Once we have our severity distribution, f(x), we can find the
limited average severity using the following formula, where the first term is the expected severity for all claims
less than the limit and the second is the limit multiplied by the probability of exceeding the limit.

LAS(H) = ∫_0^H x f(x) dx + H × [1 − F(H)]

We can then calculate the ILFs as:

ILF(H) = LAS(H) / LAS(B) = [∫_0^H x f(x) dx + H ∫_H^∞ f(x) dx] / [∫_0^B x f(x) dx + B ∫_B^∞ f(x) dx]
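
As a rough illustration of this approach, the following sketch skips the fitting step and simply assumes an illustrative lognormal severity (the parameters are made up, not from the text), then evaluates the LAS formula numerically with scipy:

from scipy import stats
from scipy.integrate import quad

sev = stats.lognorm(s=1.5, scale=25_000)     # assumed (illustrative) fitted severity distribution

def las(h):
    # expected severity below the limit, plus the limit times the probability of exceeding it
    below, _ = quad(lambda x: x * sev.pdf(x), 0, h)
    return below + h * sev.sf(h)

base = las(50_000)
for h in (100_000, 250_000, 500_000):
    print(f"ILF({h:,}) = {las(h) / base:.3f}")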

As stated earlier, we can estimate ILFs within our GLM model as well, but results may be volatile due to sparse
loss data in higher limits. The GLM estimate will not assume that the frequency is the same for all risks, as the
univariate model does—therefore the GLM model will be influenced by the behavior of insureds, which may
result in counterintuitive results. We may want to calculate ILFs both ways, then make final selections.


Deductibles

In the case of a deductible, the insured is responsible for the first dollars of loss, up to the deductible. Four
primary reasons deductibles are popular among both insurers and insureds are:

1) They reduce premium by transferring some of the risk from the insurance company to the insured.
2) They eliminate many small nuisance claims—with no deductible insureds have an incentive to report
all small claims, driving up the insurance company’s loss adjustment expenses (a small claim will often
result in more ALAE than loss payment).
3) They provide incentives for the insured to control losses.
4) They allow the insurance company to control catastrophic exposure—the insurance company may
require a large catastrophe deductible in high risk areas.

The two primary types of deductibles are flat dollar deductibles (e.g., $5,000) and percentage deductibles (e.g.,
5.0%). In personal lines, deductibles are generally low, but commercial lines deductibles can be rather large.

We can estimate deductible relativities using multivariate models, or use the loss elimination ratio (LER)
approach covered next. Assuming all expenses are variable, and that variable expenses and profit provisions
do not vary by deductible, we can derive the deductible rate differentials as we did in Chapter 9 using the pure
premium approach:

Indicated Deductible Relativity(D) = E[Loss + LAE with deductible D] / E[Loss + LAE with no deductible]

In this case, we assume the base level is no deductible (i.e., total ground-up loss & LAE). In the LER approach,
we calculate the amount of loss and LAE eliminated by the deductible:

& [ + ] −[ + ]
( )= =
& [ + ]

re-arranging variables:

LER(D) × E[L + LAE]_0 = E[L + LAE]_0 − E[L + LAE]_D

E[L + LAE]_D = E[L + LAE]_0 − LER(D) × E[L + LAE]_0 = E[L + LAE]_0 × (1 − LER(D))

Plugging this back into the indicated deductible relativity formula from above:

Deductible Relativity(D) = E[L + LAE]_0 × (1 − LER(D)) / E[L + LAE]_0 = 1 − LER(D)

This should be intuitive, since the more loss eliminated by the deductible, the lower the deductible relativity
will be.

If ground-up (first dollar) losses are known for all claims, calculating the LER and thus the indicated deductible
relativity is straightforward. The text gives an odd formula to calculate the LER from ground up dollars—you
would be rather unlikely to use the formula as given in the text directly, but I will put some intuition behind it
and re-arrange it into the form we would actually use (and what they used in their example). The text formula
for LER is given as:

LER(D) = 1.0 − Σ max[0, (Loss_i − D)] / Σ Loss_i

In words, this says the loss elimination ratio is equal to one minus the ratio of loss dollars not eliminated by the
deductible divided by total losses—which is certainly true, but not how we will solve most problems. More
intuitively, we will generally solve for the LER simply as the dollars eliminated by the deductible divided by
total losses (as stated in the first LER formula a couple paragraphs back).

Keep in mind, the losses eliminated by the deductible are akin to the losses below a limit—the difference being
who pays for each piece. With a loss limit the insurance company pays losses from $0 up to the limit while the
insured retains anything in excess of the limit. With a deductible the insured covers losses from $0 up to the
deductible while the insurance company covers anything in excess of the deductible. As such, calculating the LER is
very similar to calculating the LAS (except when calculating the LER we don't split out frequency and severity
then cancel frequency out by assuming it doesn't vary by limit; thus we use total loss dollars, not severities).

The following example assumes we know the ground-up (unlimited) losses, summarized by size of loss in the
following table. The $500 deductible eliminates all losses below $500, thus the first three rows of Col (4) are
simply equal to Col (3). For losses greater than $500, the $500 deductible simply eliminates the first $500 of
loss. Therefore Col (4) is equal to $500 times the number of reported claims in each of the layers greater than
$500. We can then sum the total losses eliminated by the $500 deductible and divide by ground-up losses
to find the LER. We calculate an LER of 22.8% for the $500 deductible—therefore the $500 deductible rate
relativity is 1.000-0.228 = 0.772. I purposely used the same data as the text example, so we could compare our
calculated $500 LER to the $250 LER calculated by the text. The text found the $250 LER to be 13.5% implying
a 0.865 rate relativity, which is consistent with our calculation (we would expect the LER to be smaller for
smaller deductibles).
                        Reported     Ground-Up Reported    Losses Eliminated
Size of Loss            Claims       Losses                by $500 Ded.
(1)                     (2)          (3)                   (4)
$0 - $100 3,200 $240,365 $240,365
$100 - $250 1,225 $207,588 $207,588
$250 - $500 1,187 $463,954 $463,954
$500 - $1,000 1,845 $1,551,938 $922,500
$1,000 - Above 2,543 $11,140,545 $1,271,500
Total 10,000 $13,604,390 $3,105,907
LER 22.8%
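
A minimal Python sketch of this ground-up LER calculation (the data is copied from the table; the helper name is illustrative and, as above, the deductible is assumed to line up with a bucket boundary):

# Size-of-loss buckets: (lower bound, upper bound, reported claims, reported losses)
buckets = [
    (0,     100,           3_200,    240_365),
    (100,   250,           1_225,    207_588),
    (250,   500,           1_187,    463_954),
    (500,   1_000,         1_845,  1_551_938),
    (1_000, float("inf"),  2_543, 11_140_545),
]

def ler(ded):
    # Buckets entirely below the deductible are fully eliminated; higher buckets
    # lose the first `ded` dollars of each claim.
    eliminated = sum(loss if hi <= ded else n * ded for lo, hi, n, loss in buckets)
    total = sum(loss for _, _, _, loss in buckets)
    return eliminated / total

print(f"LER($500) = {ler(500):.1%}   relativity = {1 - ler(500):.3f}")   # ~22.8%, ~0.772
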
In practice, we are unlikely to have ground-up data as above (except from our $0 deductible policies), our data
will be censored from below (e.g., if we have a $5,000 deductible we will never see the losses occurring below
$5,000 which might have otherwise been reported with a lower deductible). This means when we are
evaluating a deductible, we can only use data from policies with the same or lower deductible. The text uses
the term net losses to represent losses net (after application) of the deductible.
The following example shows the LER calculation for moving from a $500 deductible to a $1,000 deductible.
The following data table shows the net reported losses assuming the given deductible (columns) from each actual
deductible level (rows). As such, the losses net of the given deductible (columns) are only calculable for deductibles
greater than or equal to the actual deductible (rows). We first calculate the amount of losses eliminated by
moving from a $500 deductible to a $1,000 deductible (i.e., net losses with a $500 deductible minus net losses
with a $1,000 deductible, using data from policies with actual deductibles ≤ $500). We then divide this amount
by the total dollars at the initial deductible, $500 in this case, resulting in 14.4% of the losses being removed
due to the increase in deductible. We can add this LER to the $500 LER to find the total $1,000 deductible LER.
We can calculate the LER between any of the deductibles below, making sure to use the appropriate data (e.g.,
only data from policies with deductibles equal to or less than the smallest deductible in the layer being
analyzed).


From Policies       Net Reported Losses Assuming:                                                             Losses Eliminated
with Deductible     $0 Ded.      $100 Ded.     $250 Ded.     $500 Ded.     $750 Ded.     $1,000 Ded.          $500 to $1,000
$0                  $950,250     $836,220      $781,029      $697,069      $641,303      $596,412             $100,657
$100                             $1,368,360    $1,278,048    $1,140,658    $1,049,405    $975,947             $164,711
$250                                           $2,364,526    $2,110,340    $1,941,512    $1,805,607           $304,733
$500                                                         $2,942,032    $2,706,669    $2,517,203           $424,829
$750                                                                       $1,570,438    $1,460,507           na
$1,000                                                                                   $925,804             na
Losses Eliminated moving from $500 to $1,000 Deductible: (a) $994,930
Total Losses at $500 Deductible: (b) $6,890,098
LER = (a) / (b) = 14.4%
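
A minimal Python sketch of this calculation (the dictionary layout mirrors the table above and is purely illustrative; the key point is that only policies with an actual deductible at or below the starting deductible are usable):

# net[actual deductible] = {evaluation deductible: net reported losses}
net = {
        0: {0: 950_250, 100: 836_220, 250: 781_029, 500: 697_069, 750: 641_303, 1_000: 596_412},
      100: {100: 1_368_360, 250: 1_278_048, 500: 1_140_658, 750: 1_049_405, 1_000: 975_947},
      250: {250: 2_364_526, 500: 2_110_340, 750: 1_941_512, 1_000: 1_805_607},
      500: {500: 2_942_032, 750: 2_706_669, 1_000: 2_517_203},
      750: {750: 1_570_438, 1_000: 1_460_507},
    1_000: {1_000: 925_804},
}

def layer_ler(ded_from, ded_to):
    usable = [v for ded, v in net.items() if ded <= ded_from]   # censored data cannot be used
    eliminated = sum(v[ded_from] - v[ded_to] for v in usable)   # losses removed by the higher deductible
    base = sum(v[ded_from] for v in usable)                     # total losses at the starting deductible
    return eliminated / base

print(f"LER($500 -> $1,000) = {layer_ler(500, 1_000):.1%}")     # ~14.4%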

We can also calculate the LER by fitting a distribution to expected losses (not severity as done in the ILF
method). Once we have our loss distribution, f(x), we can find the expected losses eliminated by the deductible
using the following formula. The first term is the expected loss amount for all claims less than the deductible
and the second is the deductible multiplied by the probability of exceeding the deductible.

∫_0^D x f(x) dx + D × [1 − F(D)]

The LER is then the losses eliminated by the deductible (above) divided by total expected losses:

LER(D) = [∫_0^D x f(x) dx + D ∫_D^∞ f(x) dx] / ∫_0^∞ x f(x) dx

Then, as always, the deductible relativity for each level D is 1.0 – LER(D).

The LER approach assumes behavior will be the same at each deductible, but in practice the deductible may
affect insured behavior. For example, an insured with a $100 deductible would likely file a claim for a $2,100
loss, but an insured with a $2,000 deductible may not as they may assume the $100 claim payment would not
be worth any resulting premium increase. Additionally, if insureds are allowed to select their own deductible,
lower risk insureds tend to select higher deductibles, likely because they realize they are less likely to have a
claim and have the means and are willing to accept the risk that comes with a higher deductible. Multivariate
methods will reflect these behavioral differences—we should compare both the multivariate and LER indicated
relativities when selecting the final deductible relativities.

When setting deductible relativities we must keep in mind the practical effects as well. For example, if our $500
deductible relativity is 0.80, an individual with a $3,000 premium and no deductible would save $600 by going
to a $500 deductible—a savings of more than the deductible itself. Therefore, the insured would be better off
with the higher deductible unless they expect to have multiple claims. As a result, the insurance company may
set a cap on the dollar credit for moving deductibles (e.g., max credit for moving from $0 to $500 deductible is
$300), or vary the deductible relativities by size of risk. Additionally, a percentage deductible will naturally
address this issue.


Workers compensation size of risk

Workers compensation has a very simple rating algorithm (with refinements made based on experience, as
covered in Chapter 15). One item not accounted for is the size of the risk written—in practice both expected
losses and expenses can vary by the size of the insured.

Expenses

As mentioned in Chapter 7 on expenses, commercial lines often assume all underwriting expenses are variable,
but in reality some portion of the underwriting expenses will be fixed. Assuming all expenses are variable will
overstate the premium needed for large policies and understate the premium needed for small policies (since all
expenses in the rating process will vary in direct proportion to premium, rather than some portion being
fixed). Companies can correct for this premium inequity in the following ways:

1) Calculate a variable expense provision that applies only to the first $5,000 of premium, thus charging
those policies with premium greater than $5,000 a lower percentage for variable expenses.
2) Include an expense constant for the expenses that do not vary by the size of risk (e.g., administrative
expenses). This has the same effect as including a fixed expense provision, thus reducing the percentage
of premium charged for larger risks.
3) Apply a premium discount to policies with premium above certain amounts (as shown in the following
example).

To illustrate a premium discount calculation, we will use the information provided in the following table to
calculate the appropriate discount to apply to each layer of premium, Col (8), and the premium discount in
dollars, Col (9). We will examine a risk that has $600,000 in premium prior to premium discounts—Col (1)
splits this premium into each layer. The expenses are shown in Col (2) – (5)—the acquisition expenses
(including both commission/brokerage and other acquisition) and general expenses decrease as a percentage
of premium for larger risks, while taxes and profit are constant. Col (6) is the sum of the expenses in Col (2) -
(5). Col (7) is the amount expenses are lower in the given layer than in the first layer. Col (8) is the discount
percentage, and is calculated as the expense reduction divided by 1.0 minus the taxes and profit—this is done
to reflect the fact when the other expenses are reduced, the fully variable items associated with those expenses
(i.e., taxes and profit) are reduced too. The dollar discount is then the product of the discount percentage in
each layer and premium in each layer. The final premium is then $523,333 (the original premium less the
discount), with overall discount of 12.78%.

                        Premium      Acquisition   General                          Total       Expense      Discount    Premium
Premium Range           in Range     Expenses      Expense     Taxes     Profit     Expenses    Reduction    %           Discount
                        (1)          (2)           (3)         (4)       (5)        (6)         (7)          (8)         (9)
$0 - $10,000 $10,000 15.0% 12.0% 4.0% 6.0% 37.0% 0.0% 0.0% $0
$10,000 - $50,000 $40,000 12.0% 10.0% 4.0% 6.0% 32.0% 5.0% 5.6% $2,222
$50,000 - $150,000 $100,000 10.0% 8.0% 4.0% 6.0% 28.0% 9.0% 10.0% $10,000
$150,000 - $500,000 $350,000 9.0% 6.0% 4.0% 6.0% 25.0% 12.0% 13.3% $46,667
$500,000 - above $100,000 7.0% 4.0% 4.0% 6.0% 21.0% 16.0% 17.8% $17,778
Total $76,667
(6) = Sum (2) - (5)
(7) = (6) Base - (6)
(8) = (7) / [1.0- (4) - (5)]
(9) = (1) * (8)
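
A minimal Python sketch of the premium discount calculation above (the layer bounds and expense provisions are copied from the table; the function name is illustrative):

# (layer lower bound, layer upper bound, acquisition %, general %, taxes %, profit %)
layers = [
    (0,        10_000,        0.15, 0.12, 0.04, 0.06),
    (10_000,   50_000,        0.12, 0.10, 0.04, 0.06),
    (50_000,   150_000,       0.10, 0.08, 0.04, 0.06),
    (150_000,  500_000,       0.09, 0.06, 0.04, 0.06),
    (500_000,  float("inf"),  0.07, 0.04, 0.04, 0.06),
]

def premium_discount(standard_premium):
    base_expense = sum(layers[0][2:])                          # total expense ratio in the first layer
    discount = 0.0
    for lo, hi, acq, gen, tax, profit in layers:
        prem_in_layer = max(0.0, min(standard_premium, hi) - lo)
        reduction = base_expense - (acq + gen + tax + profit)  # Col (7)
        discount += prem_in_layer * reduction / (1.0 - tax - profit)   # Col (1) x Col (8)
    return discount

disc = premium_discount(600_000)
print(f"discount = ${disc:,.0f} ({disc / 600_000:.2%})")       # ~$76,667 (12.78%)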

Losses

In addition to properly adjusting expenses for size of risk, we must also reflect differences in expected losses.
Small workers compensation risks tend to have worse experience (on a relative basis) than larger risks. Three
reasons this might be the case are:

1) Small companies generally have less sophisticated safety programs than larger companies.
2) Small companies often do not have programs that help injured workers return to work.
3) The premium for small insureds is generally minimally or unaffected by experience rating (as covered
in Chapter 15). As such, they do not have the same financial incentives to prevent and control injuries
as large insureds.

Due to these reasons, if we charge the same loss rate per exposure for small and large risks, the premium may
be inadequate for small risks and excessive for large risks. This can be corrected for by including a loss constant
to the premium of small insureds.

To illustrate this concept we will examine three different risk sizes. The data in the following table shows that
the initial loss ratio (Col (4)) is higher for the smaller risks. Assuming we have a target loss ratio of 65.0%, we
can calculate the additional premium needed to meet this target. The formula for the premium shortfall is the
reported losses divided by the target loss ratio (this equals the premium needed to meet the target loss ratio)
minus actual premium. We can then find the loss constant we would need to add to each policy within each
range as the total premium shortfall divided by the number of policies in each range. (Some insurers have
accounted for size of risk in their rating models; in such cases the loss constant is not needed.)

                                        Reported       Initial       Target        Premium       Loss
Premium Range          Policies        Premium        Losses        Loss Ratio    Loss Ratio    Shortfall     Constant
                       (1)             (2)            (3)           (4)           (5)           (6)           (7)
$0 - $2,000 2,000 $2,000,000 $1,440,000 72.0% 65.0% $215,385 $108
$2,000 - $10,000 2,000 $12,000,000 $7,920,000 66.0% 65.0% $184,615 $92
$10,000 - above 2,000 $44,000,000 $28,600,000 65.0% 65.0% $0 $0

(1) - (3) & (5)= Given


(4) = (3) / (2)
(6) = (3) / (5) - (2) = Premium needed to equal target loss ratio - actual premium
(7) = (6) / (1)
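
A minimal Python sketch of the loss constant calculation above (the data is copied from the table; names are illustrative):

# (premium range label, policies, premium, reported losses)
groups = [
    ("$0 - $2,000",      2_000,  2_000_000,  1_440_000),
    ("$2,000 - $10,000", 2_000, 12_000_000,  7_920_000),
    ("$10,000 - above",  2_000, 44_000_000, 28_600_000),
]
target_lr = 0.65

for label, policies, premium, losses in groups:
    shortfall = max(0.0, losses / target_lr - premium)   # premium needed to hit the target, less actual premium
    constant = shortfall / policies                      # flat dollar amount added to each policy
    print(f"{label}: loss constant = ${constant:,.0f}")  # ~$108, ~$92, $0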

Insurance to value (ITV)

In many property lines of business the insured chooses the amount of insurance (AOI), which is the maximum
amount paid out by the insurer in the event of total loss. Often the insured will select an AOI equal to the
replacement value of the item insured (e.g., if the replacement value of your home is $350,000 you often choose
an AOI of $350,000). If an insured chooses an AOI less than the value of the underlying property there will
be inequities in our rates (without further adjustment).

To illustrate this point, imagine two insureds chose an AOI of $800 for their property, but one item is valued at
$1,000 while the other is valued at $800. Also, assume a uniform loss distribution for each item. Borrowing a
graph from Exam 8, we can visualize the expected losses as the area under each curve, limited at the AOI (if the
expected losses being equal to the area under this curve is new to you, don’t worry about it as it is not on this
exam—it is called Stieltjes integration, and is covered on Exam 8). As we can see, the expected losses for the
item valued at $1,000 (red line) are quite a bit larger than the expected losses from the item valued at $800
(blue line), despite each item being insured for up to the same $800 (dashed horizontal line). This is because the
$1,000 item has a 20% chance (assuming a uniform distribution) of $800 total loss, and has a higher loss
amount all along the loss distribution.


As an extreme example, assume a $10M property has an AOI of $800—virtually every loss will be for the $800
limit—which will obviously result in much higher expected losses than an $800 item insured for $800.

This shows two problems with underinsurance:

1) The insurance payments will not be sufficient to cover the full loss amount 20% of the time—the
insured is essentially self-insuring this high layer.
2) If the insurance company assumes all homes are insured to full value then premium will not be
sufficient to cover the expected losses of the underinsured home, resulting in inequitable rates.

If all homes are underinsured by the same amount the rates would be equitable, and the base rate would adjust
to the level of the expected losses. If ITV levels vary, rates based on the average ITV percentage would still be
inadequate for insureds with below-average ITV percentages (and excessive for those with above-average ITV),
since the more underinsured a property is, the higher its expected loss per dollar of AOI.

Coinsurance clause

Coinsurance is when the insured retains some percent of the claim. The insurance company may include a
coinsurance clause that requires the property to be insured to a minimum ITV (e.g., 80%) or else payments will
be reduced proportionately by the amount of underinsurance (the insured essentially coinsures that piece)—
this is called the coinsurance penalty.

We will use the following notation for the coinsurance calculation:

I = indemnity received after loss (loss payment to insured)


L = amount of loss (after deductible)
F = face value of the policy (i.e., amount of insurance (AOI) selected)
V = value of the property
c = minimum required ITV (coinsurance percentage)
a = apportionment ratio (% of loss covered by insurer, up to F)
e = coinsurance penalty

Using the notation above, the required amount of insurance is the required ITV (c; a.k.a. “coinsurance
percentage”) times the value of the property (V), or cV. The apportionment ratio is then the selected face value
of the policy (F; or selected AOI) divided by the required AOI (cV), capped at 1.0—this is the percentage of loss
covered by the insurance company, up to the face value of the policy.


a = min(F / cV, 1.0)

To better understand the intuition behind the coinsurance penalty, let's look at a basic example. Assuming there
is no coinsurance penalty, the indemnification (I) amount is simply equal to the amount of loss after any
applicable deductible (L), limited to the face value of the policy (F). This is represented by the blue line in the
following graph as I* = L, limited to F, where I* represents what the indemnification would be without the
coinsurance penalty.

If there is a coinsurance penalty, the amount of indemnification is equal to the apportionment ratio (a) times the
amount of loss after deductible (L), limited to the face value of the policy (F):

I = min(a × L, F)

This is represented by the orange line in the following graph. The coinsurance penalty (e) is then the vertical
area between the two lines—the difference in indemnity payments. (This graph is not given in the text, but it
provides the intuition needed to understand the second graph which is in the text).

We can see that the coinsurance penalty (e) varies by range and reaches a maximum when L = F:

if L ≤ F:          e = L − I
if F < L < cV:     e = F − I
if L ≥ cV:         e = 0

where L and F are the indemnification amounts without the penalty (blue line) and I is the actual
indemnification payment with the penalty (I = a × L, capped at F; orange line).

Which we can simplify to:

e = min(L, F) − I

since the indemnification without penalty (top line) is equal to the minimum of L and F, while the bottom line
is equal to I. They are equal once L ≥ cV, so e = 0.

The graph that is shown in the text shows the coinsurance penalty (e) on the y-axis and L on the x-axis—this is
simply the first graph transformed down. The coinsurance penalty increases from $0 at $0 loss to a maximum
of F(1 − a) when the loss equals the face value of the policy—the slope of the line between these two points is
1 − a (i.e., the difference between the slopes of the two lines in the first graph). Once the loss amount surpasses
the face value of the policy, the penalty quickly reduces to 0 at the required AOI (cV)—the slope is −a in this
portion of the graph.
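
A minimal Python sketch of the coinsurance penalty, using assumed illustrative values (a $500,000 property insured for $300,000 with an 80% coinsurance requirement, so cV = $400,000); it reproduces the behavior described above—the penalty peaks at F(1 − a) when L = F and falls to zero once L reaches cV:

def coinsurance_penalty(L, F, V, c):
    a = min(F / (c * V), 1.0)          # apportionment ratio
    indemnity = min(a * L, F)          # payment with the coinsurance clause applied
    no_penalty = min(L, F)             # payment had the clause not applied
    return no_penalty - indemnity      # e = min(L, F) - I

for loss in (100_000, 300_000, 350_000, 400_000):
    print(loss, coinsurance_penalty(loss, F=300_000, V=500_000, c=0.80))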


Varying rates by ITV level

The coinsurance penalty corrects for inequalities by imposing a reduction in payment in the event of loss—
alternatively we could vary rates by ITV level (i.e., include as a rating variable with higher relativities for lower
ITV levels).

If we have an expected frequency (freq.) and severity distribution f(x) we can calculate the rate per AOI (= face
value = F) as:

Rate per unit of AOI = Freq. × [∫_0^F x f(x) dx + F ∫_F^∞ f(x) dx] / F

similarly, for empirical data:

Rate per unit of AOI = Freq. × [Σ x_i (for losses x_i ≤ F) + Σ F (for losses x_i > F)] / F

If small losses dominate (right skewed loss distribution) the rate will decrease at a decreasing rate as the face
value increases. If the loss distribution is uniformly distributed the rate will decrease at the same rate as the
face value increases. If large losses dominate (left skewed loss distribution) the rate will decrease at an increasing
rate as the face value increases.

ITV initiatives

Homeowners policies typically settle losses based on replacement cost. To encourage insurance to full value,
insurance companies may offer guaranteed replacement cost, allowing payments greater than the policy limits
if fully insured. This comes into effect if there is a demand surge in building costs, such as after a hurricane or
large wildfire (i.e., replacement cost may be $300,000 under normal conditions, but if there are many homes
needing repair, the replacement cost may surge to $350,000, for example, due to the increased demand).

In addition, technological advances have allowed insurers to more accurately estimate the replacement value
of a property, allowing them to require that the insured purchase a higher AOI (face value).

By increasing the AOI on underinsured properties to the level assumed in the rates the insurance company is
able to generate additional premium without increasing (overall) rates. Insureds will be more receptive to
increases in premium due to an increase in AOI than if an overall rate increase is taken.


Basic Ratemaking

Chapter 12 – Credibility
Introduction

The data used to develop our rate indication at the aggregate level and/or class level may not be sufficient to
create fully credible estimates. The first part of this chapter will discuss measures of credibility (at a high level),
while the second part discusses options to use as the complement of credibility.

Necessary criteria for credibility

The amount of credibility given to our observed experience, denoted Z, should have the three following
characteristics:

1) 0 ≤ Z ≤ 1; the least credibility a set of data can have is 0, while the most it can have is full credibility.
2) Z should increase as the size of risk underlying the estimate increases, all else equal.
3) Z should increase at a non-increasing rate (or said another way, Z should increase at most linearly, or
most commonly at a decreasing rate).

Methods for measuring the credibility of an actuarial estimate

As defined by ASOP 25, credibility is “a measure of the predictive value in a given application that the actuary
attaches to a particular body of data”. The three most common methods to determine the amount of credibility
to assign to our data are Classical credibility, Buhlmann credibility, and Bayesian credibility.

Classical credibility

Classical credibility, also called limited fluctuation credibility or the square root rule, is the simplest of the three.
Its goal is to limit the effects of random fluctuations. The data will be considered fully credible when there is a
high probability (p) that the observed experience will not vary from the expected experience by more than our
chosen amount (k). For example, we may select our credibility standard so there is a 95% probability (p) that
the observed experience is within 5% (k) of its expected value.

Making the following simplifying assumptions:

1) Exposures are homogenous (i.e., each has the same expected frequency).
2) Claim occurrence follows a Poisson distribution (so the mean equals the variance).
3) There is no variation in severity (i.e., all losses are a constant size).

we can find that the number of claims needed for full credibility, E(Y), is equal to:

E(Y) = (z_{(p+1)/2} / k)²

where z_{(p+1)/2} is the two-tailed standard normal value for our selected probability level. For example, if we
select our probability level to be 95%, we would look up (0.95 + 1.0)/2 = 0.975 on the standard normal table,
finding z_0.975 = 1.960. Continuing this example, if we want to be 95% confident our observed estimate is
within 5% of the true amount, then k = 0.05, and the number of claims needed for full credibility is equal to:

E(Y) = (1.960 / 0.05)² = 1,537


We can then find the credibility of our data set as:

Z = √(Y / E(Y)), limited to 1.0

where Y is the number of claims in our dataset, and E(Y) is the number of claims needed for full credibility at
our selected confidence and fluctuation levels. Continuing our example, if our dataset contains 750 claims, the
credibility assigned to the dataset would be:

Z = √(750 / 1,537) = 69.86%
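
A minimal Python sketch of the classical credibility calculations above (function names are illustrative; only the standard library is used):

from statistics import NormalDist

def full_credibility_standard(p=0.95, k=0.05):
    z = NormalDist().inv_cdf((p + 1) / 2)    # two-tailed standard normal value
    return (z / k) ** 2                      # claims needed for full credibility

def classical_z(claims, p=0.95, k=0.05):
    return min(1.0, (claims / full_credibility_standard(p, k)) ** 0.5)

print(round(full_credibility_standard()))    # ~1,537 claims
print(f"{classical_z(750):.2%}")             # ~69.86%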

If we wanted our credibility standard based on the number of exposures rather than claims, we would find this
as the number of claims needed for full credibility divided by the expected frequency.

Revisiting the simplifying assumptions of the classical credibility methodology, assumptions 1 and 3 are very
unlikely to hold, but assumption 3 is particularly silly for standard P&C lines of business. If we remove
assumption 3 we can find the number of claims needed for full credibility as:

E(Y) = (z_{(p+1)/2} / k)² × [1 + σ_S²/μ_S²], where σ_S²/μ_S² is the square of the coefficient of variation of severity

Classical credibility has the following three advantages:

1) It is the most commonly used method, thus it is generally accepted.


2) The data required is readily available.
3) The calculation is straightforward.

The disadvantages are that it makes several simplifying assumptions that may not hold in practice, and that the
confidence levels used to derive the credibility standard are judgmentally selected.

Buhlmann credibility

Buhlmann credibility, also called least squares credibility, has the goal of minimizing the square error between
our observed estimate and the true expected value. Buhlmann credibility is defined as:

Z = N / (N + K)
where N is the number of observations and K is the expected value of the process variance (EVPV) divided by
the variance of the hypothetical means (VHM). Said another way, K is the ratio of the average risk variance
divided by the variance between risks.

K = EVPV / VHM

EVPV and VHM can be difficult to calculate in practice, and their calculation is beyond the scope of the text, but
a problem could certainly give you their values and have you solve the rest.
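
A minimal numeric sketch of Buhlmann credibility, with the EVPV and VHM values simply assumed (as an exam problem might give them):

def buhlmann_z(n, evpv, vhm):
    k = evpv / vhm              # K = EVPV / VHM
    return n / (n + k)          # Z = N / (N + K); approaches but never reaches 1.0

print(f"{buhlmann_z(500, evpv=8_000, vhm=40):.1%}")   # K = 200, so Z = 500/700 = 71.4%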

Note that Z never fully reaches 1.0 under Buhlmann credibility, while Z does reach 1.0 at the full credibility
standard under classical credibility.

The assumptions of Buhlmann credibility are that the risk parameters and risk process do not shift with time,
and that both EVPV and VHM increase with more observations (N). It is also worth noting that if we assume
frequencies and severities are the same between risks, as the classical credibility method does, then VHM = 0
and Z = 0 under Buhlmann credibility.

Buhlmann credibility is used in practice, but not nearly as much as classical credibility, though it is still
generally accepted. The major disadvantage is that estimating EVPV and VHM can be difficult, and it is based
on some assumptions that may or may not be reasonable in a given situation.

Bayesian credibility

In Bayesian credibility we start with an a-priori distributional assumption, which is adjusted to reflect new
information—there is no specific calculation of Z. Bayesian credibility is not commonly used in practice. There
is not much to know about Bayesian credibility for the exam other than that the Buhlmann estimate is the weighted
least squares line associated with the Bayesian estimate, and that in certain special situations they are equivalent.

Desirable qualities of a complement of credibility

Our credibility weighted estimate is the credibility weighted combination of our observed estimate (base
statistic) and the selected complement of credibility:

Credibility Weighted Estimate = Z × Observed Estimate + (1 − Z) × Complement of Credibility

Our selected complement of credibility should be related and similar to the observed experience. In many
situations the credibility of the observed experience may be very low, resulting in the complement of credibility
being more important than the observed experience.

When selecting a complement of credibility we should consider the following six desirable characteristics:

1) Accurate: close to the future expected losses being estimated.


2) Unbiased: not consistently higher or lower than the observed estimate.
3) Statistically independent from the base statistic: if not independent from the observed experience any
error in the base statistic will be compounded.
4) Available: data needed to calculate the complement should be readily available.
5) Easy to compute: easy to perform and understand, especially if needing regulatory approval.
6) Logical relationship to base statistic: should have an explainable relationship with the base statistic.

Methods for developing complements of credibility

We are going to examine developing complements of credibility for first dollar insurance products (i.e., cover
the first dollar of loss, possibly after some small deductible) and excess insurance products (i.e., cover claims
above some high attachment point, subject to a limit).

First dollar complements of credibility

There are six commonly used methods for developing complements of credibility for first dollar ratemaking:

1) Loss costs of larger group that includes the group being rated
2) Loss costs of larger related group
3) Rate change from the larger group applied to present rates
4) Harwayne’s method
5) Trended present rates
6) Competitor’s rates

For the first four, the complements will be discussed in terms of pure premiums (loss costs)— you could instead
use loss ratios by replacing exposures with earned premiums. The last two complements will be for rates rather
than pure premiums.


Loss costs of larger group that includes the group being rated

Examples include using data from the entire region as the complement for a state contained in that region,
using data from all physicians as the complement for primary care physicians, or using data over a longer
timeframe as the complement for a shorter timeframe.

If we believe the larger group is different than the subject experience the complement will be biased and
inaccurate. The level of independence will depend on how large the subject experience is relative to the entire
group. The data is likely readily available, easy to compute, and as long as there is something in common
between the subject experience and the larger group then there is a logical relationship as well.

Loss costs of larger related group

Rather than using a complement that contains the subject experience we could select a separate, but similar,
large group. This method will generally be biased since the expected losses in the large group are unlikely to
be equal to the expected losses in the subject group (the magnitude and direction of the bias and inaccuracy
will vary case by case). This method will be independent since it doesn’t include the subject experience. The
needed data is likely readily available and it is easy to compute. If the groups are closely related then there will
be a logical relationship.

Rate change from the larger group (including the subject experience) applied to present rates

Since the loss costs of a larger group is likely biased we can instead apply the indicated rate change from the
larger group to the subject experience’s loss cost. The complement of credibility, C, is then:

C = Current Loss Cost of Subject Experience × Larger Group Indicated Rate Change Factor

This complement is largely unbiased and likely accurate over the long term (assuming small rate changes). The
level of independence will depend on how large the subject experience is relative to the entire group. The data
is likely readily available, the complement is easy to calculate, and it has a logical relationship with the subject
data.

Harwayne’s method

Harwayne’s method is used when there are differences in exposure distributions and/or loss cost levels
between the subject experience and the group we want to use as the complement. The most common example
of this is using countrywide (excluding the subject state) or other state’s data as the complement to the subject
state.

Harwayne’s method finds differences in state costs levels by comparing loss costs adjusted for exposure
distribution differences (i.e., finds the loss cost of the complement states using the subject state’s exposure
distribution). We do this because true state level differences may be different than the observed values due to
differences in exposure distributions (e.g., one state has more high-risk exposures than another). This exposure
adjusted difference in state level costs is then applied to the classification of interest to find the complement
from that state. If there is more than one state in the complement we will take an exposure weighted average
of each states complement to find the overall complement.

An example may make this method clearer. The following example will find the complement of credibility
for Illinois (IL) Males, using Harwayne’s method and data from Indiana (IN), Missouri (MO), and Iowa (IA). We
first find the overall average loss cost for each of the other states using the Illinois exposure distribution.
We then find the adjustment factor for each state, F_State, that adjusts to the Illinois level, calculated as the
Illinois average loss cost divided by the other state’s average loss cost (using the Illinois exposure distribution).
These adjustment factors are then applied to our target class, Male, by state to estimate the average loss cost
for Males by state, now reflecting differences in state cost levels. Our complement of credibility for Illinois
Males is then the exposure weighted average of the adjusted Male average loss costs by state.


                                          Pure
State   Gender      Exposure   Losses    Premium    P̄_State    F_State    Adj. Male Loss Cost
IL      Male           250      $875      $3.50
        Female         210      $650      $3.10
        Subtotal       460    $1,525      $3.32
IN      Male           110      $320      $2.91                              $3.43
        Female         130      $350      $2.69
        Subtotal       240      $670      $2.79      $2.81      1.180
MO      Male            95      $240      $2.53                              $3.54
        Female         115      $250      $2.17
        Subtotal       210      $490      $2.33      $2.37      1.402
IA      Male            95      $285      $3.00                              $3.56  (= 3.00 × 1.187)
        Female          75      $191      $2.55
        Subtotal       170      $476      $2.80      $2.79      1.187
Complement of credibility for IL Males:                                      $3.51

P̄_IA = (250 × 3.00 + 210 × 2.55) / (250 + 210) = $2.79
F_IA = 3.32 / 2.79 = 1.187
Complement = (110 × 3.43 + 95 × 3.54 + 95 × 3.56) / (110 + 95 + 95) = $3.51
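
For those who prefer to follow the mechanics in code, the short Python sketch below reproduces the same
calculation from the figures in the table above (the data layout and helper names are mine, not from the text):

# Minimal sketch of Harwayne's method using the example figures above.
# Exposures and losses by state and gender; IL is the subject state.
data = {
    "IL": {"Male": (250, 875), "Female": (210, 650)},
    "IN": {"Male": (110, 320), "Female": (130, 350)},
    "MO": {"Male": (95, 240), "Female": (115, 250)},
    "IA": {"Male": (95, 285), "Female": (75, 191)},
}

il = data["IL"]
il_exposures = {gender: exp for gender, (exp, loss) in il.items()}
il_avg_loss_cost = sum(loss for exp, loss in il.values()) / sum(exp for exp, loss in il.values())

numerator = denominator = 0.0
for state, classes in data.items():
    if state == "IL":
        continue
    # Other state's average loss cost re-weighted to the IL exposure distribution.
    reweighted = sum(il_exposures[g] * loss / exp for g, (exp, loss) in classes.items())
    reweighted /= sum(il_exposures.values())
    # Adjustment factor bringing that state up (or down) to the IL cost level.
    f_state = il_avg_loss_cost / reweighted
    # Adjusted Male loss cost for this state, weighted by that state's own Male exposures.
    male_exp, male_loss = classes["Male"]
    numerator += male_exp * (male_loss / male_exp) * f_state
    denominator += male_exp

complement = numerator / denominator
print(round(complement, 2))   # approximately 3.51, matching the table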

A complement found using Harwayne’s method will be unbiased since it reflects distributional differences,
while the use of several states should make it reasonably accurate. The complement would also be mostly
independent since it uses other states. The data is likely readily available, but the calculation would not be
considered easy. There is a logical relationship with the subject data, but explaining it to outsiders may prove
challenging.

Trended present rates

If there is no suitable larger group to compare, we may use the current rates adjusted to reflect any rate not
taken in the prior revision and trended to the prospective period (i.e., the complement would be the prior
indicated rate trended to the future period). Note that our subject experience and complement are rates here,
not loss costs as in the prior methods. We will trend from the expected effective date in the current rates (not
necessarily equal to the actual effective date) to the prospective expected effective date. Making both
adjustments, we can find the complement of credibility to the indicated rate as:

C = Present Rate × Loss Trend Factor × (Prior Indicated Rate / Rate Currently Charged)

Alternatively, when using the loss ratio indication method, we could find the complement to the indicated
rate change as:


C = (Loss Trend Factor / Premium Trend Factor) × (Prior Indicated Rate / Rate Currently Charged) − 1.0

The loss trend over the premium trend is the net loss ratio trend (i.e., if premium trend is higher than loss trend
we would expect loss ratios to go down, all else equal). For each, the rightmost term can be thought of as the
residual change (i.e., the indicated change not taken).

This method will generally be accurate assuming sufficient data. It will also be unbiased since the complement
does not reflect the most recent experience. The level of independence will depend on the amount of overlap
between the data used in the prior indication and current indication, but will generally not be independent (i.e.,
if we use 5 years in each, the complement will share 4 common years with the indicated rate). The needed data
is readily available, the calculation is easy, and there is a logical relationship that is easily explained.
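
To make the mechanics concrete, here is a small sketch of both forms with made-up inputs (the rates, trends,
and trend period below are hypothetical, chosen only to show how the pieces fit together):

# Hypothetical inputs to illustrate the trended present rates complement.
present_rate = 500.00            # current average rate
prior_indicated_rate = 520.00    # what the prior review indicated
rate_actually_charged = 500.00   # what was actually implemented
annual_loss_trend = 1.05
annual_premium_trend = 1.02
trend_years = 2.0                # prior expected effective date to prospective expected effective date

residual_change = prior_indicated_rate / rate_actually_charged   # indicated change not taken

# Complement to the indicated rate (pure premium form).
c_rate = present_rate * annual_loss_trend ** trend_years * residual_change

# Complement to the indicated rate change (loss ratio form): net loss ratio trend times the residual change.
c_change = (annual_loss_trend / annual_premium_trend) ** trend_years * residual_change - 1.0

print(round(c_rate, 2), round(c_change, 4))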


Competitor’s rates

For small companies, or small lines of business, the company's own data may be so sparse that competitors'
rates are the best option to use as the complement of credibility. If using competitors' rates, we must consider
that there could be significant differences due to marketing considerations, judgement, underwriting, and claim
practices. The calculation is straightforward, but gaining the necessary data may be difficult. Despite the
differences, competitors' rates have a logical relationship with the subject rate. They will also be independent.

Excess ratemaking complements of credibility

When performing excess ratemaking we will often be dealing with low volumes of data and volatile estimates.
In many cases the selection of the complement of credibility is more important than the subject experience.
Typically, there are few claims in the layers being evaluated, so we will often want to predict losses in a given
layer using losses below the attachment point and increased limits factors (ILFs). In the prior section, the subject
experience and complements of credibility were expressed as loss costs or rates. This section instead uses total
losses in the layer.

The four methods discussed for developing the complement of credibility for excess losses, subject to a policy
limit, are:

1) Increased limits analysis


2) Lower limits analysis
3) Limits analysis
4) Fitted curves

For each, we will examine an excess layer beginning at an attachment point A, subject to a policy limit L. Hence
the layer of loss we are interested in is between A and A+L.

Increased limits analysis

This method is sometimes used if we have ground-up loss data through the attachment point (i.e., no truncation
below A). We take the loss dollars limited to the attachment point, L_A, and adjust them to the losses we would
expect in the layer between A and A+L. We do this by first dividing by the ILF at A, resulting in an estimate of
ultimate losses at the basic limit (remember, ILF_A = losses limited to A / losses limited to the basic limit, so losses
limited to A divided by the ILF at A equals losses at the basic limit). Losses in the layer are then the product of this
basic limit loss estimate (L_A / ILF_A) and the ILF at the top of the layer minus the ILF at the bottom of the layer.
Therefore, the formula to find losses in the layer, given losses limited to the attachment point and appropriate ILFs, is:

C = (L_A / ILF_A) × (ILF_(A+L) − ILF_A) = L_A × (ILF_(A+L) / ILF_A − 1.0)

Assume we want to estimate the excess losses between $250,000 and $750,000. We are given the ILFs below
and are told the losses limited to $250,000 are $1,500,000.

Limit ILF
$50,000 1.00
$100,000 1.50
$250,000 2.50
$500,000 3.60
$750,000 4.50
$1,000,000 5.10

Our complement of credibility for excess losses in this layer is then:

C = L_A × (ILF_(A+L) / ILF_A − 1.0) = $1,500,000 × (4.50 / 2.50 − 1.0) = $1,200,000
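
The same calculation in a short Python sketch, using the ILF table above (the function name is mine and is
illustrative only). The second call previews the lower limits variant discussed next, which uses the identical
formula with a lower cap:

# Sketch of the ILF-based complement for losses in an excess layer, using the ILFs above.
ilfs = {50_000: 1.00, 100_000: 1.50, 250_000: 2.50,
        500_000: 3.60, 750_000: 4.50, 1_000_000: 5.10}

def excess_complement(capped_losses, cap, attachment, layer_top):
    # Divide by the ILF at the cap to restate losses at the basic limit, then
    # multiply by the width of the layer in ILF terms.
    return capped_losses * (ilfs[layer_top] - ilfs[attachment]) / ilfs[cap]

# Increased limits analysis: losses capped at the attachment point itself.
print(excess_complement(1_500_000, cap=250_000, attachment=250_000, layer_top=750_000))  # 1,200,000.0

# Lower limits analysis (next method): same formula, losses capped below the attachment point.
print(excess_complement(975_000, cap=100_000, attachment=250_000, layer_top=750_000))    # 1,300,000.0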


If the subject experience has a different size of loss distribution than that used to develop the ILFs this method
will be biased and inaccurate, though it will still often be a good option to use. If using industry ILFs, the
estimate will be independent of the base statistic. To the extent the data is available, the method is easy to
calculate and is practical. The data used has a more logical relationship with losses below the attachment point,
but this method is generally accepted and widely used.

Lower limits analysis

Lower limits analysis is nearly identical to increased limits analysis, the difference being that lower limits analysis
uses losses limited to a limit (d) below the attachment point. We use this method if losses limited to the attachment
point are too sparse and volatile to be reliable. Using losses limited to d, we can adjust them to the level
expected in the layer between A and A+L. The logic is identical to the first method—we divide by the ILF at
the limit the losses are capped at to estimate losses at the basic limit, and multiply by the difference in the ILFs
at the top and bottom of the layer:

C = (L_d / ILF_d) × (ILF_(A+L) − ILF_A)

The first method is just a special case of this method where d is equal to the attachment point. If we use losses
limited to the basic limit the ILF in the denominator is 1.0 and isn't needed (i.e., the losses are already at the
basic limit, so we don't need to adjust them to that level).

Assuming we are told losses limited to $100,000 are $975,000, using the ILFs from the prior example we can
calculate the complement of credibility to losses in the layer $250,000 to $750,000 as:

C = L_d × (ILF_(A+L) − ILF_A) / ILF_d = $975,000 × (4.50 − 2.50) / 1.50 = $1,300,000

This method essentially gives more weight to the ILFs and less to the experienced losses compared to the prior
method, so whether it is more accurate or not depends on how reliable the losses are between d and A—if we
decide to use this method, presumably these losses are volatile, so this method would be more accurate. Since
we are relying more on the ILFs, this method is more susceptible to bias if the severity distribution used to
develop the ILFs differs from the severity distribution of the subject experience, though the error is generally
independent from the base statistic. The data is generally available, the calculation is easy and easily explained.
The data used has a more logical relationship with losses below the attachment point, but this method is
generally accepted and widely used.

Limits analysis

This method estimates the losses in the excess layer for actual policies in the insurer's book of business (this
method is more commonly used by reinsurance companies). Since we are estimating expected losses in the
excess layer from actual policies, any policy with a limit (d) below the attachment point (A) will have $0
expected excess losses (thus we are summing only over the range d≥A in the following formula). Similarly, if d
is less than the top of the layer (A+L) actual losses for policies with limit d will be capped at d rather than A+L
(thus the min(d, A+L) in the formula below).

The logic used to estimate losses in the excess layer is similar to the prior two methods, but instead of using
actual losses limited to some limit, we instead use the expected losses (premium of policies written with the
limit times the expected loss ratio)—otherwise the logic is the same. The method assumes the loss ratio (LR) is
constant across limits (d)—the expected losses limited to d are then the loss ratio multiplied by the premium for
policies with that limit (P_d). Dividing by the ILF at d results in expected losses at the basic level, which is again
multiplied by the difference in the ILFs at the top and bottom of the layer:

C = LR × Σ_(d ≥ A) P_d × [ILF_(min(d, A+L)) − ILF_A] / ILF_d


Given the following premiums written at various limits and expected loss ratio we can calculate the expected
losses in the excess layer between $250,000 and $750,000 for this book of business as follows:

                              Expected      ILF @       ILF @        ILF @         ILF @         Expected Loss
Limit (d )      Premium      Loss Ratio   Limit (d )   $250k (A )   $750k (A+L )  Min(d, A+L )     in Layer
   (1)            (2)           (3)          (4)          (5)           (6)           (7)             (8)
$50,000       $7,800,000       70.0%          -            -             -             -               -
$100,000      $6,500,000       70.0%          -            -             -             -               -
$250,000      $4,875,000       70.0%        2.50         2.50          4.50          2.50             $0
$500,000      $2,437,500       70.0%        3.60         2.50          4.50          3.60         $521,354
$750,000      $1,625,000       70.0%        4.50         2.50          4.50          4.50         $505,556
$1,000,000      $975,000       70.0%        5.10         2.50          4.50          4.50         $267,647
Total                                                                                           $1,294,557

(8) = (2)*(3)*[(7)-(5)]/(4)
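
The table's calculation can be sketched in a few lines of Python (premiums, ILFs, and the 70% loss ratio are
taken from the example above; the function name is mine):

# Sketch of the limits analysis complement from the example above.
ilfs = {50_000: 1.00, 100_000: 1.50, 250_000: 2.50,
        500_000: 3.60, 750_000: 4.50, 1_000_000: 5.10}
premium_by_limit = {50_000: 7_800_000, 100_000: 6_500_000, 250_000: 4_875_000,
                    500_000: 2_437_500, 750_000: 1_625_000, 1_000_000: 975_000}

def limits_analysis(premium_by_limit, ilfs, loss_ratio, attachment, layer_top):
    total = 0.0
    for d, premium in premium_by_limit.items():
        if d < attachment:
            continue                                   # policy limit below the layer: no expected excess losses
        expected_limited_losses = premium * loss_ratio # expected losses limited to d
        capped_ilf = ilfs[min(d, layer_top)]           # ILF at min(d, A+L)
        total += expected_limited_losses * (capped_ilf - ilfs[attachment]) / ilfs[d]
    return total

print(round(limits_analysis(premium_by_limit, ilfs, 0.70, 250_000, 750_000)))   # about 1,294,557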

The bias and inaccuracy of this method is similar to the prior methods. It is more time consuming and
calculation intensive than the other methods, with similar other criticisms, but in the case of reinsurers it is
often the best available option (since they do not have access to the full loss distribution).

Fitted curves

Losses used in the prior methods are often very volatile, resulting in inaccuracies. An alternative is to fit a curve
to expected losses. Given the loss distribution, f(x), we can find the expected losses in the layer A, A+L as a
percentage of total losses as:

%Loss_(A, A+L) = [ ∫_A^(A+L) (x − A) f(x) dx + ∫_(A+L)^∞ L f(x) dx ] / ∫_0^∞ x f(x) dx

The expected losses in the excess layer (used as the complement) is then simply the expected percentage of
losses in the layer multiplied by the unlimited loss costs:

C = %Loss_(A, A+L) × Unlimited Loss Costs

The numerator of the first equation above is the value of losses in the layer A, A+L. This amount is 0 when the
loss is below A—thus we start the integration at A. For losses between A and A+L, the amount of loss in the
layer is x − A, hence the value in the first integral. For losses above A+L the amount of loss in the layer is simply
L, as represented in the second integral. The denominator is simply the unlimited expected value across the
entire distribution.
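
As a concrete (and entirely hypothetical) illustration, the sketch below evaluates these integrals numerically
for an assumed exponential severity curve with a $100,000 mean and a made-up unlimited loss cost; the
distribution, its parameter, and the loss cost are assumptions for illustration only:

import numpy as np
from scipy.integrate import quad

theta = 100_000                                 # hypothetical exponential severity mean
f = lambda x: np.exp(-x / theta) / theta        # severity density f(x)

A, L = 250_000, 500_000                         # attachment point and policy limit; the layer is [A, A+L]

# Numerator: (x - A) for losses between A and A+L, plus L for losses above A+L.
layer_losses = quad(lambda x: (x - A) * f(x), A, A + L)[0] + quad(lambda x: L * f(x), A + L, np.inf)[0]
# Denominator: unlimited expected loss size.
unlimited = quad(lambda x: x * f(x), 0, np.inf)[0]

pct_in_layer = layer_losses / unlimited
unlimited_loss_costs = 2_000_000                # hypothetical unlimited loss costs for the book
complement = pct_in_layer * unlimited_loss_costs
print(round(pct_in_layer, 4), round(complement))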

If we can accurately fit a curve to expected losses this method will be accurate and mostly unbiased. Since the
curve fit uses losses in the excess layer the complement will be less independent than the other methods. The
required data may not be readily available, and the calculation is rather complex which can make
communicating it more difficult. This complement does have a strong logical relationship with the subject data.

Credibility when using statistical models (e.g., GLMs)

Generally, results of multivariate classification models are not credibility weighted. Instead the modeler will
use statistical diagnostics (e.g., Chi-Square, F-test, consistency tests) to better understand the confidence in the
estimates—taking this into consideration when selecting the final model structure and rate differentials.


Basic Ratemaking

Chapter 13 – Other Considerations


Introduction

The preceding chapters discussed the techniques used to balance the fundamental insurance equation in the
prospective period so that we earn the desired target profit. There are several reasons we (or management)
may decide to charge rates other than those indicated by our analysis. The considerations discussed in this
chapter include:

1) Regulatory constraints
2) Operational constraints
3) Marketing constraints

Regulatory constraints

Above all else, the rates we charge must comply with regulations and applicable laws, regardless of what is
indicated. The amount of regulatory scrutiny varies significantly by jurisdiction and insurance product.
Regulation is generally done at the State level in the U.S. and Province level in Canada. Personal lines of
insurance such as personal automobile or homeowners, as well as workers compensation, are generally highly
regulated. In general, commercial lines of insurance are less regulated because they are generally purchased
by more sophisticated buyers (i.e., large commercial risks will often have knowledgeable risk managers),
especially if the coverage is not compulsory (i.e., required by law).

In most jurisdictions proposed rates are required to be filed with the regulatory body, but the required timing
may vary by jurisdiction and line of business—in some cases the insurance company must receive approval
before using the proposed rates, while in some cases they must simply provide the rates so they are on file with
the regulator. The U.K. generally has a less rigid regulatory environment than the U.S. and Canada, instead
relying more on competitive pressures to regulate the market—though they still do place some restrictions on
the use of certain rating variables (e.g., requiring that gender relativities do not deviate much from indicated).
In Latin America regulation is more focused on rate adequacy—Latin American rating plans are generally less
sophisticated.

Some examples of U.S. regulatory constraints include placing limits on rate changes (either overall or by
individual), requiring certain actions if implementing large rate changes (e.g., holding a public hearing or
sending notifications to insureds), prohibiting certain rating characteristics (e.g., credit data), or restricting the
use of certain ratemaking techniques (e.g., California requiring sequential analysis for personal auto). In addition, there may
be times when the actuary and the regulator do not agree on assumptions (e.g., trend selections)—these
differences must be reconciled and any delay in implementation could be costly.

Regardless of the rate indication, we must comply with the state regulations—fortunately regulations generally
apply uniformly across all companies, so no one is at an advantage or disadvantage. If a company feels
regulatory restrictions are unfair they can take legal action, revise underwriting guidelines to limit business
from classes with inadequate rates, change marketing plans to avoid business from classes with inadequate
rates, or use proxies for banned or restricted rating variables (be careful with this one!).

Operational constraints

In some cases, implementing the actuarially indicated rates could be difficult or undesirable due to operational
constraints such as systems limitations (e.g., adding new variables could be very costly or not possible in a
given system, in addition collecting data on the new variable could be difficult) or resource constraints (e.g.,
the need for skilled inspectors to confirm the presence of a new rating characteristic in a property). The
standard actuarial ratemaking analysis does not account for the costs incurred in implementing the
recommended changes. In the presence of restrictive costs, we must undertake a cost benefit analysis to decide
if implementing a certain change is worth the cost.


Marketing constraints

Just as the standard actuarial ratemaking analysis does not account for operational costs needed to implement
changes, it also does not consider the company's ability to sell the product. We must also consider the demand
for our product, which will decrease as the price increases. Our goal should be to maximize profit—profit will
generally increase with price until we reach the point that the impact of lost business outweighs the benefit of
higher prices.

When evaluating marketing considerations, we will generally evaluate new business (i.e., potential customers
that are uninsured or insured with a competitor) and renewal business (i.e., customers that are currently
insured by us) separately since they react differently to price levels. Some factors affecting an individual’s
likelihood to renew their policy, or purchase a new policy include:

1) Price of competing products (more likely to choose cheaper, all else equal)
2) Overall cost of the product (if cheap relative to disposable income then less price sensitive)
3) Rate changes (significant increases can motivate insureds to shop around)
4) Characteristics of the insured (young insureds are generally more price sensitive than older insureds)
5) Customer satisfaction and brand loyalty (the higher each is the more likely the insured is to renew)

The remainder of the chapter will cover marketing constraints.

Traditional techniques for incorporating marketing considerations

Marketing considerations will generally be incorporated judgmentally into the ratemaking process. Traditional
marketing information considered includes:

1) Competitive comparisons
2) Close ratios, retention ratios, growth
3) Distributional analysis
4) Dislocation analysis

Competitive comparisons

Ideally, we would like to be able to compare our premium levels to those of our competitors, though getting full
premium data on competitors may be a challenge. Piecing together full rate manuals can be difficult, and even
if done, unfiled underwriting placement criteria make them only marginally useful. Despite this, we will often
compare our premium to our best estimate of our competitors. We are interested in our competitiveness on
average (i.e., having a “base rate advantage”) and by individual risk segments (e.g., young insureds or territory).

We will often examine a set of sample risks using the following metrics:

% Competitive Position = Company Premium / (Weighted Average Competitor Premium) − 1.0

$ Competitive Position = Company Premium − (Weighted Average Competitor Premium)

Win % = # of Risks Where the Company Premium Is Lowest (i.e., wins) / Total # of Risks Compared

We can examine these metrics in total or by segment, and also broken down into buckets (e.g., looking at
exposures per competitive position size bucket, such as 5%-10%, 10%-20%, etc., in addition to the overall
average).

We can also compare rate relativities, though we must take caution when comparing relativities if the rating
algorithms vary by competitor (e.g., comparison of age relativities between companies wouldn’t be comparable

if one includes both age and other age-related factors while the other only uses age). Due to these differences,
companies commonly compare average total premium by rating characteristic rather than relativities directly.
These comparisons can identify threats or opportunities that exist in the company's rating structure.

Close ratios, retention ratios, growth

Close ratio (a.k.a. hit ratio or conversion rate) measures the rate prospective insureds accept a new business
quote. Like always, we should take care to understand the data used—for example one insurer may count all
quotes in the denominator while another may count one per applicant (e.g., the same applicant may get several
quotes at different limits).

Close Ratio = # of Accepted Quotes / # of Quotes

Retention ratios (a.k.a. persistency ratio) measures the rate existing insureds renew their policy upon
expiration. Data definitions are important to document here as well—one insurer may count all expiring
policies in the denominator while another may exclude those that are non-renewed by the company or
individuals that have passed away.

Retention Ratio = # of Policies Renewed / # of Expiring Policies

When evaluating these ratios, we will look at their value, but also their change over time. The value of the ratios
can be an indicator of price competitiveness for new (close ratio) and renewal (retention ratio) business.
Changes in these ratios can be used to gauge changes in competitiveness, or measure effectiveness of
implemented changes. If we increase rates (relative to competitors) we would expect these ratios to fall while
if we decrease rates (relative to competitors) we would expect them to increase.

Growth is a function of both attracting new business and retaining existing customers. We can define policy
growth as:

% Growth = (New Policies Written − Policies Lost) / Policies at Start of Period = (Policies at End of Period / Policies at Start of Period) − 1.0

where “policies lost” include cancelled and non-renewed policies. Growth rates will be affected by rate level
competitiveness, rate level changes, and any changes in underwriting standards (e.g., tightening underwriting
criteria would slow growth).

Each of the three ratios can be measured in aggregate, or for specific groups of insureds. If these ratios look
worse for a given segment despite having similar competitiveness as in other segments then the given segment
may be more price sensitive, the analysis may contain errors, or something other than price is driving
purchasing decisions. For example, the close ratio for young insureds is often lower than for other segments, even if
rates are equally competitive by segment, because young insureds are generally more price sensitive and shop
around more often (i.e., get more quotes).

Distributional analysis

Companies often examine the distribution of new and renewal business by classification (i.e., segment). We will
be interested in the distribution as of a point in time, as well as changes in the distribution over time. The
distribution of policies should also be compared to the distribution in the general population (e.g., compare the
percentage of policies of 20-25-year-old females to the percentage of 20-25-year-old females in the general
population). Differences between our book of business’ distribution and the general population’s distribution
could identify issues such as poor marketing to certain segments or inadequate agent placement (if a
geographical variable, or variable correlated with geography). Changes in the distribution could identify
changes in competitiveness due to rate changes or identify where competitors have begun to target certain
segments of the market.


Dislocation analysis

Policyholder dislocation analysis measures the effect rate changes have on existing customers' policies. For
example, we may implement an overall 4.0% rate increase, but the effect on individual insureds will vary widely
depending on the magnitude of the changes in relativities. The following is a sample dislocation graph showing
the dispersion in rate changes by insured. We can use this information to estimate the effects on retention. We
will also often have an acceptable level of rate increases before the effects on retention would be too great. We
can share this information with our sales channel and customer support units—regulators often want to see
this information as well.

In addition to the aggregate level, we will often look at dislocation information for key segments and by each
rating level being adjusted.

Systematic techniques for incorporating marketing considerations

The marketing considerations discussed previously are incorporated judgmentally. Some companies are now
using more advanced techniques to optimize pricing more systematically.

Lifetime value analysis

The traditional actuarial methods develop a cost-based rate indication over the time period rates are expected
to be in effect, assuming all insureds renew. Lifetime value analysis takes into consideration the profitability of
future periods, accounting for changes in premiums, loss costs, and renewal rates. For example, if we examine
the 16-year-old and 68-year-old drivers, presented in the following tables, in the traditional sense over a one-
year horizon (the first row) we would conclude the 16-year-old is a worse risk to write than the 68-year-old
based on one-year profit. If we instead look over a longer time horizon, accounting for expected retention rates
as well as changes in expected expenses, losses, and premiums we come to a different conclusion.

In the following tables we estimate the premium, losses, and expenses for each insured over the periods in our
time horizon (our example looks over a 5-year horizon). The premium and losses for the 16-year-old are both
expected to decrease over time. The premium stays constant for the 68-year-old while losses increase over
time. Expenses reduce for both after the first year reflecting that expenses on renewal policies are generally
lower. We estimate the probability of renewal for each period in Col (6) and calculate the cumulative
persistency in Col (7)—younger insureds are more likely to non-renew, but each becomes more likely to renew
as tenure grows. The adjusted profit in Col (8) is the profit from Col (5) adjusted to reflect the probability the
insured will still be active. Col (9) finds the present value of the adjusted profit, assuming a 5.0% discount rate.


Col (10) is the present value of the adjusted premium (reflecting the probability the insured will still be active).
The profit % is then the present value of the adjusted profit divided by the present value of the adjusted
premium. As we can see, the 68-year-old is a profitable risk the first two years but turns unprofitable in the
third year and is unprofitable overall. Conversely, the 16-year-old is unprofitable over the first two years but
turns profitable in year three and is profitable overall. If we are interested in profitability over a five-year time
horizon we would prefer the 16-year-old risk (while a traditional analysis would select the 68-year-old since
they are more profitable in year one).

Five-Year Time Horizon for 16-Year-Old

                                            Renewal   Cumulative                PV of Adj    PV of Adj
Year   Age    Prem     Losses   Expense  Profit  Prob.   Persistency  Adj Profit    Profit      Premium    Profit %
 (1)          (2)      (3)      (4)      (5)     (6)        (7)          (8)          (9)         (10)       (11)
  1    16    $1200    $1210     $35      -$45    100%     100.00%      -$45.00      -$45.00    $1200.00     -3.75%
  2    17    $1140    $1145     $15      -$20     75%      75.00%      -$15.00      -$14.29     $814.29     -1.75%
  3    18    $1083    $1030     $15       $38     75%      56.25%       $21.38       $19.39     $552.55      3.51%
  4    19    $1029     $950     $15       $64     80%      45.00%       $28.73       $24.82     $399.94      6.21%
  5    20     $977     $865     $15       $97     80%      36.00%       $35.07       $28.85     $289.48      9.97%
Total        $5,429   $5,200    $95      $134                           $25.17       $13.77   $3,256.26      0.42%

Five-Year Time Horizon for 68-Year-Old

                                            Renewal   Cumulative                PV of Adj    PV of Adj
Year   Age    Prem     Losses   Expense  Profit  Prob.   Persistency  Adj Profit    Profit      Premium    Profit %
 (1)          (2)      (3)      (4)      (5)     (6)        (7)          (8)          (9)         (10)       (11)
  1    68     $800     $750     $35       $15    100%     100.00%       $15.00       $15.00     $800.00      1.88%
  2    69     $800     $778     $15        $7     95%      95.00%        $6.65        $6.33     $723.81      0.88%
  3    70     $800     $806     $15      -$21     96%      91.20%      -$19.15      -$17.37     $661.77     -2.63%
  4    71     $800     $840     $15      -$55     97%      88.46%      -$48.66      -$42.03     $611.35     -6.88%
  5    72     $800     $860     $15      -$75     98%      86.69%      -$65.02      -$53.49     $570.59     -9.38%
Total        $4,000   $4,034    $95     -$129                         -$111.18      -$91.56   $3,367.52     -2.72%

(5) = (2) - (3) - (4)
(7) = (6) x (7) Prior
(8) = (5) x (7)
(9) = (8) discounted by 5% per annum
(10) = (2) x (7) discounted by 5% per annum
(11) = (9) / (10)
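
The table's mechanics can be reproduced with a few lines of Python. The inputs are read off the 16-year-old
table above; results differ from the table by pennies because the displayed premiums and losses are rounded:

# Reproduces the 16-year-old lifetime value calculation above (5% discount rate).
rows = [  # (year, premium, losses, expense, renewal probability)
    (1, 1200, 1210, 35, 1.00),
    (2, 1140, 1145, 15, 0.75),
    (3, 1083, 1030, 15, 0.75),
    (4, 1029,  950, 15, 0.80),
    (5,  977,  865, 15, 0.80),
]

discount = 0.05
persistency = 1.0
pv_profit = pv_premium = 0.0
for year, prem, losses, expense, renew_prob in rows:
    persistency *= renew_prob                    # cumulative probability the policy is still in force
    profit = prem - losses - expense             # one-year profit if the policy is in force
    pv_factor = (1 + discount) ** (year - 1)     # year 1 is not discounted, matching the table
    pv_profit += profit * persistency / pv_factor
    pv_premium += prem * persistency / pv_factor

print(round(pv_profit, 2), round(pv_premium, 2), round(pv_profit / pv_premium, 4))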

Optimized pricing

Originally, multivariate techniques were only used to model loss costs. They are now being used to model
renewal and conversion rates (i.e., demand models). These models are used to estimate the probability a new
business customer will accept a quote (i.e., conversion model) or an existing customer will renew (i.e., retention
model). These models will be built on datasets that include relevant information about each observation such
as amount of premium quoted, rate change (for retention models), and a measure of competitiveness, as well
as the outcome (i.e., whether the quote was accepted).

A loss cost model and demand model can be used in unison to estimate expected premium, losses, and total
profit for a given rate proposal. Using these models, we can test the effects of several rate change scenarios and
select the change that best meets our profitability and volume goals. Scenario testing is a precursor to full price
optimization—running thousands of scenarios to pick the optimal profit while achieving volume goals (or
optimizing volume while achieving minimum profit goals). These models range between simple and complex—
complex models incorporate longer timeframes as well as the propensity to cross sell other lines of business
(i.e., writing the auto may make writing the homeowners more likely). These models allow actuaries to more
systematically incorporate customer demand, rather than basing decisions on judgement alone.


Regulatory and operational considerations still apply—in recent years regulators have been pushing back on
price optimization in many states.

Underwriting cycles

The insurance industry has historically experienced cyclical results—when implementing rate changes, it is
important to understand where we are in this cycle and make changes appropriately. We use the terms “hard
market” and “soft market” to identify the tops and bottoms of this cycle.

Hard markets are periods of higher price levels and increased profitability for insurers. Companies will
generally respond to this increased profitability by trying to increase their market share, thus reducing their
prices and putting pressure on competitors to also reduce their prices. This will generally lead to a soft
market— a period with lower prices and lower profitability for insurers. Companies will then generally
respond by taking appropriate rate increases, relieving competitive pressures, and the cycle begins again.

The text presents the following useful picture to demonstrate the underwriting cycle:

[Underwriting cycle diagram: Hard Market → Profits Improve → Volume Targets Increased, Rates Decreased →
Supply Exceeds Demand → Soft Market → Profits Worsen → Volume Targets Reduced, Rates Increased →
Demand Exceeds Supply → back to Hard Market]

Basic Ratemaking

Chapter 14 – Implementation
Introduction

Prior chapters discussed the techniques used to project the fundamental insurance equation to the prospective
period (as well as reasons we may need to deviate from the indicated changes). This chapter discusses the
specific adjustments needed to calculate final rates when making a rate change, as well as non-pricing solutions
available to balance the fundamental insurance equation. The final part of the chapter discusses communicating
the expected effects of the rate change to key stakeholders (e.g., regulators or company management) as well
as results monitoring after implementation.

Non-pricing solutions to fix an imbalance in the fundamental insurance equation

If the projected fundamental insurance equation is not in balance (this section assumes there is an indicated
rate increase) the company can bring it in balance by reducing its costs (non-pricing solutions) or increasing
rates, or a combination of both. This section will discuss non-pricing solutions.

Reduce expense

The company can try to balance the fundamental insurance equation by reducing LAE (E_L), fixed expenses (E_F),
or variable expenses (V). These can be accomplished through reductions of things such as marketing budgets
or staffing levels.

Reduce loss costs

The company can also try to balance the fundamental insurance equation by reducing expected losses (L). The
three primary ways this can be achieved are:

1) Changing the make-up of the portfolio through tightening underwriting criteria or non-renewing
policies. Presumably this would change both loss and premium levels, but we would aim to remove
risks with inadequate premium hopefully reducing losses by more than the reduction in premium.
2) Reducing coverage levels (e.g., removing a covered peril or reducing sub-limits).
3) Implementing better loss controls (e.g., a workers’ compensation insurer implementing a return-to-
work program).

To the extent the company is able to accomplish non-pricing changes in the book of business the actuary should
adjust the projections and recalculate the overall rate indication.

Pricing solutions for an existing product

In most cases non-pricing solutions alone will not be sufficient and the company will decide to implement a
change to their rates. This section will cover the specific adjustments that need to be made to accomplish a
balanced rate change for an existing product (the last part of this chapter will discuss new products).

To calculate final rates, we must select a proposed premium level for the prospective period, finalize the
structure of the rating algorithm, select final rate relativities for each rating variable, and derive the base rate
that achieves the overall proposed premium.

In order to find the base rate that achieves our proposed premium, we must split the premium by the piece that
varies by risk characteristic (i.e., varies by the base rate and rate relativities) and the portion that does not (i.e.,
the fixed expenses). Looking at the pure premium indication formula, along with our knowledge that loss and
(A)LAE (L̄ + Ē_L) vary with the base rate and relativities while fixed expenses (Ē_F) do not, we can split the
premium formula between variable premium and flat or additive premium (where the P subscript on the
premium and additive premium, A, represents "proposed").


P̄_P = (L̄ + Ē_L + Ē_F) / (1 − V − Q_T) = (L̄ + Ē_L) / (1 − V − Q_T) + Ē_F / (1 − V − Q_T) = Variable Prem. + Additive Prem.

The variable premium is then equal to the proposed premium minus the additive premium.

Variable Prem. = P̄_P − A_P

The proposed levels for the variable, additive, and/or total premium can be found as usual using the preceding
formulas. The additive premium is denoted A_P in the prospective period and A_C in the current period. In addition
to fixed fees, some rating algorithms have other additive premium components which would be given on the
exam and added to the additive premium calculated above (if given as a premium, use it directly; if given as a cost,
divide by (1 − V − Q_T) to convert it to a premium by loading for variable expenses and profit).

The text example rating algorithm uses additive discounts, meaning we add up the discounts then subtract from
one rather than take the product of one minus each discount individually. Using this convention along with the
base rate (B), two multiplicative rating variables (R1 and R2), two discounts (D1 and D2), additive premium
(A), and exposure (X), we can find the prospective premium for a given risk as:

P_(P,i,j,k,m) = [B_P × R1_(P,i) × R2_(P,j) × (1.00 − D1_(P,k) − D2_(P,m)) + A_P] × X

This formula isn’t as intimidating as it may look upon first glance—the subscripts are what make it look more
difficult than it is. The P subscript indicates we are evaluating the prospective period, while i represents each
level of the first rating variable R1, j represents the level of R2, k represents the level of D1, and m represents
the level of D2. Therefore, the premium for a specific risk at the ith level of R1, jth level of R2, kth level of D1, mth
level of D2, with exposure level X can be found using the formula above—it is simply the product of the
base rate, rating variables and discounts, plus an additive premium, times the exposure.

The remainder of this section will use the following differentials (relativities) for each rating variable and
discount in the examples.

Current Proposed Current Proposed


R1 Exposures Differential Differential D1 Exposures Discount Discount
1 152,500 0.8000 0.9000 Y 156,625 5.00% 5.00%
2 570,000 1.0000 1.0000 N 712,875 0.00% 0.00%
3 147,000 1.2000 1.2500

Current Proposed Current Proposed


R2 Exposures Differential Differential D2 Exposures Discount Discount
A 235,000 1.0000 1.0000 Y 153,625 10.00% 5.00%
B 480,000 1.0500 0.9500 N 715,875 0.00% 0.00%
C 154,500 1.2000 1.3000


Derivation of base rate

Once we know our rating structure and have finalized the relativities we must find the base rate that results in
our proposed average premium. If there are no changes to rate relativities (differentials) this calculation is
rather straightforward. If there are changes to rate relativities the calculation of the base rate is trickier and
can be done using the following methods:

1) Extension of exposures method


2) Approximated average rate differentials method
3) Approximated change in average rate differentials method

No rate relativity changes

If there are no changes to the rate relativities, finding the new base rate is straightforward. The proposed base
rate is equal to the current base rate multiplied by the proposed average variable premium divided by the
current average variable premium:

B_P = B_C × (Proposed Avg. Variable Prem. / Current Avg. Variable Prem.) = B_C × (P̄_P − A_P) / (P̄_C − A_C)

If there are no additive premiums then the proposed base rate is simply the current base rate times the
proposed average premium divided by the current average premium.

More commonly, our rate relativities will change in our rate revision. When that is the case we will use one of
the following three methods to estimate the new base rate.

Extension of exposures method

The extension of exposures method re-rates policies using a seed base rate, B_S. We will use the resulting average
premium level to adjust the seeded base rate to result in the proposed average premium level. The seed base
rate we start with is not overly important, but we want to select something reasonable to begin with—the
current base rate times the indicated rate change would be a good initial seed.

The text gives us the following formula to calculate the average premium, P̄_S, using the seeded base rate, B_S. The
numerator looks a bit intimidating, but it is just the sum of the re-rated premium across all variables and
discounts (remember, the discounts here are added together, as discussed before). This calculation will be done
across some distribution of policies—likely the latest in-force distribution.

P̄_S = Σ_i Σ_j Σ_k Σ_m [B_S × R1_(P,i) × R2_(P,j) × (1.00 − D1_(P,k) − D2_(P,m)) + A_P] × X_(i,j,k,m) / Σ_i Σ_j Σ_k Σ_m X_(i,j,k,m)

which can be simplified to:

P̄_S = B_S × [ Σ_i Σ_j Σ_k Σ_m R1_(P,i) × R2_(P,j) × (1.00 − D1_(P,k) − D2_(P,m)) × X_(i,j,k,m) / Σ_i Σ_j Σ_k Σ_m X_(i,j,k,m) ] + A_P

The following example should hopefully make this clearer. We begin with a seed base rate value of $215.
The additive premium is $25. We are also given the exposures by segment in Col (1) along with the rating
characteristics of each segment in Cols (2) – (5). We can look up the relativities and discounts in Cols (6)-(9)
from the rate tables given earlier. With that information we can calculate the re-rated premium in Col (10).
The sum of the re-rated premium is the numerator of the first equation above. We divide by the total
exposures to find the average premium using the seeded base rate, P̄_S, as $246.83.


B_S     $215
A_P     $25

                   Level                      Relativity / Discount
Exposures     R1    R2    D1    D2        R1      R2      D1      D2     Premium Using B_S
   (1)       (2)   (3)   (4)   (5)       (6)     (7)     (8)     (9)           (10)
10,000 1 A Y Y 0.90 1.00 0.05 0.05 $1,991,500.00
7,500 2 A Y Y 1.00 1.00 0.05 0.05 $1,638,750.00
3,000 3 A Y Y 1.25 1.00 0.05 0.05 $800,625.00
9,000 1 B Y Y 0.90 0.95 0.05 0.05 $1,713,982.50
20,000 2 B Y Y 1.00 0.95 0.05 0.05 $4,176,500.00
5,000 3 B Y Y 1.25 0.95 0.05 0.05 $1,273,906.25
1,875 1 C Y Y 0.90 1.30 0.05 0.05 $471,365.63
5,000 2 C Y Y 1.00 1.30 0.05 0.05 $1,382,750.00
2,000 3 C Y Y 1.25 1.30 0.05 0.05 $678,875.00
3,500 1 A N Y 0.90 1.00 0.00 0.05 $730,887.50
7,500 2 A N Y 1.00 1.00 0.00 0.05 $1,719,375.00
3,500 3 A N Y 1.25 1.00 0.00 0.05 $981,093.75
15,000 1 B N Y 0.90 0.95 0.00 0.05 $2,994,506.25
36,000 2 B N Y 1.00 0.95 0.00 0.05 $7,885,350.00
9,000 3 B N Y 1.25 0.95 0.00 0.05 $2,407,921.88
3,750 1 C N Y 0.90 1.30 0.00 0.05 $989,896.88
10,000 2 C N Y 1.00 1.30 0.00 0.05 $2,905,250.00
2,000 3 C N Y 1.25 1.30 0.00 0.05 $713,812.50
3,500 1 A Y N 0.90 1.00 0.05 0.00 $730,887.50
7,500 2 A Y N 1.00 1.00 0.05 0.00 $1,719,375.00
3,500 3 A Y N 1.25 1.00 0.05 0.00 $981,093.75
15,000 1 B Y N 0.90 0.95 0.05 0.00 $2,994,506.25
36,000 2 B Y N 1.00 0.95 0.05 0.00 $7,885,350.00
9,000 3 B Y N 1.25 0.95 0.05 0.00 $2,407,921.88
3,750 1 C Y N 0.90 1.30 0.05 0.00 $989,896.88
10,000 2 C Y N 1.00 1.30 0.05 0.00 $2,905,250.00
5,000 3 C Y N 1.25 1.30 0.05 0.00 $1,784,531.25
48,000 1 A N N 0.90 1.00 0.00 0.00 $10,488,000.00
112,500 2 A N N 1.00 1.00 0.00 0.00 $27,000,000.00
25,000 3 A N N 1.25 1.00 0.00 0.00 $7,343,750.00
11,000 1 B N N 0.90 0.95 0.00 0.00 $2,297,075.00
250,000 2 B N N 1.00 0.95 0.00 0.00 $57,312,500.00
65,000 3 B N N 1.25 0.95 0.00 0.00 $18,220,312.50
28,125 1 C N N 0.90 1.30 0.00 0.00 $7,777,968.75
68,000 2 C N N 1.00 1.30 0.00 0.00 $20,706,000.00
15,000 3 C N N 1.25 1.30 0.00 0.00 $5,615,625.00
869,500 $214,616,391.88

(10) = (1) * (B_S * R1 * R2 * (1.0 - D1 - D2) + A_P)
(11) = (10) Total / (1) Total = P̄_S = $246.83

Now that we have the average premium using the seeded base rate, P̄_S, we can calculate the base rate needed
to produce the proposed average premium as:


B_P = B_S × (Proposed Avg. Variable Prem. / Re-rated Avg. Variable Prem.) = B_S × (P̄_P − A_P) / (P̄_S − A_P)

Note this equation is very similar to the adjustment to the current base rate in the scenario where there are no changes
to the rate relativities. The difference is that here we are adjusting the seeded base rate, and the denominator
is the average variable premium in the re-rated data set using the proposed additive premium.

Assuming our proposed average premium is $250, we can find the proposed base rate as:

B_P = B_S × (P̄_P − A_P) / (P̄_S − A_P) = $215 × ($250.00 − $25) / ($246.83 − $25) = $218.07

If we instead have a proposed rate change, because we are using the loss ratio method for our rate indication,
we know the proposed premium is simply the current premium multiplied by the rate change factor:

P̄_P = (1 + Δ%) × P̄_C,  so:  B_P = B_S × (P̄_P − A_P) / (P̄_S − A_P) = B_S × [(1 + Δ%) × P̄_C − A_P] / (P̄_S − A_P)
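
A stripped-down sketch of the re-rating and re-scaling steps is shown below. To keep it short it uses a made-up
three-segment book and only a single discount rather than the full 36-row example, but the mechanics are the
same:

# Simplified extension of exposures sketch: re-rate a small book with a seed base
# rate, then scale the seed so the book hits the proposed average premium.
# The book and the single discount are illustrative only.
r1 = {"1": 0.90, "2": 1.00, "3": 1.25}
r2 = {"A": 1.00, "B": 0.95, "C": 1.30}
d1 = {"Y": 0.05, "N": 0.00}

seed_base_rate = 215.0
additive_premium = 25.0
proposed_avg_premium = 250.0

# (exposures, R1 level, R2 level, D1 level) for each segment of a hypothetical in-force book.
book = [(10_000, "1", "A", "Y"), (20_000, "2", "B", "N"), (5_000, "3", "C", "N")]

def average_premium(base_rate):
    total = sum(x * (base_rate * r1[i] * r2[j] * (1.0 - d1[k]) + additive_premium)
                for x, i, j, k in book)
    return total / sum(x for x, *_ in book)

avg_at_seed = average_premium(seed_base_rate)
proposed_base_rate = seed_base_rate * (proposed_avg_premium - additive_premium) / (avg_at_seed - additive_premium)
print(round(avg_at_seed, 2), round(proposed_base_rate, 2))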

Approximated average rate differentials method

If re-rating every policy is not practical, we can estimate the needed base rate using a weighted average
proposed rate differential across all rating variables, s̄_P. Using the simplified formula from earlier, which
showed the average premium as a function of the base rate, average rate differential, and additive premium
(using the proposed premium and base rate instead of the seed values here), we can solve for the proposed base
rate, B_P, as follows:

P̄_P = B_P × [ Σ_i Σ_j Σ_k Σ_m R1_(P,i) × R2_(P,j) × (1.00 − D1_(P,k) − D2_(P,m)) × X_(i,j,k,m) / Σ_i Σ_j Σ_k Σ_m X_(i,j,k,m) ] + A_P

P̄_P = B_P × s̄_P + A_P

B_P = (P̄_P − A_P) / s̄_P = (Proposed Avg. Premium − Proposed Additive Prem.) / (Weighted Avg. Proposed Rate Differential)

We will know the proposed average premium level, P̄_P, and proposed additive premium, A_P, so to find the
proposed base rate, B_P, we just need to find (or more specifically, approximate) the weighted average proposed
rate differential, s̄_P. To approximate the weighted average proposed rate differential, s̄_P, we find the weighted
average rate relativity for each rating variable and discount, then apply them to the multiplicative portion of
our rating algorithm (i.e., excluding the base rate and additive premium). Using the rate relativities and
discounts given at the beginning of this section we can find the following weighted average relativities (using
exposures as the weight):


Proposed Proposed
R1 Exposures Differential D1 Exposures Discount
1 152,500 0.9000 Y 156,625 5.00%
2 570,000 1.0000 N 712,875 0.00%
3 147,000 1.2500 Total 869,500 0.90%
Total 869,500 1.0247
Proposed Proposed
R2 Exposures Differential D2 Exposures Discount
A 235,000 1.0000 Y 153,625 5.00%
B 480,000 0.9500 N 715,875 0.00%
C 154,500 1.3000 Total 869,500 0.88%
Total 869,500 1.0257
s̄_P ≈ 1.0323

The multiplicative portion of our rating algorithm is R1_(P,i) × R2_(P,j) × (1.00 − D1_(P,k) − D2_(P,m)), excluding the
base rate and additive premium, so we can approximate s̄_P as 1.0247 × 1.0257 × (1 − 0.0090 − 0.0088) = 1.0323 in
this example.

Assuming the same $250 proposed average premium and $25 additive premium we can find the proposed base
rate as:

B_P = (P̄_P − A_P) / s̄_P = ($250 − $25) / 1.0323 = $217.96

This is slightly different than the base rate found using extension of exposures ($218.07) due to this method
being an approximation. The difference is due to using exposures as the weight, which ignores any correlations
between exposures (the same issue we saw with univariate rating methods). We could improve on this
approximation by instead using on-level variable premium at the base level—base level meaning variable
premium divided by the current rate relativity. The on-level premium would also need to have been adjusted
at the class level—if done at the aggregate level we gain no information since the rate changes applied to each
class are the same (this is fine for the overall rate indication, but not sufficient for any analysis done by
classification).
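
The same approximation in code, using the proposed relativities, discounts, and exposure weights from the
tables above:

# Sketch of the approximated average rate differentials method.
r1 = [(0.90, 152_500), (1.00, 570_000), (1.25, 147_000)]     # (proposed relativity, exposures)
r2 = [(1.00, 235_000), (0.95, 480_000), (1.30, 154_500)]
d1 = [(0.05, 156_625), (0.00, 712_875)]                      # (proposed discount, exposures)
d2 = [(0.05, 153_625), (0.00, 715_875)]

def weighted_avg(pairs):
    return sum(value * weight for value, weight in pairs) / sum(weight for _, weight in pairs)

# The average proposed differential mirrors the multiplicative part of the rating algorithm:
# the discounts are additive, so their exposure-weighted averages are summed inside (1 - ...).
s_bar = weighted_avg(r1) * weighted_avg(r2) * (1.0 - weighted_avg(d1) - weighted_avg(d2))

proposed_avg_premium, additive_premium = 250.0, 25.0
proposed_base_rate = (proposed_avg_premium - additive_premium) / s_bar
print(round(s_bar, 4), round(proposed_base_rate, 2))         # about 1.0323 and 217.96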

Approximated change in average rate differentials method

If we are unable to re-rate each policy using the proposed relativities (extension of exposures method) and
unable to calculate the weighted average proposed rate relativity across all rating variables (approximated
average rate differentials method), and the variable premium portion of the rating algorithm is entirely
multiplicative, we can instead estimate the base rate using an estimate of the change in average rate differential
(thus only needing to evaluate those factors that are changing).

Recall that when there are no changes to the rate relativities we can find the base rate that will result in the
proposed premium as:


B_P = B_C × (P̄_P − A_P) / (P̄_C − A_C)

This approximated change in average rate differentials method is very similar—we just add a factor to account
for the amount of change already offset by changes in relativities (“relativities” and “differentials” are used
interchangeably):

B_P = B_C × [(P̄_P − A_P) / (P̄_C − A_C)] × [1.0 / (1.0 + Δs̄%)]


where Δs̄% is the percent change in average rate relativity. To follow the intuition, assume our average rate
relativity went up relative to the prior relativities—our base rate would then need to go up less than it would
have otherwise, which is accounted for by the rightmost factor in the prior equation being less than 1.0 ((1.0 +
Δs̄%) would be greater than 1.0 as the average relativity went up). The converse is also true if the average rate
relativity goes down.

The text divides each side of the equation by B_C and substitutes P̄_P = (1 + Δ%) × P̄_C (as seen before), resulting
in the following equivalent formula for the proposed base rate change:

1 + ΔB% = [(1 + Δ%) × P̄_C − A_P] / (P̄_C − A_C) × [1.0 / (1.0 + Δs̄%)]

We can find the average change in rate differential (1.0 + Δs̄%) as the product of the change in each individual
rating factor’s relativity. Ideally, we would find the change in each individual relativity using a variable
premium weighted average but this information is often difficult to obtain and the calculation becomes more
difficult if there are additive components in the rating algorithm. For this reason, we will instead use exposure
weighted average relativities, though this does introduce bias due to exposure correlations.

Finding the differential changes for the two rating variables (R1 and R2) is relatively straightforward—it is the
ratio of the exposure weighted average relativities. The differential change for the discounts is slightly
trickier—we must find the change in the total discount (since the individual discounts are additive, not
multiplicative, though the total discount is multiplicative). To find the change in the discount’s differential we
first need to find the current and proposed total discounts as one minus the exposure weighted individual
discounts, then find the change.

The total change in average rate relativity is then approximated as the product of the three differential
(relativity) changes.

                Current        Proposed                               Current     Proposed
R1    Exposures   Differential   Differential       D1     Exposures    Discount     Discount
1       152,500      0.8000        0.9000           Y        156,625      5.00%        5.00%
2       570,000      1.0000        1.0000           N        712,875      0.00%        0.00%
3       147,000      1.2000        1.2500           Total    869,500      0.90%        0.90%
Total   869,500      0.9987        1.0247

R1 Differential Change = 1.0247 / 0.9987 = 1.0260

                Current        Proposed                               Current     Proposed
R2    Exposures   Differential   Differential       D2     Exposures    Discount     Discount
A       235,000      1.0000        1.0000           Y        153,625     10.00%        5.00%
B       480,000      1.0500        0.9500           N        715,875      0.00%        0.00%
C       154,500      1.2000        1.3000           Total    869,500      1.77%        0.88%
Total   869,500      1.0631        1.0257

R2 Differential Change = 1.0257 / 1.0631 = 0.9648

Total Discount:   Current = 1.0 − .0090 − .0177 = 0.9733     Proposed = 1.0 − .0090 − .0088 = 0.9822

Discount Differential Change = 0.9822 / 0.9733 = 1.0091

1.0 + Δs̄% ≈ 1.0260 × 0.9648 × 1.0091 = 0.9989

The text shows several (ugly) formulas for these calculations we have just done, but if you understand how to
do the calculation, being able to write them out in mathematical notation shouldn’t be needed for the exam.

Given the following information we can now estimate the base rate needed to result in our proposed
premium (assume the additive premium does not change):


Current Base Rate (B_C)            $210.00
Current Average Premium (P̄_C)      $242.13
Proposed Rate Change (Δ%)            3.25%
Additive Premium (A_P = A_C)        $25.00

B_P = $210 × [(1.0325 × $242.13 − $25) / ($242.13 − $25)] × [1.0 / 0.9989] = $217.84

Again, this base rate is slightly different than the base rate found using extension of exposures ($218.07) due
to this method being an approximation. Using variable premiums as weights, as discussed earlier, would
improve the accuracy of the estimation.
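
The full chain of calculations above can be checked with a short script (relativities, discounts, exposures, and
the selected change are taken from the tables and inputs above):

# Sketch of the approximated change in average rate differentials method.
def weighted_avg(pairs):
    return sum(value * weight for value, weight in pairs) / sum(weight for _, weight in pairs)

r1_cur  = [(0.80, 152_500), (1.00, 570_000), (1.20, 147_000)]
r1_prop = [(0.90, 152_500), (1.00, 570_000), (1.25, 147_000)]
r2_cur  = [(1.00, 235_000), (1.05, 480_000), (1.20, 154_500)]
r2_prop = [(1.00, 235_000), (0.95, 480_000), (1.30, 154_500)]
d1_cur  = d1_prop = [(0.05, 156_625), (0.00, 712_875)]
d2_cur  = [(0.10, 153_625), (0.00, 715_875)]
d2_prop = [(0.05, 153_625), (0.00, 715_875)]

# Change in each multiplicative differential, plus the change in the total discount factor
# (the individual discounts are additive, so the change is measured on 1 - sum of averages).
chg_r1 = weighted_avg(r1_prop) / weighted_avg(r1_cur)
chg_r2 = weighted_avg(r2_prop) / weighted_avg(r2_cur)
chg_disc = (1.0 - weighted_avg(d1_prop) - weighted_avg(d2_prop)) / (1.0 - weighted_avg(d1_cur) - weighted_avg(d2_cur))
chg_avg_differential = chg_r1 * chg_r2 * chg_disc            # roughly 0.9989

current_base, current_avg_prem = 210.0, 242.13
additive_premium, proposed_change = 25.0, 0.0325
proposed_base = (current_base
                 * ((1 + proposed_change) * current_avg_prem - additive_premium)
                 / (current_avg_prem - additive_premium)
                 / chg_avg_differential)
print(round(chg_avg_differential, 4), round(proposed_base, 2))   # about 0.9989 and 217.84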

Other considerations

Additional items we need to consider when implementing new rates include the following:

1) Minimum premium
2) Limit on the premium effect of a single variable
3) Premium transition rules
4) Expected distribution

Minimum premium

Some rating algorithms have a minimum premium requirement which is intended to ensure the premium
covers expected fixed expenses plus some minimum expected. The effect of this minimum is calculated as:


Effect of Minimum Premium = (Premium with Minimum Premium / Premium without Minimum Premium) − 1.0

To adjust the base rate to account for this additional premium collected via the minimum premium, we multiply
the base rate by the following offset factor:

Offset Factor = 1.0 / (1.0 + Effect of Minimum Premium) = Premium without Minimum Premium / Premium with Minimum Premium

Limit on the premium effect of a single variable

In practice we may decide to limit the rate change on any single variable. If we limit the relativity for any level
of the rating variable, the proposed average rate differential will go down across all levels of the variable, which
will require us to increase the proposed base rate to get to the proposed overall average premium.

The following example is for a rating variable with four levels. We are proposing an overall rate increase of
10.0% while limiting the change to any one rating level to 15.0%. Prior to limiting any changes, we have the
following information from our rate analysis:


         Current     Current      Proposed     Relativity   Off-Balance   Selected         Total     Premium on
Level    Premium     Relativity   Relativity   Change       Factor        Overall Change   Change    Proposed Rates
 (1)       (2)          (3)          (4)          (5)           (6)            (7)            (8)          (9)
1 $175,000 0.8500 0.9500 11.76% 0.9650 10.00% 18.63% $207,611
2 $250,000 1.0000 1.0000 0.00% 0.9650 10.00% 6.15% $265,367
3 $200,000 1.1000 1.1200 1.82% 0.9650 10.00% 8.08% $216,154
4 $125,000 1.2500 1.2800 2.40% 0.9650 10.00% 8.69% $135,868
Total $750,000 3.63% 0.9650 10.00% 10.00% $825,000

(5) = (4) / (3) - 1.0


(5) Total = (5) weighted by (2)
(6) = 1.0 / (1.0 + (5) Total)
(8) = (1.0+ (5)) * (6) * (1.0+ (7)) - 1.0
(9) = (2) * (1 + (8))

As we can see, the total change to Level 1 is greater than our intended 15.0% cap. We can solve for the Level 1
relativity (X) that results in a 15.0% change above by setting the revised relativity change (X/.85) times the off-
balance factor times the overall change factor equal to 1.15, then solving for the proposed relativity: X/.85 ×
.9650 × 1.10 = 1.15, giving X = 0.9209 (or equivalently, reducing the original proposed relativity by the
amount capped; 0.9500 * 1.1500/1.1863 = 0.9209). This is not our final relativity though, since adjusting this
relativity with no other changes would result in the overall rate change being less than our 10.0% target (i.e.,
there will be a premium shortfall from Level 1). We can calculate the shortfall as the proposed premium for
Level 1 ($207,611) minus the capped premium ($175,000 ∗ 1.15 = $201,250), or $6,361.

We must load this shortfall into the other levels to achieve our overall 10.0% proposed rate increase. The load
into Levels 2-4 is the premium shortfall divided by the proposed premium from Levels 2-4, or 1.030% (=
$6,361/($265,367 + $216,154 + $135,868)). Instead of increasing the Level 2-4 relativities (which would
result in the base factor being greater than 1.0), we can equivalently load an additional 1.030% into the off-
balance factor while reducing the Level 1 relativity by an offsetting amount (since not doing so would result in
a change greater than 15.0%). This would result in a final Level 1 relativity of 0.9209/1.0103 = 0.9115 and
off-balance factor of 0.9650 ∗ 1.0103 = 0.9749 (as we will see next, this is also the off-balance factor we would
find when calculated using the weighted average relativity change with the new Level 1 relativity).

Plugging these values in we can see that this results in the desired 15.0% capped change to Level 1 while
accomplishing our overall target rate increase of 10.0%. (The calculated off-balance factor from above is also
equal to 1.0 divided by 1.0 plus the weighted average relativity change, confirming our off-balance and relativity
adjustments were proper)

         Current     Current      Proposed     Relativity   Off-Balance   Selected         Total     Premium on
Level    Premium     Relativity   Relativity   Change       Factor        Overall Change   Change    Proposed Rates
 (1)       (2)          (3)          (4)          (5)           (6)            (7)            (8)          (9)
1 $175,000 0.8500 0.9115 7.24% 0.9749 10.00% 15.00% $201,250
2 $250,000 1.0000 1.0000 0.00% 0.9749 10.00% 7.24% $268,101
3 $200,000 1.1000 1.1200 1.82% 0.9749 10.00% 9.19% $218,381
4 $125,000 1.2500 1.2800 2.40% 0.9749 10.00% 9.81% $137,268
Total $750,000 2.57% 0.9749 10.00% 10.00% $825,000
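
The Level 1 capping adjustment can also be traced through in code. The sketch below reproduces the shortfall,
the final Level 1 relativity, and the reloaded off-balance factor from the example (premiums and relativities as
in the tables; variable names are mine):

# Sketch of capping a non-base level at +15% and reloading the premium shortfall.
levels = {  # level: (current premium, current relativity, proposed relativity)
    1: (175_000, 0.85, 0.95),
    2: (250_000, 1.00, 1.00),
    3: (200_000, 1.10, 1.12),
    4: (125_000, 1.25, 1.28),
}
overall_change, cap = 0.10, 0.15

total_prem = sum(prem for prem, _, _ in levels.values())
avg_rel_change = sum(prem * (prop / cur - 1.0) for prem, cur, prop in levels.values()) / total_prem
off_balance = 1.0 / (1.0 + avg_rel_change)

# Uncapped total change and proposed premium by level.
total_change = {lvl: (prop / cur) * off_balance * (1 + overall_change) - 1.0
                for lvl, (prem, cur, prop) in levels.items()}
proposed_prem = {lvl: prem * (1 + total_change[lvl]) for lvl, (prem, cur, prop) in levels.items()}

# Cap Level 1 and spread the resulting shortfall over the other levels via the off-balance.
shortfall = proposed_prem[1] - levels[1][0] * (1 + cap)
reload = shortfall / sum(p for lvl, p in proposed_prem.items() if lvl != 1)
capped_rel = levels[1][2] * (1 + cap) / (1 + total_change[1])   # relativity giving exactly +15%
final_rel_level1 = capped_rel / (1 + reload)                    # offset the reload so Level 1 stays at +15%
final_off_balance = off_balance * (1 + reload)
print(round(shortfall), round(final_rel_level1, 4), round(final_off_balance, 4))
# about 6361, 0.9115, and 0.9749, matching the text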

If the level that is being capped is the base class, the calculation is a little different. We will apply the same
15.0% cap in the example below, where the level being capped is the base class.


         Current     Current      Proposed     Relativity   Off-Balance   Selected         Total     Premium on
Level    Premium     Relativity   Relativity   Change       Factor        Overall Change   Change    Proposed Rates
 (1)       (2)          (3)          (4)          (5)           (6)            (7)            (8)          (9)
1 $175,000 0.8500 0.7000 -17.65% 1.0981 10.00% -0.53% $174,074
2 $250,000 1.0000 1.0000 0.00% 1.0981 10.00% 20.79% $301,965
3 $200,000 1.1000 0.9950 -9.55% 1.0981 10.00% 9.26% $218,513
4 $125,000 1.2500 1.0800 -13.60% 1.0981 10.00% 4.36% $130,449
Total $750,000 -8.93% 1.0981 10.00% 10.00% $825,000

(5) = (4) / (3) - 1.0


(5) Total = (5) weighted by (2)
(6) = 1.0 / (1.0 + (5) Total)
(8) = (1.0+ (5)) * (6) * (1.0+ (7)) - 1.0
(9) = (2) * (1 + (8))

In this case we would need to reduce the Level 2 relativity by 1.1500/1.2079 = 0.9521, which would result in
a shortfall of $14,465 ($301,965 − $250,000 ∗ 1.15). Using the same logic from the prior example, we would
multiply the Level 2 relativity by 0.9521, but since this is our base class we will re-base each relativity instead
by dividing by the reduced Level 2 relativity, resulting in a Level 2 relativity of 1.00 and an increase in the other
relativities of 1.0503 (= 1.00/0.9521). In addition, we need to further increase the other level’s relativities to
make up for the shortfall from capping Level 2—this increase is the shortfall divided by the proposed premium
in the other levels, or 2.766% (= $14,465/($174,074 + $218,513 + $130,449)). In total, we increase the non-
base levels by a factor of 1.0503 ∗ 1.02766 = 1.0794. In this situation we do not directly change the off-
balance—both adjustments are applied to the relativities themselves—the resulting off-balance is calculated
as 1.0 over 1.0 plus the overall relativity change.

Again, plugging these values in we can see that this results in the desired 15.0% capped change to Level 2 while
accomplishing our overall target rate increase of 10.0%.

         Current     Current      Proposed     Relativity   Off-Balance   Selected         Total     Premium on
Level    Premium     Relativity   Relativity   Change       Factor        Overall Change   Change    Proposed Rates
1 $175,000 0.8500 0.7556 -11.11% 1.0455 10.00% 2.22% $178,888
2 $250,000 1.0000 1.0000 0.00% 1.0455 10.00% 15.00% $287,500
3 $200,000 1.1000 1.0740 -2.37% 1.0455 10.00% 12.28% $224,556
4 $125,000 1.2500 1.1657 -6.74% 1.0455 10.00% 7.25% $134,056
Total $750,000 -4.35% 1.0455 10.00% 10.00% $825,000
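
The re-basing step described above can also be sketched in a few lines of Python (my own illustration; the uncapped Level premiums are taken from column (9) of the middle table):

```python
# Sketch of capping the base class (Level 2) at 15% and re-basing the relativities.
proposed_rel  = {1: 0.7000, 2: 1.0000, 3: 0.9950, 4: 1.0800}
uncapped_prem = {1: 174_074, 2: 301_965, 3: 218_513, 4: 130_449}  # col (9) before capping
base_prem, cap, base = 250_000, 1.15, 2

cap_factor = base_prem * cap / uncapped_prem[base]   # 1.1500/1.2079 = 0.9521
shortfall  = uncapped_prem[base] - base_prem * cap   # $14,465

# Re-base every relativity by the reduced base relativity, then spread the shortfall
# over the non-base levels' proposed premium (+2.766%).
spread  = 1.0 + shortfall / sum(v for k, v in uncapped_prem.items() if k != base)
new_rel = {lvl: (1.0 if lvl == base else r / cap_factor * spread)
           for lvl, r in proposed_rel.items()}
print({lvl: round(r, 4) for lvl, r in new_rel.items()})  # {1: 0.7556, 2: 1.0, 3: 1.074, 4: 1.1657}
```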

Premium transition rules

The prior section discussed capping the effect for any one rating variable—the effect on any individual insured
will be a function of the changes of all rating variables applicable to that individual, so the effect could be quite
large. We may want to limit the rate increase on any individual to reduce the number of customers lost to large
rate increases. We can accomplish this through a premium transition rule, which sets a minimum and/or maximum
change in premium for any individual—any capped rate change will be made in the following year(s). For
example, if we decide to cap the maximum increase at 25%, but an individual has an indicated rate increase of
35%, we would cap the rate increase in the first renewal at 25% and take the remaining 8% (1.35/1.25-1.0)
increase (plus any additional changes) at the second renewal.
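
A two-line sketch of that deferral arithmetic (my own illustration, using the example's 25% cap and 35% indicated increase):

```python
# Splitting a 35% indicated increase across renewals under a 25% transition cap.
indicated, cap = 1.35, 1.25
first_renewal  = min(indicated, cap)         # +25% taken at the first renewal
second_renewal = indicated / first_renewal   # 1.35/1.25 = 1.08, the remaining 8%
print(f"{first_renewal - 1:.0%} now, {second_renewal - 1:.0%} at the second renewal")
```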

Premium transition rules are generally set up so that only changes in company rates apply—for example, if
there is a change in insured characteristics (e.g., involved in an accident, or buys a new car) the effects of those
changes would not count—we could calculate the new premium on new characteristics and old premium on
new characteristics to determine if a cap applies. We also need to define the time period over which the transition applies—
one additional renewal is often the best option, as it avoids the complexity that results if there are multiple
overlapping transitions. The text mentions that we will need to account for the effect the caps have on the base
rate but doesn’t go into detail on the calculation.

Expected distribution

All three methods used for calculating a base rate when relativities change used an exposure distribution in the
calculation. We want a distribution that represents the expected exposures in the prospective period, but we
don’t know this distribution so we must estimate it, commonly as the latest in-force distribution. If rate changes
are small and applied relatively uniformly by insured, using the latest in-force distribution is likely very
reasonable since we wouldn’t expect significant changes. If we implement large rate changes that vary
significantly by insured we would expect large changes in the distribution of our book of business—this will
affect our base rate calculation, and if risks are not equally profitable it would affect our overall rate level.

Pricing solutions for a new product

When calculating rates for new insurance products we generally will not have the internal data needed to
develop rates. In these situations, we will rely on information from rating bureaus and/or competitors' rates,
making proper adjustments for our situation. When using competitor information, we need to take care since
there may be many differences between their company and ours that would result in different costs and
premiums. Examples include underwriting, claims handling, and other operations—we need to use judgement
to adjust.

A sample adjustment would be if our company expects its variable expenses to be higher than the
competitor's—we would multiply the base rate and expense fee (additive fixed premium) by the competitor's
variable permissible loss ratio divided by our variable permissible loss ratio. Another example would be if we
expect a different level of expected losses—we could adjust the base rate by the expected percentage difference.
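
As a rough illustration of the variable-expense adjustment (the rates and permissible loss ratios below are hypothetical, not from the text):

```python
# Hypothetical illustration: adjusting a competitor's base rate and expense fee
# when our variable expense provision differs from theirs.
competitor_base_rate, competitor_expense_fee = 100.00, 25.00
competitor_vplr = 0.75   # competitor's variable permissible loss ratio (assumed)
our_vplr        = 0.70   # our variable permissible loss ratio (assumed; higher variable expenses)

adjustment      = competitor_vplr / our_vplr           # > 1.0 since our expenses are higher
our_base_rate   = competitor_base_rate * adjustment    # 107.14
our_expense_fee = competitor_expense_fee * adjustment  # 26.79
```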


Basic Ratemaking

Chapter 15 – Commercial Lines Rating Mechanisms


Introduction

The text so far has focused on manual (classification) ratemaking (i.e., setting rates that can be found using a
rating manual for insureds with similar characteristics, not by individual). Grouping commercial risks into
homogenous classifications is often not feasible and many commercial risks are large enough to use their own
historic experience in whole or in part to develop their own future rate. This chapter will cover methods
actuaries have developed to price these commercial risks.

This chapter will look at methods that modify manual rates (experience rating and schedule rating) as well as
methods that develop a premium for large commercial risks (composite rating, large deductible policies, and
retrospective rating policies). Commercial risks may be subject to more than one of these rating mechanisms
at a time, depending on the policy provisions selected.

Manual rate modification plans

As the name suggests, manual rate modification plans start with a manual rate and adjust based on the
experience of the company being insured (experience rating) and/or characteristics not reflected in the manual
rates or experience (schedule rating).

Experience rating

Experience rating is widely used in commercial lines of insurance. Experience rating is used when the
individual insured’s historical experience (after appropriate adjustments) is predictive of future experience.

Experience rating modifies manual rates using a credibility weighted average of the actual and expected results.
It is important that the expected and actual results are measured on the same basis (e.g., any limits applied are
the same, the timeframe of the comparison is the same, etc.). Experience rating plans can be defined to compare
expected versus actual paid loss (and ALAE), reported loss (and ALAE), ultimate loss (and ALAE) in the
experience periods defined under the plan, or ultimate loss (and ALAE) that has been adjusted to the
prospective timeframe. I put ALAE in parenthesis since an experience rating plan can be defined to include or
exclude ALAE in the comparison.

Experience rating plans generally use 2-5 years of experience when calculating the rate modification (mod)
factor—shorter timeframes will be more reactive to changes but also more volatile (and vice versa). Many
experience rating plans apply a per occurrence limit on the losses entering the modification formula as to limit
the effects of unusually large claims or catastrophic losses. The limits can be defined to apply to losses or losses
and ALAE, though we must apply the same limits to both the actual and expected losses. Similarly, we must
compare actual and expected losses at the same level (ultimate or undeveloped) and at the same timeframe
(unadjusted historical or developed/trended prospective). Experience mods are often subject to a maximum
and minimum. To the extent mods affect the overall premium we will need to have an off-balance adjustment,
similar to those in Chapter 14.

We will now look at (a simplified version of) two specific plans: the ISO commercial general liability experience
rating plan and the NCCI workers compensation experience rating plan.

The ISO commercial general liability (GL) experience rating plan has the following rating mod:

Credit/Debit = [(AER − EER) / EER] ∗ Z

where AER is the actual experience ratio, EER is the expected experience ratio, and Z is the credibility. Losses entering each will be
limited to the basic limit, subject to a maximum single limit per occurrence (MSL), and compared at historic
ultimate levels (developed to ultimate, but not trended). There is no (1-Z) term since the complement is no
change, so it falls out (i.e., 0 times 1.0 – Z).

The EER will be given on the exam and represents the expected remaining portion of the ultimate basic limits
losses after applying the MSL (i.e., basic limits losses multiplied by the EER will be at basic limits and MSL).
Therefore, 1.0 - EER = portion of basic limit losses expected to be eliminated by the MSL.

The AER is the actual losses limited to the basic limit and MSL plus an IBNR provision, all divided by the
company subject loss cost. The company subject loss cost (CSLC) is the current company basic limits costs (i.e.,
expected losses limited to the basic limit) detrended to the historical period, as this plan compares losses at
untrended actual levels (the CSLC is not limited to the MSL). The IBNR provision used in the AER is the company
subject loss costs times the EER (which will result in expected losses limited to the basic limit and MSL), further
multiplied by the expected percentage unreported to estimate the unreported (IBNR) portion—this is a
Bornhuetter-Ferguson style estimate using expected losses.

The ISO experience rating plan uses the prior three years of exposure. In the following table we are given the
current level basic limits loss and ALAE costs in Col (3)—this is the expected loss and ALAE limited to the basic
limit but not MSL, and at current levels. Since we are comparing losses at historic levels we need to detrend to
the historic period. The detrend factors are given in Col (4) while the detrended company subject basic limits
cost is found in Col (5)—the sum of Col (5) is the CSLC which is the denominator of the AER. We also need to
find the IBNR provision (expected unreported losses subject to the basic limit and MSL). Col (5) is not limited
to the MSL, so we multiply it by the EER, resulting in expected losses limited to both the basic limit and MSL.
We also multiply it by the expected unreported percentage to find the expected unreported losses limited to
the basic limit and MSL (our IBNR provision), calculated in Col (8).

                             Current Company               Company Subject    Expected     Expected %       Expected B/L Losses
                             B/L Loss & ALAE    Detrend    B/L Loss & ALAE    Experience   Unreported       & ALAE Unreported
Policy Period    Coverage    Costs              Factors    Costs              Ratio        at 3/31/18       at 3/31/18
(1)              (2)         (3)                (4)        (5)                (6)          (7)              (8)
7/1/14-15 Prem/Ops $104,157 0.822 $85,617 0.850 0.217 $15,792
Products $68,715 0.848 $58,270 0.850 0.455 $22,536
7/1/15-16 Prem/Ops $104,157 0.855 $89,054 0.850 0.315 $23,844
Products $68,715 0.893 $61,362 0.850 0.535 $27,905
7/1/16-17 Prem/Ops $104,157 0.901 $93,845 0.850 0.402 $32,067
Products $68,715 0.921 $63,287 0.850 0.621 $33,406
Total $451,436 $155,550

(4) = Reciprocal of the Loss and ALAE Trend.


(5) = (3) x (4)
(6), (7) = Given
(8) = (5) x (6) x (7)

Now we can calculate our credit or debit. We are given the reported losses from the experience period in Row
(1) of the following table. The IBNR provision in Row (2) was calculated in Col (8) of the prior table. The
expected ultimate losses subject to the basic limit and MSL in Row (3) is the sum of Rows (1) and (2). The CSLC
in Row (4) was calculated in Col (5) of the prior table. The AER in Row (5) is the expected ultimate losses limited
to the basic limit and MSL in Row (3) divided by the CSLC in Row (4). The EER and credibility are given in Rows
(6) and (7). We can then calculate the credit/debit in Row (8) using the formula given earlier—in this case we
find an 8.5% debit mod. A positive mod results in a rate increase and is called a debit mod, while a negative mod
results in a rate decrease and is called a credit mod.

(1) Reported Losses and ALAE at 3/31/18 Limited by Basic Limits and MSL $287,131 = Given
(2) Expected Unreported Losses and ALAE at 3/31/18 Limited by Basic Limits and MSL $155,550 = (8) Total Prior Table
(3) Projected Ultimate Losses and ALAE Limited by Basic Limits and MSL $442,681 = (1) + (2)
(4) Company Subject Basic Limit Loss and ALAE Costs (CSLC) $451,436 = (5) Total Prior Table
(5) Actual Experience Ratio (AER) 0.981 = (3) / (4)
(6) Expected Experience Ratio (EER) 0.850 = Given
(7) Credibility (Z) 0.550 = Given
(8) Credit/Debit 8.5% = [(5)-(6)]/(6)*(7)
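
The full AER/EER calculation above can be condensed into a short Python sketch (my own illustration; all inputs are the example's figures):

```python
# Sketch of the ISO GL experience rating example above.
# Each policy period/coverage: (current B/L loss & ALAE cost, detrend factor, % unreported)
periods = [(104_157, 0.822, 0.217), (68_715, 0.848, 0.455),
           (104_157, 0.855, 0.315), (68_715, 0.893, 0.535),
           (104_157, 0.901, 0.402), (68_715, 0.921, 0.621)]
EER, Z = 0.850, 0.550
reported = 287_131   # reported loss & ALAE limited to basic limit and MSL

cslc = sum(cost * detrend for cost, detrend, _ in periods)                    # ~451,436
ibnr = sum(cost * detrend * EER * unrep for cost, detrend, unrep in periods)  # ~155,550
aer  = (reported + ibnr) / cslc                                               # ~0.981
mod  = (aer - EER) / EER * Z                                                  # ~+8.5% debit
print(round(cslc), round(ibnr), round(aer, 3), f"{mod:.1%}")
```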

The NCCI workers compensation experience rating plan is much different from the ISO commercial general
liability plan. The main difference in these simplified versions is that the NCCI plan is a split plan, meaning it
splits losses into primary and excess layers and uses both, while the ISO plan only uses losses limited to the
basic limit and MSL.

A generic split plan would look like the following, where the p subscript represents the primary layer, the e
subscript represents the excess layer, A represents actual losses, and E represents expected losses.

Mod = [Zp ∗ Ap + (1.0 − Zp) ∗ Ep + Ze ∗ Ae + (1.0 − Ze) ∗ Ee] / E

This formula looks a bit intimidating but it is just a standard credibility weighting of Z *Actual + (1-Z)*Expected,
split between primary and excess, added together.

The NCCI plan uses a different, but mathematically equivalent modification formula:

Mod = [Ap + B + w ∗ Ae + (1.0 − w) ∗ Ee] / (E + B)

where

B = Ballast Value = E/Zp − E = E ∗ (1.0 − Zp)/Zp
w = Weighting Value = Ze/Zp

The NCCI formula numerator can be thought of as the actual primary losses plus the ballast value plus a
weighted average of actual and expected excess losses using w as the weight given to the actual excess losses.

The primary and excess credibility factors are not expressed directly in the NCCI formula, instead they are
contained in B and w. The values of B and w are looked up from tables provided by the NCCI, based on expected
losses, with both values increasing as expected losses increase.

The NCCI plan uses $5,000 as the split between primary and excess (as far as the exam is concerned; if using
this on the job, note that this value has since been updated several times). Individual losses are split between the
primary (Ap) and excess (Ae) layers. The actual experience (A) used in the plan is the reported losses from the
past three years, evaluated at 18, 30, and 42 months respectively.

The expected experience (E) is calculated as actual payroll (in hundreds) times an expected loss rate (ELR).
The ELR represents expected reported losses at each evaluation age (18, 30, and 42 months). The expected
losses are then split between primary and excess using the D-ratio. The D-ratio is the expected primary losses
as a percentage of total, so the expected primary losses (Ep) are E ∗ D-ratio, while the expected excess losses (Ee)
are E ∗ (1.0 − D-ratio).

The following example shows the NCCI mod calculation for a policy that will be in effect 9-1-18. We are given
the following loss information for the prior three policy years—the first and third year have three claims each
while the middle year has four claims. We split each claim between the primary ($5,000 limit) layer and excess
layer. We then sum each to find the actual primary (Ap) and actual excess (Ae) losses.

                            Reported     Primary     Excess
Policy Year     Claim #     Losses       Losses      Losses
                            (1)          (2)         (3)
9/1/14-15 1 $2,500 $2,500 $0
2 $125,000 $5,000 $120,000
3 $32,000 $5,000 $27,000
9/1/15-16 1 $37,500 $5,000 $32,500
2 $62,500 $5,000 $57,500
3 $4,000 $4,000 $0
4 $12,500 $5,000 $7,500
9/1/16-17 1 $17,500 $5,000 $12,500
2 $1,575 $1,575 $0
3 $89,000 $5,000 $84,000
Total $384,075 $43,075 $341,000

(2) = Minimum [ (1), $5,000 ]


(3) = (1) - (2)

Given the payroll, ELRs, and D-ratio below we can calculate the expected primary (Ep) and expected excess
(Ee) losses. Remember, the payroll (divided by 100) times the ELR is the expected losses. The D-ratio is the
primary percent of total losses, so the expected primary losses (Ep) are the expected losses times the D-ratio.
The expected excess losses are then the expected losses minus the expected primary losses (or expected
losses times one minus the D-ratio).

                              Expected      Expected                 Expected Primary   Expected Excess
Policy Year      Payroll      Loss Rate     Losses       D-Ratio     Losses             Losses
                 (1)          (2)           (3)          (4)         (5)                (6)
9/1/14-15 $2,318,500 3.47 $80,452 0.22 $17,699 $62,753
9/1/15-16 $3,578,100 2.91 $104,123 0.22 $22,907 $81,216
9/1/16-17 $3,947,150 2.69 $106,178 0.22 $23,359 $82,819
Total $9,843,750 $290,753 $63,966 $226,787

(3) = [ (1) / $100 ] x (2)


(5) = (3) x (4)
(6) = (3) - (5)

If we are told that the weighting value (w) is 0.28 and the ballast value (B) is 35,000 we have everything we
need to solve for the mod:

Mod = [Ap + B + w ∗ Ae + (1.0 − w) ∗ Ee] / (E + B)
    = [43,075 + 35,000 + 0.28 ∗ 341,000 + (1 − 0.28) ∗ 226,787] / (290,753 + 35,000)
    = 1.034

The experience mod in this example is 1.034, which would be applied to the policy’s manual premium. The
resulting premium is called the standard premium.

This example assumed there was only one class. The ELR and D-ratio will vary by class, so if there is more
than one class we would need to expand the expected loss table to include payroll, ELR, and D-ratios by class,
in addition to year.
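
Pulling the pieces together, a Python sketch of the full mod calculation (my own illustration; the claims, payrolls, ELRs, D-ratio, w, and B are the values given above for this single-class example):

```python
# Sketch of the NCCI experience mod example above (single class).
split = 5_000
claims = [2_500, 125_000, 32_000, 37_500, 62_500, 4_000, 12_500, 17_500, 1_575, 89_000]
payroll_elr = [(2_318_500, 3.47), (3_578_100, 2.91), (3_947_150, 2.69)]
d_ratio, w, B = 0.22, 0.28, 35_000

Ap = sum(min(c, split) for c in claims)            # actual primary losses: 43,075
Ae = sum(max(c - split, 0) for c in claims)        # actual excess losses: 341,000
E  = sum(p / 100 * elr for p, elr in payroll_elr)  # expected losses: ~290,753
Ep, Ee = E * d_ratio, E * (1 - d_ratio)            # split expected losses via the D-ratio

mod = (Ap + B + w * Ae + (1 - w) * Ee) / (E + B)
print(round(mod, 3))   # 1.034
```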


Schedule rating

Schedule rating (called such since credits and debits are based on a schedule) is another way to modify the
manual rates. Schedule rating is a way to vary rates for characteristics of the insured that are not captured by
the manual rate and are not reflected in the prior experience (assuming experience rating applies). If we use
schedule rating to adjust for a characteristic that is present in the historic experience, and experience rating
applies, we will be double counting its effect through the experience mod and schedule rating adjustment.

Schedule rating credits and debits are applied by underwriters evaluating the risk. There are generally various
categories evaluated, each with a minimum and maximum credit/debit. The credits/debits are then summed
across categories to find the total schedule credit/debit, which is often subject to an overall minimum and
maximum.

Rating techniques for large commercial risks

The methods covered so far have modified a manual rate. We will now cover methods for developing a premium
for large commercial risks. The methods discussed are loss-rated composite risks, large deductible policies, and
retrospective rating.

Composite rating

Composite rating is a method that uses the risk’s historical losses to develop some or all of the risk’s future
premium. This rating mechanism is often used with risks that are large and complex and have various
exposures for their various insurance policies that are difficult to track throughout the policy period.

The example we will look at is the ISO Composite Rating Plan for Loss-Rated Risks—if the risk is large enough
to be eligible for the plan, their historical experience is implicitly considered 100% credible for the purpose of
developing the composite rate.

In this plan we will develop a composite rate for the prospective period. We do this by first trending and
developing the last five years of reported loss and ALAE to ultimate values at the prospective level, then loading
them for other expenses by dividing by the expected loss & ALAE ratio, to estimate an adjusted premium.

Adjusted Premium¹ = Trended Ultimate Loss & ALAE / Expected Loss & ALAE Ratio

We will compare this adjusted (total) premium (over the five years) to the last five years of trended (if
composite exposures. In this step we trend the selected composite exposure— the risk may have coverages
that use various exposure bases, but we will select one to use for the overall composite rate. If the composite
exposure base is inflation sensitive, like payroll or sales, it needs to be trended, but not if it is a non-inflation
sensitive exposure base like number of vehicles.

The sample risk in the following example has several coverages, and thus several exposure bases from those
coverages. For example, they may have workers compensation insurance with payroll (in hundreds) as the
exposure base, commercial automobile insurance with number of vehicles as the exposure base, and general
liability insurance with an exposure base of sales (in thousands). For this risk we may determine sales
(receipts) is the best overall proxy for activity level and risk and use that as the composite exposure—therefore
we would use sales (in thousands) as the exposure base for all perils in composite rating.

¹ Premium ∗ Loss & ALAE Ratio = Ultimate Loss & ALAE, so we can estimate premium as Ultimate Loss &
ALAE / Loss & ALAE Ratio.


We are given the reported loss and ALAE, as well as applicable LDFs, by coverage type in the following table. In
our example the selected loss trend factor is 4.0% and we are trending from the average accident date of each
policy period in the experience to the average accident date of the policy being rated which is effective 7/1/18.
Using this information we can estimate the trended ultimate loss and ALAE.

              Incurred Loss and ALAE         Development Factors      Trend     Loss & ALAE     Trended Ultimate
Policy Year   BI              PD             BI           PD          Period    Trend Factor    Loss & ALAE
              (1)             (2)            (3)          (4)         (5)       (6)             (7)
7/1/12-13 $4,606,763 $1,565,405 1.12 1.01 6 1.2653 $8,529,052
7/1/13-14 $3,515,883 $1,479,748 1.28 1.08 5 1.2167 $7,419,707
7/1/14-15 $3,391,278 $1,294,040 1.49 1.17 4 1.1699 $7,682,497
7/1/15-16 $3,388,863 $1,557,960 1.82 1.32 3 1.1249 $9,251,149
7/1/16-17 $2,982,530 $1,421,673 2.05 1.48 2 1.0816 $8,888,873
Total $17,885,317 $7,318,826 $41,771,278
(6) = (1.0+ Loss Trend %)^(5)
(7) = [ (1) x (3) + (2) x (4) ] x (6)

We are also given the following composite exposure (sales receipts), which we will trend to the prospective
period using a selected exposure trend of 3.0%.

              Total Receipts     Trend      Exposure         Trended
Policy Year   ($000's)           Period     Trend Factor     Exposure
              (1)                (2)        (3)              (4)
7/1/12-13 $205,656 6 1.1941 $245,564
7/1/13-14 $213,177 5 1.1593 $247,131
7/1/14-15 $222,746 4 1.1255 $250,703
7/1/15-16 $230,212 3 1.0927 $251,559
7/1/16-17 $239,228 2 1.0609 $253,797
Total $1,111,019 $1,248,753
(1) = Given
(3) = (1.0+ Exposure Trend %)^(2)
(4) = (1) x (3)

Given an expected loss and ALAE ratio of 73.50% we have all of the information needed to calculate the
composite rate. We first divide the trended ultimate loss & ALAE by the expected loss & ALAE ratio to
estimate the adjusted premium for the total risk. We can then divide by the trended composite exposure to
estimate the composite rate (per thousand dollars of sales) for the prospective period.

(1) Trended Ultimate Loss & ALAE $41,771,278 = (7) Total First Table
(2) Expected Loss & ALAE Ratio 73.50% = Given
(3) Adjusted Premium $56,831,671 = (1) / (2)
(4) Trended Composite Exposure $1,248,753 = (4) Total Second Table
(5) Composite Rate $45.51 = (3) / (4)

We use this composite rate to find the deposit premium (premium paid up-front for a policy which we don’t
know the true exposure for until after the policy is complete). If for example, the risk being rated was expected
to have $248,228,000 in sales the deposit premium would be $11,296,856 ($248,228 ∗ $45.51). If the audited
sales at the end of the period end up being $253,192,000 instead, the final premium would be $11,522,768
($253,192 ∗ $45.51). The insurance company would bill the insured for the additional $225,912 in premium at
that time.
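
The composite-rate mechanics above fit in a few lines of Python (my own sketch, using the example's totals rather than the full by-coverage detail):

```python
# Sketch of the loss-rated composite rating example above.
trended_ultimate_loss_alae = 41_771_278
expected_loss_alae_ratio   = 0.735
trended_composite_exposure = 1_248_753                       # trended sales, in $000s

adjusted_premium = trended_ultimate_loss_alae / expected_loss_alae_ratio    # ~$56.83M
composite_rate   = round(adjusted_premium / trended_composite_exposure, 2)  # $45.51 per $000 of sales

estimated_sales, audited_sales = 248_228, 253_192            # in $000s
deposit_premium = estimated_sales * composite_rate           # ~$11,296,856
final_premium   = audited_sales * composite_rate             # ~$11,522,768
additional_bill = final_premium - deposit_premium            # ~$225,912 billed after audit
```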


Large deductible policies

While the main purpose of small deductibles was to reduce premium by avoiding expenses associated with
small nuisance claims, the main purpose of commercial large (dollar) deductible (LDD) policies is for the
insured to retain a large portion of risk in exchange for lower premiums (if loss experience is good the insured
will save money, but if experience is bad they will end up paying more).

Pricing LDD policies is similar to pricing regular deductible policies with the following differences that must be
accounted for:

1) Claims handling: Most often the insurance company will handle all claims, though sometimes the
insured will be responsible for claims handling below the deductible. If the insurer handles claims they
must charge for the claims handling expenses in the deductible layer. If the insured company is
responsible for the claims handling in the deductible layer the insurance company doesn’t charge for
claims handled expenses in the deductible layer but may charge an additional fee to cover any increase
in expected losses due to claims being handled by a less experienced/knowledgeable entity.
2) Application of the deductible: the deductible may apply to loss and ALAE or loss alone—the loss
elimination ratio must be calculated on the basis applicable to the policy.
3) Deductible processing: most often the insurance company will pay out all claims then seek
reimbursement for the claims in the deductible layer from the insureds. The insurance company will
charge for the expenses incurred in billing and monitoring deductible reimbursements, as well as the
credit risk they are taking on (i.e., the risk that the insured becomes unable to pay the deductible).
4) Risk margin: the losses the insurance company is covering above the deductible are more volatile and
difficult to predict than losses below the deductible—the insurer may charge an additional risk margin
to account for this additional risk undertaken.

The formula to find the premium for LDD policies is very similar to the standard pure premium formula, but
losses, ALAE, and fixed expenses are adjusted to the level expected to be paid under the LDD policy, and there
are additional LDD specific costs:

Premium = [Losses Above Deductible + ALAE + Fixed Expenses + Deductible Processing + Credit Risk + Risk Margin]
          / [1.0 − Variable Expense % − Profit %]

We can find the losses above the deductible as the product of total losses and the excess loss ratio (1.0 minus
the loss elimination ratio at the given limit). The equation above assumes ALAE is not subject to the
deductible—if it was we would also multiply it by the excess ratio.

Assume we want to price a LDD policy given the following information:

- The deductible is $250,000 per occurrence applying to losses only.


- ALAE is estimated to be 15% of total losses (% of total losses since the deductible does not reduce
claims handling expenses below the deductible).
- Ground-up total losses are expected to be $1,500,000.
- Fixed expenses are assumed to be $45,000 while variable expenses are assumed to be 9% of premium.
- The insurer makes all loss payments and seeks reimbursement from the insured. Deductible
processing fees are estimated to be 4.5% of deductible losses. A 1.5% risk margin is also charged on
deductible losses.
- The standard profit margin is 4%, but an additional risk margin of 8% of excess losses is charged as
well.
- The percentage of losses below the deductible (LER) and above the deductible (excess ratio) are given
in the following table.


Excess Ratio
Loss Limit LER [1.0-LER]
$100,000 62.0% 38.0%
$250,000 81.5% 18.5%
$500,000 95.0% 5.0%
$750,000 97.5% 2.5%

We can calculate the losses covered by the policy (Row 4) as the expected ground up losses times the applicable
excess ratio. ALAE is 15% of ground up losses (Row 5) while (standard) fixed expenses are given as $45,000.
Deductible processing and credit risk charges are a function of losses below the deductible (ground-up losses
times the LER) and calculated in Rows (7) and (8). Risk margin is a function of losses excess the deductible,
calculated in Row (9). The total loss and expense, shown in Row (10), is the sum of Rows (4) – (9). The variable
expense and profit margin are given, and total 13%. The indicated premium is then the total loss and expense
divided by one minus the variable expense and profit.
(1) Expected Total Ground-up Losses $1,500,000 = Given
(2) Loss Elimination Ratio 81.5% = Looked up
(3) Excess Ratio 18.5% = Looked up
(4) Estimated Losses Above the Deductible $277,500 = (1) * (3)
(5) ALAE $225,000 = (1) * 15%
(6) Fixed Expenses (Standard) $45,000 = Given
(7) Deductible Processing $55,013 = (1) * (2) * 4.5%
(8) Credit Risk $18,338 = (1) * (2) * 1.5%
(9) Risk Margin $22,200 = (4) * 8%
(10) Total Loss and Expense $643,050 = Sum [(4) - (9)]
(11) Variable Expenses and Profit 13% = 9% + 4%
(12) Premium $739,138 = (10) / [1.0 - (11)]
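
The same calculation as a Python sketch (my own illustration; all inputs are from the example above):

```python
# Sketch of the large dollar deductible (LDD) pricing example above.
ground_up_losses = 1_500_000
ler, excess_ratio = 0.815, 0.185             # at the $250,000 deductible
alae             = 0.15 * ground_up_losses   # deductible applies to loss only
fixed_expenses   = 45_000
ded_processing   = ground_up_losses * ler * 0.045
credit_risk      = ground_up_losses * ler * 0.015
risk_margin      = ground_up_losses * excess_ratio * 0.08
variable_and_profit = 0.09 + 0.04

total_loss_and_expense = (ground_up_losses * excess_ratio + alae + fixed_expenses
                          + ded_processing + credit_risk + risk_margin)
premium = total_loss_and_expense / (1.0 - variable_and_profit)
print(round(total_loss_and_expense), round(premium))   # 643,050 and 739,138
```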

Retrospective rating plans

Retro rating is an optional rating plan that bases premiums on actual losses incurred during the policy period;
subject to minimum and maximum premiums. The minimum and maximum premiums will be selected by the
insured and will depend on the amount of risk they wish to retain. Retrospective policies can be thought of as
a type of self-insurance plan, since the actual premium for the policy will reflect the amount of actual loss during
the period (limited to the minimum and maximum premium).

Rather than the insured paying for losses directly, a retro policy can be thought of as the insured being charged
premium for losses between the minimum and maximum, while essential being insured for losses above the
maximum (i.e., they aren’t charged additional premium) while agreeing to pay some minimum premium (i.e.,
the premium paid even if they experience no losses during the period). The net insurance charge is then the
charge for the maximum premium minus the savings from the minimum premium.

Under a retro plan the insured will pay premium up front followed by premium adjustments made at defined
times to reflect the actual loss experience under the policy. The number of adjustments made can be pre-
defined or made until the insurer and insured agree to finalize the policy. The upfront premium may be an
estimate of the full retro premium, an estimate of the full retro premium excluding loss adjustment expenses,
or an estimate of the full retro premium excluding loss and loss adjustment expenses. Ultimately the same full
retro premium will be charged, the differences between these three scenarios is simply cash flow timing.

We will now look at a sample U.S. workers compensation retrospective rating plan. Note that there are several
simplifications from what you will see in practice because they are beyond the scope of the exam (e.g., assuming
the occurrence limitation is included in the insurance charge, in addition to the aggregate charge). The formula
for the retro premium is:

R = (b + cL) ∗ T,  where  Minimum Premium ≤ R ≤ Maximum Premium


and:
R = The retrospective premium
b = The basic premium
c = The loss conversion factor; covers expenses that are proportional to loss (e.g., LAE)
L = Reported losses limited to the selected accident limit (if applicable)
T = The tax multiplier

The converted losses (cL in the retro formula) are the reported losses limited to the selected accident limit (if
applicable) multiplied by the loss conversion factor. The loss conversion factor adds any expenses that are
variable with loss (e.g., any ALAE not already included with loss, plus ULAE). The loss conversion factor is
negotiated between the insured and insurer.

Further, the basic premium is found as:

b = P ∗ [e − expenses charged through the loss conversion factor + net insurance charge]

where:
e = Expense allowance (expense percentage of a standard policy)
E = Expected loss ratio
P = Standard premium

b = P ∗ [e − (c − 1) ∗ E + (Insurance Charge − Insurance Savings) ∗ E ∗ c]

The basic premium covers fixed expenses (those not covered through the loss conversion factor), profit, and
the net insurance charge (and its associated variable expenses). Since the retro premium formula charges for
the expenses that vary with loss (cL), the expected value of these expenses are removed from the basic premium
by subtracting (c-1)E (i.e., we subtract one since c is in factor form and we don’t want to subtract the expected
loss amount since it is not in b to begin with). We can also include the net insurance charge in the basic premium
since it is determined by expected value and thus known at inception.

In the retro premium formula, b charges for expenses not covered by the LCF, plus the net insurance charge
(cost of maximum minus minimum premium) while cL charges for actual losses plus the expenses proportional
to loss— all subject to the minimum and maximum (which is charged for in b).

Given the following information we can calculate the retro premium:

(1) Minimum retrospective premium ratio (negotiated) 50.00% Min


(2) Maximum retrospective premium ratio (negotiated) 150.00% Max
(3) Loss Conversion Factor (negotiated) 1.15 c
(4) Per Accident Loss Limitation (negotiated) $125,000
(5) Expense Allowance (excludes tax multiplier) 22% e
(6) Expected Loss Ratio 68% E
(7) Tax Multiplier 1.06 T
(8) Standard Premium $859,145 P
(9) Insurance Charge for Maximum Premium 0.39 Ins. Charge
(10) Insurance Savings for Minimum Premium 0.08 Ins. Savings
(11) Reported Losses (subject to accident limit) $215,500 L

We can find the basic premium using the formula given earlier. We then find the converted losses as the
reported losses times the loss conversion factor. The preliminary retro premium is then the basic premium
plus converted losses times the tax multiplier. We call this the preliminary retro premium because we must
check that it falls between the agreed upon minimum and maximum premium calculated in Rows (16) and
(17), which ours does.


(12) Net Insurance Charge 0.24242 = [(9) - (10)] x (6) x (3)


(13) Basic Premium $ 309,653 = [(5)-(6) x [(3)-1.0 ]+(12)] x (8)
(14) Converted Losses $247,825 = (11) x (3)
(15) Preliminary Retrospective Premium $590,927 = [(13) + (14)] x (7)
(16) Minimum Retrospective Premium $ 429,573 = (1) x (8)
(17) Maximum Retrospective Premium $ 1,288,718 = (2) x (8)
(18) Retrospective Premium $590,927 = Min [Max [(15), (16)], (17)]

On the exam be sure to check the minimum and maximum (and be sure to document that you have, either
through formula or saying that you have)—it can be tempting to stop once you have calculated the preliminary
retro premium, but exam problems may be subject to the minimum or maximum and you can lose points if you
don’t check, even if they don’t apply.


Basic Ratemaking

Chapter 16 – Claims-Made Ratemaking


Introduction

Long tailed liability lines of business are extremely susceptible to unexpected inflation (loss trend). For some
lines of business it isn’t uncommon for report lags (time between the occurrence and reporting of the claim) to
be many years, followed by settlement lags (time between the reporting of the claim and closure of the claim)
to be many years. If inflation is higher than expected it may take many years before the insurance company
realizes their long-tailed products were significantly underpriced. This occurred in the U.S. market in the 1960s
and 70s, highlighting the significant pricing risk present in these lines of business, and leading to the advent of
claims-made insurance policies.

Claims-made policies cover claims that are reported during the coverage period, unlike occurrence
(traditional) policies that cover claims occurring during the coverage period. This reduces pricing risk since at
the end of the policy period we are aware of all claims that will be covered. In an occurrence policy there is
report lag (corresponding to pure IBNR) and settlement lag (corresponding to IBNER/case development).
Under a claims-made policy our cohort of covered claims is locked at the end of the period so there is no pure
IBNR (unreported claims), only IBNER (development on known case reserves). When pricing an occurrence
policy we must estimate the ultimate value of losses that may be reported many years into the future whereas
with a claims-made policy we only need to estimate ultimate losses on claims reported during the period. There
is still settlement lag, which can still be large, but overall the pricing risk is reduced under a claims-made policy
as it eliminates the reporting lag. In summary, claims-made policies eliminate the report lag period, effectively
making the policy shorter tailed, thus significantly reducing the pricing risk as compared to an equivalent
occurrence policy.

To illustrate these differences we will use the following table that shows claims by report year and report year
lag— this lag is from occurrence to report (i.e., this is not a development triangle, a RY development triangle
would use lag between report period and payment period). I prefer the name “Lag in Reporting” over “Report
Year Lag” as to avoid confusion, but I have left the latter since that is the name that would be used on the exam.
This example goes out five years—in practice we would likely consider more years. Let’s look at what a couple
example cells represent:

· The first cell, L(2013,0), represents a claim that is reported in 2013 and occurred in 2013 (i.e., a 0 year
lag between occurrence and report).
· The cell L(2014,1) represents a claim that is reported in 2014 and occurred in 2013 (i.e., a 1 year lag
between occurrence and report).
· The cell L(2015,2) represents a claim that is reported in 2015 and occurred in 2013 (i.e., a 2 year lag
between occurrence and report).

                   Report Year Lag
Report Year        0            1            2            3            4
2013 L(2013,0) L(2013,1) L(2013,2) L(2013,3) L(2013,4)
2014 L(2014,0) L(2014,1) L(2014,2) L(2014,3) L(2014,4)
2015 L(2015,0) L(2015,1) L(2015,2) L(2015,3) L(2015,4)
2016 L(2016,0) L(2016,1) L(2016,2) L(2016,3) L(2016,4)
2017 L(2017,0) L(2017,1) L(2017,2) L(2017,3) L(2017,4)
2018 L(2018,0) L(2018,1) L(2018,2) L(2018,3) L(2018,4)

Claims reported in the same year fall within the same row, while claims occurring in the same year all fit within
the same diagonal (from top left to bottom right). For example, report year 2015 consists of claims reported
in 2015 occurring in 2015, 2014, 2013, 2012, and 2011—marked as RY 2015 in the following table. Accident year
2014 will include claims reported in 2014 and occurring in 2014 L(2014,0), reported in 2015 occurring in 2014
L(2015,1), reported in 2016 occurring in 2014 L(2016,2), reported in 2017 occurring in 2014 L(2017,3), and
reported in 2018 occurring in 2014 L(2018,4)—marked as AY 2014 in the following table.

                   Report Year Lag
Report Year        0            1            2            3            4
2013 L(2013,0) L(2013,1) L(2013,2) L(2013,3) L(2013,4)
2014 L(2014,0) L(2014,1) L(2014,2) L(2014,3) L(2014,4)
2015 L(2015,0) L(2015,1) L(2015,2) L(2015,3) L(2015,4)
2016 L(2016,0) L(2016,1) L(2016,2) L(2016,3) L(2016,4)
2017 L(2017,0) L(2017,1) L(2017,2) L(2017,3) L(2017,4)
2018 L(2018,0) L(2018,1) L(2018,2) L(2018,3) L(2018,4)
AY 2014 = the diagonal L(2014,0), L(2015,1), L(2016,2), L(2017,3), L(2018,4)
RY 2015 = the 2015 row, L(2015,0) through L(2015,4)

Five principles of claims-made policies

There are five principles of claims-made policies which give more insight into how pricing risk is reduced. To
illustrate these principles we will introduce an example with the following characteristics:

· Exposure levels are constant while loss costs increase by 4% each report year.
· The average loss cost for Report Year 2013 is $2,500.
· An equal number of incurred claims are reported each year and all claims are reported within five
years of occurrence.
· Loss costs do not vary by report year lag—any trends affecting settlement lag have been ignored.

Relaxing these simple assumptions does not change the conclusions we will see; it just makes interpreting them
more difficult. Using the given assumptions we can create the following table:

Report             Report Year Lag                                       Claims-Made     Accident     Occurrence
Year               0           1           2           3           4    Loss Costs      Year         Loss Costs
2013 $500.00 $500.00 $500.00 $500.00 $500.00 $2,500.00 2013 $2,708.16
2014 $520.00 $520.00 $520.00 $520.00 $520.00 $2,600.00 2014 $2,816.49
2015 $540.80 $540.80 $540.80 $540.80 $540.80 $2,704.00 2015 $2,929.15
2016 $562.43 $562.43 $562.43 $562.43 $562.43 $2,812.16 2016 $3,046.31
2017 $584.93 $584.93 $584.93 $584.93 $584.93 $2,924.65 2017 $3,168.17
2018 $608.33 $608.33 $608.33 $608.33 $608.33 $3,041.63 2018
2019 $632.66 $632.66 $632.66 $632.66 $632.66 $3,163.30 2019
2020 $657.97 $657.97 $657.97 $657.97 $657.97 $3,289.83 2020
2021 $684.28 $684.28 $684.28 $684.28 $684.28 $3,421.42 2021
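
The table above can be rebuilt directly from the stated assumptions; this Python sketch (my own illustration) generates the report-year (claims-made) and accident-year (occurrence) loss costs under the 4% trend:

```python
# Report-year vs. accident-year loss costs under the chapter's assumptions:
# RY 2013 loss cost of $2,500, 4% annual trend, claims reported evenly over
# lags 0-4, and loss costs that do not vary by lag.
trend, base_ry, n_lags = 0.04, 2013, 5
def cell(ry, lag):
    return 500.00 * (1 + trend) ** (ry - base_ry)   # L(ry, lag)

ry_cost = {ry: sum(cell(ry, lag) for lag in range(n_lags)) for ry in range(2013, 2022)}
ay_cost = {ay: sum(cell(ay + lag, lag) for lag in range(n_lags)) for ay in range(2013, 2018)}

print(round(ry_cost[2014], 2), round(ay_cost[2014], 2))   # 2600.0 (claims-made) vs 2816.49 (occurrence)
```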

Principle 1

“A claims-made policy should always cost less than an occurrence policy as long as claim costs are increasing.”

An occurrence policy will be longer tailed than a claims-made policy. Therefore, if claim costs are increasing,
the occurrence policy will cost more due to the higher cost of future claims. This is the case in our example
above—each RY costs less than the corresponding AY due to the increasing claims costs.


Principle 2

“If there is a sudden, unpredictable change in the underlying trends, the claims-made policy priced based on
the prior trend will be closer to the correct price than an occurrence policy based on the prior trend.”

Since an occurrence policy is longer tailed than a claims-made policy it is more susceptible to unexpected
changes in trends. Under a claims-made policy we are estimating the ultimate costs of claims reported during
the period (occurring in the same and in prior periods). Under an occurrence policy we are estimating the
ultimate costs of claims that occur during the period but may not be reported for many years. Each will have
settlement lag—the time after the claim is reported and prior to its closing, during which intermediate
payments may be made (e.g., long term scheduled indemnity payments) or court proceedings may take place.

The following table shows the results using an 8% trend rather than 4%. We can see the AY (occurrence based)
estimates are more affected than the RY (claims-made based) estimates. For example, the RY 2014 loss cost
increased 3.85% ($2,700/$2,600 - 1.0) while the AY 2014 loss cost increased 12.48% ($3,167.96/$2,816.49 –
1.0) compared to our beginning state using 4.0%.

Report             Report Year Lag                                       Claims-Made     Accident     Occurrence
Year               0           1           2           3           4    Loss Costs      Year         Loss Costs
2013 $500.00 $500.00 $500.00 $500.00 $500.00 $2,500.00 2013 $2,933.30
2014 $540.00 $540.00 $540.00 $540.00 $540.00 $2,700.00 2014 $3,167.96
2015 $583.20 $583.20 $583.20 $583.20 $583.20 $2,916.00 2015 $3,421.40
2016 $629.86 $629.86 $629.86 $629.86 $629.86 $3,149.28 2016 $3,695.11
2017 $680.24 $680.24 $680.24 $680.24 $680.24 $3,401.22 2017 $3,990.72
2018 $734.66 $734.66 $734.66 $734.66 $734.66 $3,673.32 2018
2019 $793.44 $793.44 $793.44 $793.44 $793.44 $3,967.19 2019
2020 $856.91 $856.91 $856.91 $856.91 $856.91 $4,284.56 2020
2021 $925.47 $925.47 $925.47 $925.47 $925.47 $4,627.33 2021

Principle 3

“If there is a sudden, unexpected shift in the reporting pattern, the cost of a mature claims-made policy will be
affected relatively little, if at all, relative to the occurrence policy.”

Our example assumed 20% of claims are reported each year. If we instead assumed 12% were reported in the
same year (0 lag), 16% with a 1-year lag, 20% with a 2-year lag, 24% with a 3-year lag, and 28% with a 4-year
lag our table would look like the following:

Report             Report Year Lag                                       Claims-Made     Accident     Occurrence
Year               0           1           2           3           4    Loss Costs      Year         Loss Costs
2013 $300.00 $400.00 $500.00 $600.00 $700.00 $2,500.00 2013 $2,750.62
2014 $312.00 $416.00 $520.00 $624.00 $728.00 $2,600.00 2014 $2,860.64
2015 $324.48 $432.64 $540.80 $648.96 $757.12 $2,704.00 2015 $2,975.07
2016 $337.46 $449.95 $562.43 $674.92 $787.40 $2,812.16 2016 $3,094.07
2017 $350.96 $467.94 $584.93 $701.92 $818.90 $2,924.65 2017 $3,217.84
2018 $365.00 $486.66 $608.33 $729.99 $851.66 $3,041.63 2018
2019 $379.60 $506.13 $632.66 $759.19 $885.72 $3,163.30 2019
2020 $394.78 $526.37 $657.97 $789.56 $921.15 $3,289.83 2020
2021 $410.57 $547.43 $684.28 $821.14 $958.00 $3,421.42 2021

As we can see, the RY estimates are not affected while the AY estimates have increased. This is because claims-
made policies are not affected by report lag while occurrence policies are (i.e., if claims are reported earlier or
later, and costs are changing, this will affect our occurrence estimates).
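
A small extension of the earlier sketch illustrates Principle 3 (my own illustration; the shifted reporting pattern and 4% trend are the assumptions stated above):

```python
# Shift the reporting pattern to 12/16/20/24/28%: the report-year (claims-made)
# cost is unchanged while the accident-year (occurrence) cost rises.
trend, base_ry = 0.04, 2013
weights = [0.12, 0.16, 0.20, 0.24, 0.28]      # share of claims by report-year lag
def cell(ry, lag):
    return 2500.00 * weights[lag] * (1 + trend) ** (ry - base_ry)

ry_2014 = sum(cell(2014, lag) for lag in range(5))        # still 2,600.00
ay_2014 = sum(cell(2014 + lag, lag) for lag in range(5))  # now 2,860.64 (was 2,816.49)
print(round(ry_2014, 2), round(ay_2014, 2))
```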


Principle 4

“Claims-made policies incur no liability for IBNR, so the risk of reserve inadequacy is greatly reduced.”

By definition, claims-made policies do not have pure IBNR, only IBNER (case development), therefore it is
easier to estimate ultimate losses needed for pricing because we are only estimating ultimate losses for claims
reported in the period, not many periods to follow. This also reduces the risk of inadequate reserves.

Principle 5

“The investment income earned from claims-made policies is substantially less than under occurrence
policies.”

As we have seen, occurrence policies are longer tailed since they have both report lag and settlement lag while
claims-made policies only have settlement lag. Since claims-made loss payments are shorter tailed than
occurrence policies, we will have less time to invest premiums before paying out losses, therefore we will earn
less investment income.

Coordinating coverage

Since occurrence and claims-made policies cover different cohorts of losses, switching from one to the other
can cause coverage overlaps or gaps. Take the following example where an insured had purchased occurrence
policies through 2013 but began purchasing claims-made policies in 2014. Claims occurring 2013 and prior,
but reported 2014 and subsequent would be covered by both an occurrence policy and claims-made policy, as
depicted in the following table (the occurrence policies cover the entire 2013 row plus, in each later row, the
cells whose occurrence year is 2013 or prior, while each claims-made policy covers the full row for its report
year). As we can see, there is significant overlap between the policies.

                   Report Year Lag
Report Year        0            1            2            3            4
2013 L(2013,0) L(2013,1) L(2013,2) L(2013,3) L(2013,4)
2014 L(2014,0) L(2014,1) L(2014,2) L(2014,3) L(2014,4)
2015 L(2015,0) L(2015,1) L(2015,2) L(2015,3) L(2015,4)
2016 L(2016,0) L(2016,1) L(2016,2) L(2016,3) L(2016,4)
2017 L(2017,0) L(2017,1) L(2017,2) L(2017,3) L(2017,4)
2018 L(2018,0) L(2018,1) L(2018,2) L(2018,3) L(2018,4)

To rectify this situation, claims-made policies can have a retroactive date. With a retroactive date, a claims-
made policy will cover losses reported during the period but occurring on or after the retro-active date—this
date should coordinate with the end of the last occurrence policy. This feature is used when switching from
an occurrence policy to a claims-made policy. Using a retroactive date of 1/1/2014 the coverage would be
represented by the following table:

                   Report Year Lag
Report Year        0            1            2            3            4
2013 L(2013,0) L(2013,1) L(2013,2) L(2013,3) L(2013,4)
2014 L(2014,0) L(2014,1) L(2014,2) L(2014,3) L(2014,4) = 1st Yr. C.M.
2015 L(2015,0) L(2015,1) L(2015,2) L(2015,3) L(2015,4) = 2nd Yr. C.M.
2016 L(2016,0) L(2016,1) L(2016,2) L(2016,3) L(2016,4) = 3rd Yr. C.M.
2017 L(2017,0) L(2017,1) L(2017,2) L(2017,3) L(2017,4) = 4th Yr. C.M.
2018 L(2018,0) L(2018,1) L(2018,2) L(2018,3) L(2018,4) = Mature C.M.

The 2014 claims-made policy only covers claims reported in the period that also occurred in the period (those
reported in the period occurring prior to the period would be covered by the prior occurrence polices)—this
is called a first-year claims-made policy. The 2015 claims-made policy would similarly cover claims reported
in 2015 but occurring on or after 1/1/2014—this would be a second-year claims made policy. In the same way,
the 2016 claims-made policy is called a third-year claims-made policy, while the 2017 policy would be a fourth-
year claims-made policy. Assuming all claims are reported within five years, as our example does, the 2018
policy would be a mature claims-made policy—meaning it is not subject to a retroactive date.

The non-mature claims made polices are rated using step factors. A step factor is the percentage of the mature
claims-made rate. For example, the step factors in the following table would imply that 39% of the costs of a
mature claims-made policy come from claims that occurred in the year they were reported, while 68% came
from claims that occurred during the year and the prior year, and so on. Using the values below we would price a
third-year claims-made policy at 83% of the mature claims-made rate.

Claims-Made
Year Step Factor
First 39%
Second 68%
Third 83%
Fourth 94%
Fifth or More 100%

We looked at a situation where an insured switched from an occurrence policy to a claims-made policy. We will
now look at an example of switching from a claims-made policy to an occurrence policy. The insured had a
claims-made policy in 2013 (the 2013 row of the following table) and switched to occurrence policies in 2014 and beyond.
This results in a coverage gap for claims incurred in 2013 and prior but reported after 2013 (i.e., incurred but not
reported by the end of the last claims-made policy)—in the following table these are the cells in rows 2014 and
later whose occurrence year is 2013 or prior.

Report Year Lag


0 1 2 3 4
2013 L(2013,0) L(2013,1) L(2013,2) L(2013,3) L(2013,4)
Report Year

2014 L(2014,0) L(2014,1) L(2014,2) L(2014,3) L(2014,4)


2015 L(2015,0) L(2015,1) L(2015,2) L(2015,3) L(2015,4)
2016 L(2016,0) L(2016,1) L(2016,2) L(2016,3) L(2016,4)
2017 L(2017,0) L(2017,1) L(2017,2) L(2017,3) L(2017,4)
2018 L(2018,0) L(2018,1) L(2018,2) L(2018,3) L(2018,4)

To address this issue, insurance companies offer claims-made tail coverage (extended reporting endorsement)
that covers claims that were incurred, but not reported, prior to the end of the last claims-made policy—in this
example, claims-made tail coverage would cover exactly those cells. Claims-made tail coverage is also
purchased in the case of retirement. For example, a retiring surgeon wouldn’t need further occurrence
coverage, but without claims-made tail coverage (s)he would still be exposed to claims that occurred during
their career but were not reported before the end of their last full claims-made policy.

The pricing of claims-made policies is beyond the scope of the text but would be done using similar techniques
to those discussed throughout the text. To estimate ultimate RY claims we would use a RY development
triangle, not the tables used for the examples in this chapter (i.e., we would use lag between report period and
payment period, not report period and occurrence period).
