
Product Design & Requirements – Question #1

Project: ‘Simulation Tool’


Last updated: Adam W. @ 8:39am (EST), July 27, 2021

Minimum Viable Product (‘MVP’)


Build a dashboard where actuaries can select a given algorithmic underwriting program and run a
sensitivity analysis, comparing the key output metrics generated using our current ‘base-case’ inputs (data
sources, rules, program parameters, etc.) to those generated from a given change to these inputs.
Who’s it for?
1. Actuaries – those responsible for analyzing statistics and using them to calculate risk metrics
2. Distribution – those responsible for providing technical guidance, information, and procedural advice on a
variety of customer service issues. They work directly with field agents to build understanding of our
platform and business, and to promote it
3. Other stakeholders – other members of Haven Life who require specific knowledge concerning our
underwriting program(s)
Core Features
Requirements
• User must be able to select a specific algorithmic underwriting program
• User must be able to run a simulation using the current ‘base-case’ algorithmic model against a predefined
population data set
• User must be able to view the relevant output metrics produced by running the ‘simulation’
• User must be able to enter a change in the program parameters
• User must be able to enter a change in the data source
• User must be able to enter a change in the rules engine supporting this underwriting program
(sequence of rules, content of rules, parameters of rules)
• User must be able to run the simulation with any changes to the original inputs, against the predefined
population data set
• User must be able to see the delta in output metrics produced using ‘base-case’ inputs vs. those
produced using any change to the inputs
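The base-case vs. changed-input comparison above could be sketched roughly as follows. Everything here is an illustrative assumption, not our actual underwriting code: the toy model, the input names (`approval_cutoff`, `premium_load`), the sample population, and the metric names are all hypothetical placeholders.

```python
# Illustrative sketch only: a toy "underwriting program" whose inputs can be
# overridden, run against a fixed population, producing deltas on key metrics.

BASE_CASE = {"approval_cutoff": 0.7, "premium_load": 1.10}

# Predefined population data set: (risk_score, annual_premium) pairs.
POPULATION = [(0.62, 500.0), (0.75, 430.0), (0.81, 610.0), (0.55, 390.0)]

def run_simulation(inputs, population):
    """Run the (toy) underwriting model and return key output metrics."""
    approved = [(r, p) for r, p in population if r >= inputs["approval_cutoff"]]
    net_premiums = sum(p * inputs["premium_load"] for _, p in approved)
    return {
        "approval_rate": len(approved) / len(population),
        "net_premiums": round(net_premiums, 2),
    }

def sensitivity_delta(changes, population=POPULATION):
    """Compare base-case metrics against metrics under the changed inputs."""
    base = run_simulation(BASE_CASE, population)
    scenario = run_simulation({**BASE_CASE, **changes}, population)
    return {k: round(scenario[k] - base[k], 4) for k in base}

# e.g., relax the approval cutoff and inspect the delta in output metrics
delta = sensitivity_delta({"approval_cutoff": 0.6})
```

The same pattern extends to rule-engine and data-source changes: any input override is merged over the base case, and only the deltas are surfaced to the user.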
Context and rationale
• To better help actuaries, distribution, and other stakeholders understand how changes to a given
algorithmic underwriting program will operate at scale
• Low risk of technological failure – Any changes made to a given algorithmic underwriting program
will be in the context of the simulation tool. There will be no downstream dependencies that could
risk the functionality of our existing code base
• Low cost of failure – Should any unexpected complexities arise that prevent us from completing this
project, they will be quickly identified. No risk of customer dissatisfaction
Release Requirements
Performance – low / Scalability – low / Reliability – high / Usability – med / Supportability – low /
Localizability – med
Timeline
Target release window / Project milestones / Release Dependencies – known factors (beyond release
criteria) that may affect release
Product Design & Requirements - Question #2
Product Backlog
Last update: Adam W. @ 10:15pm July 26, 2021

INITIATIVE → Build a simulation tool for internal use

EPIC → Develop simple visual representation of our current underwriting models run against a
predetermined population data-set
EPIC → Allow for sensitivity testing to be done, by letting users change the value of an input (or
inputs) and see the delta in key output metrics associated with the change
EPIC → Better understand how changes to our existing algorithmic underwriting programs will
operate at scale

Priority scale: 1 = High Priority … 5 = Low Priority
Status key: 1 = Done / 2 = In Progress / 3 = Not Started

**Story points are a unit measure of the complexity of a given feature, as opposed to measuring in
estimated time (given that complex problems can be difficult to estimate in hrs, using an alternative
metric alleviates the pressure of estimating these tasks accurately; it allows the team to go around
and vote on complexity, and start from there...)

Item | User Story | Story Points | Priority | Status
1 | As an actuary, I need to be able to select a specific algorithmic underwriting program, so that I know what I'm analyzing | 1 | 1 | 1
2 | As an actuary, I need to be able to run a simulation using our 'base-case' inputs against a predetermined population data-set | 2 | 2 | 1
3 | As an actuary, I need to see the key output metrics associated with the simulation (risk capital, net premiums, profitability metrics, etc.), to see how these changes would operate at scale | 2 | 2 | 1
4 | As a user, I need to be able to change a program parameter, run the simulation, and see the delta in 'Key Output Metrics' vs. our current model | 3 | 3 | 2
5 | As a user, I need to be able to make a rule-engine change, run the simulation, and see the delta in 'Key Output Metrics' from our current model | 3 | 3 | 2
6 | As a user, I need to be able to make a data source change, run the simulation, and see the delta in 'Key Output Metrics' from our current model | 3 | 3 | 2
7 | As a Haven Life business stakeholder, it's important for me to know how changes in our algorithmic underwriting programs could affect bottom-line numbers, so I can communicate this to the management team | 2 | 3 | 3
8 | As a member of the distribution team, I need to understand our algorithmic underwriting models front and back, to proactively manage relationships with field agents and internal partners alike | 2 | 3 | 3
9 | As a new member of the Haven Life Underwriting team, it would be beneficial to learn how our algorithmic underwriting programs work, by examining and learning from the inputs | 2 | 3 | 3
10 | … | | |
Product Design & Requirements – Question #2 (continued)

User Story:
“As an actuary, I need to be able to change a program parameter, run the simulation, and see the
delta in ‘Key Output Metrics’ vs those generated from our current underwriting program inputs”

Definition of Done (DoD)


→ Product owner (or equivalent) accepts User Story
→ All code is reviewed
→ Tests written and passing
   → Functional tests passed
   → Unit tests passed
   → Regression tests passed
→ Integrated into clean build
→ Non-functional requirements met
→ Acceptance Criteria met
→ Thorough documentation of functionality

Acceptance Criteria
→ User can change a program parameter
→ Simulation is run with the new algorithmic underwriting program parameters, against the
predefined dataset
→ User is presented with the ‘Key Output Metrics’ given their updated inputs
→ User is presented with the delta in ‘Key Output Metrics’ vs. the underlying underwriting
algorithm’s ‘base-case’ inputs
→ User can reset the tool
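The criteria above could be expressed as automated acceptance checks. The `SimulationTool` class below is a deliberately minimal, hypothetical stand-in (the class name, method names, parameter, and metric are all illustrative, not the real tool's API) just to show the shape of such checks:

```python
# Hypothetical, minimal stand-in for the simulation tool, used only to show
# how the acceptance criteria could be verified automatically.

class SimulationTool:
    BASE_CASE = {"approval_cutoff": 0.7}  # illustrative base-case input

    def __init__(self):
        self.params = dict(self.BASE_CASE)

    def change_parameter(self, name, value):   # criterion: change a parameter
        self.params[name] = value

    def run(self):                             # criteria: run, metrics, delta
        # Toy 'Key Output Metric' as a simple function of the cutoff.
        metrics = {"approval_rate": round(1.0 - self.params["approval_cutoff"], 4)}
        base = {"approval_rate": round(1.0 - self.BASE_CASE["approval_cutoff"], 4)}
        delta = {k: round(metrics[k] - base[k], 4) for k in metrics}
        return metrics, delta

    def reset(self):                           # criterion: reset the tool
        self.params = dict(self.BASE_CASE)

tool = SimulationTool()
tool.change_parameter("approval_cutoff", 0.6)
metrics, delta = tool.run()
```

Each acceptance criterion then maps to one assertion against this interface, which keeps the Definition of Done's "tests written and passing" item concrete.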
Product Design & Requirements – Question #3

What 3 additional features would you include on the 6-month roadmap? Why?
1. Build an Excel plug-in / download option → This feature would allow you to open Excel and
run the simulation tool within a spreadsheet. You could then run multiple simulations at once (in
various separate sheets)
Why → The primary benefit of this feature is that it will allow actuaries and other internal
stakeholders to run more sophisticated simulations. They will be able to curate output and run
secondary analysis on that data, all within Excel. In addition, users will be able to save
reports locally. This really leverages the original value proposition: to better help various
stakeholders understand how changes to algorithmic underwriting models will operate at scale
2. Have a customizable user interface → Given that there will be different teams using this tool,
and various roles within those teams, it should be a medium-term goal to let users customize
how the ‘dashboard’ or ‘tool’ appears to them locally, and have those settings persist
Why → Various teams and individuals will have different purposes for using this tool. It will
benefit the greatest number of users if we provide the ability to customize personal settings
and/or ‘Team’ settings
3. Include the functionality to not only compare the sensitivity of a given input (or set of
inputs) to an underwriting program’s ‘base-case’ inputs, but to also add the axis of
time → Users of this simulation tool will not only be able to compare the key output metrics
from various sensitivity analyses, but will also be able to see that sensitivity over given time
horizons (i.e., 1 wk. ago, 1 mo. ago, 1 yr. ago, 10 yrs. ago, etc.)
Why → The purpose of adding this functionality would be to increase the subject matter
experts’ understanding of potential changes to our programs at scale. Both to better the product,
and to increase the knowledge base throughout the organization
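At its simplest, the download option from feature 1 could start as a flat-file export that Excel opens directly. The sketch below is only a stand-in under that assumption (a true Excel plug-in would use an add-in framework or an .xlsx writer); the function name and metric values are hypothetical:

```python
import csv
import io

# Minimal stand-in for the "download option": serialize base-case vs. scenario
# metrics, plus their delta, as CSV text that Excel can open directly.

def metrics_to_csv(base_metrics, scenario_metrics):
    """Return a CSV report comparing base-case and scenario output metrics."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["metric", "base_case", "scenario", "delta"])
    for name in base_metrics:
        b, s = base_metrics[name], scenario_metrics[name]
        writer.writerow([name, b, s, round(s - b, 4)])
    return buf.getvalue()

# Example with made-up numbers: export one metric comparison.
report = metrics_to_csv({"net_premiums": 1144.0}, {"net_premiums": 1694.0})
```

Starting with a plain export keeps the MVP small while still letting actuaries save reports locally and run secondary analysis in a spreadsheet.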

What problems or limitations do you think we should consider at launch?


The problems / limitations that come to mind when launching a product like this would be as
follows:
1. Given the number of inputs that go into our algorithmic underwriting programs, and the user’s
ability to alter those inputs, I worry that adequately testing all the various scenarios will prove
difficult. I’d keep a close eye on the testing process well in advance of launch to mitigate that risk
2. I worry that measuring the success of this product could be problematic. The value proposition
behind this tool is to aid our various stakeholders in understanding our underwriting models, and
how changes to them will operate at scale. How can we measure the extent to which somebody
has learned something? There are certainly ways, it’s just not as cut and dried as reaching a
certain target sales #, for example
3. At launch, I believe we should be prepared to accept that the tool will likely only be available to
actuaries at the start. The nature of an MVP within an agile framework is to deliver the most
basic functionality to your core users at the earliest possible time and iterate through feedback
cycles & deployments. A potential bottleneck in this project is building a tool that is digestible
both to the actuaries who will be using it most, in making the decisions required of their roles,
and to the distribution teams and various other stakeholders
SaaS Strategy – Questions

Different insurance products often have different user applications and risk
evaluation. How might you enhance your tool to easily support new products in a
SaaS context?
One of the primary benefits of a SaaS platform is the ability to constantly iterate on and improve
your product / service across multiple tenants, over very short periods of time. These ‘shortened’ life
cycles allow internal teams to build out features that can then be deployed across all clients.
In the context of this product, our distribution teams would need to focus on contextual
onboarding. When a new client is using our software, we need to do our best to understand their specific
needs. In this case, what types of user applications is the client trying to support? What types of risk
evaluation are they using? From there, we can go to the drawing board and determine if we can support
the new product.

How would you roadmap the future of this product over the course of the next 2-3 years?
In other words, what would be the “north star” of this product?
The “North Star” of this project, in my opinion, would be creating value for our clients through an
extremely user-friendly simulation tool focused on algorithmic underwriting programs.
After a successful internal launch and continued buy-in from internal stakeholders, the
primary focus would be to get as many people as possible using the tool, and to gather feedback
from those users to determine which future feature additions provide the best risk / reward
characteristics.

As you pursue this “north star,” what kind of substantive technical challenges would you
foresee needing to be addressed? Please be specific. How would you balance those with
providing short-term feature development for your customers?
As the active user base of the tool grows, we will inevitably run into some bugs along the
way. As for substantive technical challenges that will need to be addressed in the future, what
immediately comes to mind is technical compatibility with other providers.
As for balancing those technical challenges with short-term feature development for our clients,
I’ll always go back to the vision and strategy of the company, asking myself questions like:

1. Does this new short-term feature addition move the needle in the direction we’re trying to go?
2. Can any aspects of these short-term feature additions be leveraged down the line?
3. Are there any aspects of these short-term feature-adds that will make it much more difficult to
continue achieving our vision?
