
1. What would you advise Jaffer regarding the performance of the new data science algorithm?

Based on the information provided in the case, I would advise Jaffer on a few key points regarding the
performance of the new data science algorithm (Condition B):

 On average, B has outperformed A over the 30 days of testing, with a higher daily effective RPM
($0.131 per day higher on average). This indicates that B is more effective at generating revenue.
 However, 30 days may not be enough time to conclusively determine whether B is the superior
algorithm. I would recommend continuing the test for a longer period (e.g., 60-90 days) to gather
more data.
 Jaffer needs to determine whether the RPM increase from B is statistically significant, not just due
to random chance. A statistical significance test should be conducted.
 If the RPM increase holds and is statistically significant over a longer period, Vungle can calculate
the potential financial impact of switching to B. At Vungle's scale, even a small RPM increase
could mean millions in extra revenue.
 Before switching completely to B, Jaffer would likely want to slowly ramp up the percentage of
traffic going to B and monitor for any issues arising from scale.
 If B continues to perform better, it validates the value of data science algorithms for ad
optimization and supports further investment in Guerin's data science team.
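The significance check recommended above can be sketched as a Welch's two-sample t-test on daily eRPM samples. The daily figures below are hypothetical placeholders for illustration, not the case's actual data:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's two-sample t statistic and degrees of freedom
    (does not assume equal variances between conditions)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb
    t = (mb - ma) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical daily eRPM samples (dollars) for conditions A and B
erpm_a = [1.52, 1.48, 1.55, 1.50, 1.47, 1.53, 1.49, 1.51, 1.46, 1.54]
erpm_b = [1.63, 1.60, 1.68, 1.59, 1.65, 1.62, 1.66, 1.61, 1.64, 1.67]

t, df = welch_t(erpm_a, erpm_b)
print(f"t = {t:.2f}, df = {df:.1f}")  # compare |t| against the t critical value
```

A large |t| relative to the critical value for the computed degrees of freedom would suggest the eRPM lift is unlikely to be random noise; in practice this would be run on the full 30+ days of per-condition data.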

In summary, B shows early promise, but additional testing is advised before fully switching over. If
B's strong performance sustains, it could have a very material financial impact for Vungle.
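The scale argument can be made concrete with a back-of-the-envelope calculation. The $0.131 daily eRPM lift is from the case, but the network-wide daily impression volume below is an assumed figure for illustration only:

```python
# eRPM is revenue per 1,000 impressions, so a lift of $0.131 earns an extra
# $0.131 for every 1,000 impressions served under condition B.
erpm_lift = 0.131                 # from the case: B's average daily eRPM advantage ($)
daily_impressions = 100_000_000   # assumed network-wide daily impressions (illustrative)

daily_gain = erpm_lift * daily_impressions / 1000
annual_gain = daily_gain * 365
print(f"~${daily_gain:,.0f}/day, ~${annual_gain:,.0f}/year")
```

Under these assumptions the lift is worth roughly $13,100 per day, or nearly $4.8 million per year, consistent with the claim that even a small eRPM increase is material at scale.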

2. Which assumptions underlie your analysis?

1. The performance data presented (impressions, clicks, installs, revenue) is accurate and
sufficiently captures the performance of both algorithms during the testing period.
2. The 1/16th split between algorithm B and A is sufficient to represent the comparative
performance of the two algorithms.
3. The daily eRPM metric accurately captures the key driver of revenue performance for Vungle
and is based on valid revenue data from advertisers.
4. Algorithm B was evaluated across a representative sample of mobile traffic - including
different types of users, devices, mobile operating systems, geographic regions etc.
5. The observed performance lift from Algorithm B will continue at a similar level over longer
periods of testing. I assume effects like algorithm novelty wearing off will not impact relative
performance significantly.
6. External market conditions remain reasonably constant over the testing period and will not
significantly swing performance between A and B one way or another.
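Assumption 2 can be sanity-checked with a sample-ratio-mismatch (SRM) test: a chi-square comparison of the observed traffic split against the intended 1/16 allocation to B. The impression counts below are hypothetical, not case data:

```python
def srm_chi_square(obs_b, obs_a, expected_share_b=1 / 16):
    """Chi-square statistic (df=1) for an observed two-way traffic split
    versus the intended allocation share for condition B."""
    total = obs_a + obs_b
    exp_b = total * expected_share_b
    exp_a = total - exp_b
    return (obs_b - exp_b) ** 2 / exp_b + (obs_a - exp_a) ** 2 / exp_a

# Hypothetical impression counts over the test period
stat = srm_chi_square(obs_b=6_251_000, obs_a=93_749_000)
print(f"chi-square = {stat:.2f}")  # > 3.84 (df=1, 5% level) suggests a skewed split
```

A statistic well below the 3.84 critical value indicates the randomization delivered close to the intended 1/16 share, which supports treating the comparison between A and B as fair.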
