
The Integration of Quitting

Sivan Jadhav
Passion Project, 2023-24, G9
Abstract

This paper explores how mathematics can guide the decision of when to quit, examining the rationale behind decision-making. In our daily lives, a substantial majority of decisions (approximately 95%) are driven by irrational or emotional impulses. By harnessing the power of mathematics, we aim to raise the rationality quotient of our decision-making. The complexity lies in recognizing that not all options are visible at once, which makes knowing when to quit a crucial skill. The central theme of this research is finding the near-perfect moment to cease exploration, move past uncertainty, and arrive at a well-informed decision.
Acknowledgements

Thank you for providing such a good platform as Acadru. Next time, I promise to perform better.

Sivan Jadhav
17th January 2024
CHAPTER 1

Introduction

“Be prepared to immediately commit to the

first thing you see that’s better than what

you saw in that first 37%”

Brian Christian

Picture yourself in the process of buying a house. You might find yourself pondering whether to make an offer on a house you like now or hold off in case a better option surfaces later. Similar dilemmas arise when seeking a job: deciding whether to accept a current offer or continue searching for more opportunities can be equally perplexing.
In the realm of mathematics, the challenge of determining the optimal moment to
take a specific action, with the goal of maximizing rewards or minimizing
costs, is encapsulated by the concept of "Optimal Stopping" or early stopping.
Simply put, when faced with a multitude of choices and the need to select the
best one, mathematics can guide us in identifying the ideal time to either
proceed or quit. Experts suggest that to enhance your chances of achieving the
best outcomes, it's advisable to evaluate the initial 37% of options.
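For the curious reader, the 37% figure is not arbitrary: it is 1/e ≈ 0.368. A brief sketch of the standard derivation, added here for context (the notation is mine, not from the sources summarized below):

```latex
% Probability of picking the single best of n options when we skip the
% first k and then take the first option better than everything seen:
P(k) = \frac{k}{n}\sum_{i=k+1}^{n}\frac{1}{i-1}
% Letting x = k/n and n \to \infty, the sum becomes an integral:
P(x) \approx x\int_{x}^{1}\frac{dt}{t} = -x\ln x
% Setting dP/dx = -\ln x - 1 = 0 gives x = 1/e \approx 0.37,
% and the success probability is then P(1/e) = 1/e \approx 0.37 as well.
```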
Summary of Up and Atom video:
Consider the scenario of house hunting in Mumbai, a city known as the "City of
Dreams." The decision-making process involves the risk of either acting too
quickly and potentially missing out on a better option or waiting too long and
losing the best opportunity. Striking the right balance is crucial, and
researchers have coined the term "Look-Then-Leap Rule," advocating for
assessing the first 37% of options before making a decision.
The derivation of this percentage stems from understanding the dynamics of
decision-making as the number of choices increases. The concept is scalable;
whether you're evaluating 10 houses or 10,000 houses, following the Look-Then-Leap Rule gives roughly a 37% chance of landing the best option. This
mathematical principle, known as "Optimal Stopping," finds application in
various scenarios, including job searches, gambling, and, surprisingly, matters
of the heart.
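The scalability claim above can be checked empirically. Below is a minimal Monte Carlo sketch, my own illustration rather than anything from the video, of the Look-Then-Leap Rule with 100 options (the function name and parameters are invented for this example):

```python
# Monte Carlo sketch of the Look-Then-Leap Rule (illustrative; names are mine).
import random

def look_then_leap(n, rng, cutoff=0.37):
    """Return True if the rule picks the single best of n candidates."""
    ranks = list(range(n))            # 0 is the best candidate
    rng.shuffle(ranks)                # candidates arrive in random order
    k = int(n * cutoff)               # look phase: observe, never commit
    best_seen = min(ranks[:k])
    for r in ranks[k:]:               # leap phase: take the first improvement
        if r < best_seen:
            return r == 0
    return ranks[-1] == 0             # forced to settle for the last one

rng = random.Random(42)
trials = 20_000
wins = sum(look_then_leap(100, rng) for _ in range(trials))
rate = wins / trials
print(rate)  # hovers around 1/e ≈ 0.37
```

Running this repeatedly with different seeds keeps the success rate near 37%, regardless of whether n is 100 or 10,000.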
Now, let's delve into the realm of emotions, love—no, I'm not upset; my writing might take a U-turn here, akin to encountering an answer of 0/0, when applying it to my own (future) love life 💔. Returning to the topic, to avoid the mathematical ambiguity, let's consider another example (one without the 0/0 answer): Michael Trick, once a young undergraduate student and now a professor at Carnegie Mellon University. Embracing the Look-Then-Leap Rule, the 37% principle, in matters of the heart, he faced the uncertainty of not knowing how many people he'd need to date.
The flexibility of the 37% rule extends beyond counts of options; it applies to time as well. Michael decided to search for a partner from age 18 to 40, acknowledging the unpredictability of his future. Applying 37% to that 22-year window gives 18 + 0.37 × 22 ≈ 26.1 as the age to stop looking and leap, and he found himself coincidentally at that precise age.
Firmly believing in the enchantment of mathematical concepts, he resolved to propose to a girl better than any he had encountered. However, the outcome was heartbreak 💔, a stark contrast to house-hunting, where rejection by a house is not a concern.
In love, rejection is a possibility, a reality that mathematicians have considered.
If faced with a 50/50 chance of rejection (hopefully not), one could follow a
strategic approach. Consider proposing or taking a leap after a quarter (1/4) of
your search. If rejected, persist in proposing until acceptance, boosting your
chances of finding the best partner to 25%. Notably, this approach diverges
from the house dilemma, as it allows room for revisiting and adjusting one's
strategy. Take, for instance, the story of the astronomer who, after his wife's
passing, dated 11 women and eventually married the fifth.
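Under the stated assumptions (a 50% rejection chance, leaping after the first quarter of the search, and proposing to each new best-so-far candidate until someone accepts), a quick simulation sketch of my own reproduces the quoted 25% figure; the function name and parameters are invented for illustration:

```python
# Monte Carlo sketch of the rejection variant (assumptions as in the text;
# names and parameters are my own invention for illustration).
import random

def propose_after_quarter(n, rng, cutoff=0.25, accept_p=0.5):
    """Skip the first quarter, then propose to each new best-so-far
    candidate; each proposal is accepted with probability accept_p.
    Returns True if the accepted candidate is the overall best."""
    ranks = list(range(n))                  # 0 is the best partner
    rng.shuffle(ranks)
    k = int(n * cutoff)
    best_seen = min(ranks[:k])
    for r in ranks[k:]:
        if r < best_seen:                   # a new best-so-far: propose
            best_seen = r
            if rng.random() < accept_p:     # they said yes
                return r == 0
    return False                            # everyone said no

rng = random.Random(7)
trials = 20_000
rate = sum(propose_after_quarter(100, rng) for _ in range(trials)) / trials
print(rate)  # close to the quoted 25%
```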

CHAPTER 2

Probability

“Medicine is a science of uncertainty and an art of


probability”
William Osler

“Optimal Stopping” stands as a practical application of “Probability” within


the realm of mathematics. The process of decision-making involves analyzing
situations, facts, and potential outcomes, where Probability theory asserts that every
decision presents a range of potential outcomes.
Did you know that the roots of probability trace back to games of chance and
gambling? Its inception can be attributed to a dispute in 1654 between two gamblers
over the division of a stake when their interrupted game left them without a clear
winner. This conundrum was posed to mathematicians, including Blaise Pascal and
Pierre de Fermat, initiating a correspondence that laid the groundwork for modern
probability theory, eventually formalized by Andrey Kolmogorov.
From a mathematical perspective, games of chance are akin to experiments yielding
various outcomes. Mathematicians scrutinize these outcomes, leveraging observed
information to predict unknown probabilities in subsequent events. Notably, the
predictive methods, often rooted in combinatorial principles, resonate with the
techniques essential for addressing problems of "Optimal Stopping."
Probability theory has yielded fascinating observations. For instance, the likelihood
of winning the full prize in the Super 5 lottery is approximately nine times greater
than the probability of perishing in a plane crash. Similarly, a car accident is about 24 times more likely than a fatal slip in a bath or shower.
The origins of probability can be traced to the 17th century, when the Chevalier de Méré presented a gambling problem to mathematicians, setting the stage for the
development of modern probability concepts. Early contributions by Christiaan
Huygens and Pierre-Simon Laplace paved the way for systematic treatises on
probability, culminating in Kolmogorov's foundational work in 1933. Kolmogorov's
measure-theoretic approach, combining sample space and measure theory, not only
provided a rigorous axiomatic basis but also integrated probability theory into
mainstream mathematics.
From an international perspective, advancements in statistical models, such as
Generalized Linear Models (GLM), Generalized Estimating Equations (GEE), and
Multilevel models, have offered solutions for addressing dependencies and
correlations in data, catering to diverse research fields. The statistician Liberato Camilleri has applied these statistical models across various domains, including social well-being, education, insurance, and mortality.
Intriguingly, probabilities extend beyond mathematical formulations into everyday scenarios, revealing fascinating odds. For instance, the likelihood of being struck by lightning during one's lifetime is about 22 times greater than that of a shark attack.
As we explore the world of probability, its origins, and diverse applications, we gain
insights not only into mathematical theories but also into the fascinating probabilities
that shape our daily experiences.

CHAPTER 3

Statistics

“Statistics are used much like

a drunk uses a lamppost: for

support, not illumination”

Andrew Lang

“Optimal Stopping” problems permeate various fields, including statistics, economics, computer science, and mathematical finance. Their applications extend


to everyday scenarios such as choosing public toilets, selecting the best secretary
(known as the “Secretary Problem”), securing the best deal for selling a house (the
“House Selling” problem), or even finding the optimal parking space (the “Parking
problem”).
All these problems share a common framework, and we can explore this framework
through the example of "Choosing a toilet," as demonstrated in a video by
Numberphile. The video, featuring Dr. Ria Symonds, explains the mathematical
approach to finding the cleanest toilet at a music festival. The mathematical problem
involves deciding the optimal point to stop examining toilets and choosing the best
one, considering constraints like not being able to return to a previously rejected
toilet.
The video outlines a scenario with three toilets ranked by their hygiene levels (1
being the cleanest, 3 being the dirtiest). The challenge is to determine the optimal
strategy for maximizing the chances of selecting the cleanest toilet. The analysis
shows that, by rejecting the first toilet and choosing the next one that is better than
the previously seen toilets, there is a 50% chance of selecting the cleanest toilet.
However, as the number of toilets increases, the percentage of success decreases. For
example, with four toilets, the success rate drops to approximately 46%. The video
proposes a mathematical solution to enhance the likelihood of finding the cleanest
toilet, suggesting that one should inspect the first 37% of toilets and then choose the
first one that surpasses the quality of the previously seen toilets.
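The 50% and roughly 46% figures quoted above can be verified exhaustively for small n, since every arrival order is equally likely. Here is a sketch of my own (not from the Numberphile video) that enumerates all permutations:

```python
# Exhaustive check of the small cases quoted above: reject the first
# toilet, then take the first one cleaner than everything seen so far.
from itertools import permutations

def success_rate(n, skip=1):
    """Fraction of arrival orders in which the strategy picks rank 1."""
    wins = total = 0
    for order in permutations(range(1, n + 1)):   # 1 = cleanest
        total += 1
        best_seen = min(order[:skip])
        # First later toilet cleaner than everything seen, else the last one.
        choice = next((x for x in order[skip:] if x < best_seen), order[-1])
        wins += (choice == 1)
    return wins / total

print(success_rate(3))            # 0.5  -> the 50% figure for three toilets
print(round(success_rate(4), 2))  # 0.46 -> roughly 46% for four
```

With four toilets the exact value is 11/24 ≈ 0.458, which is the "approximately 46%" the video cites.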
This approach, while still requiring an examination of the initial 37%, improves the
chances of selecting the best option. The video emphasizes that this mathematical
strategy is applicable not only to choosing toilets but also to other scenarios like
hiring the best secretary or finding an ideal life partner. The underlying principle is known as the "Look-Then-Leap Rule."
In essence, the mathematical foundation of statistics plays a crucial role in solving
these real-world problems, providing insights into decision-making strategies that
balance information, uncertainty, and the desire to optimize outcomes.
Knowing when to stop – American Scientist Article Summary:
In the American Scientist article "Knowing When to Stop" by Theodore Hill,
the author delves into the intricate world of optimal stopping, exploring the
mathematics behind decision-making in various scenarios. From Microsoft
determining the launch of Word 2020 to a governor deciding when to
evacuate in the face of a hurricane, every decision involves risk and timing.
Hill traces the historical roots of probability theory, highlighting key
contributions by figures like Girolamo Cardano, Blaise Pascal, and Pierre de
Fermat. The article notes the evolution of optimal-stopping problems from
gambling origins, with significant developments during World War II and
later in finance, exemplified by the Black-Scholes formula.
The author introduces the marriage problem as a classic optimal-stopping
scenario, revealing a strategy to select the best life partner with a probability
exceeding one-third. Further, the article explores optimal strategies for
scenarios involving selecting one of the best k candidates, illustrated with
examples from the Olympics or horse racing.
A surprising claim emerges in the discussion of a two-card problem: the
possibility of winning more than half the time by employing a seemingly
straightforward strategy based on Gaussian distribution. The article also
explores optimal stopping in scenarios with complete information, utilizing
backward induction to determine strategies for maximizing rewards, such as
stopping in a dice-rolling game.
The complexity heightens when faced with partial information scenarios,
where tools from zero-sum, two-person games are employed. The article
describes solutions for the marriage problem under conditions where only a
bound on the number of applicants is known, offering optimal strategies even
in cases where the maximum number of cards is uncertain.
While many stopping problems have been resolved, the article acknowledges
unsolved puzzles, including the optimal strategy for a coin-tossing scenario.
The article concludes by emphasizing the ongoing development of optimal
stopping theory, its rapid pace, and its applications in fields like finance,
cautioning against blind trust in computer models and advocating for
continued exploration in this critical domain.
When to stop being greedy and just park | Optimal stopping and
dynamic programming – Video by OptWhiz:

Imagine it's your friend's birthday, and he invited you over to his house for
his birthday party. You live kind of far away, so you get in your car and start
driving over. As you get near your friend's house, you notice some parking
spots. The spot closest to you is empty, marked by a green circle, so you
could park there. However, you don't know if the other spots are empty or
taken, so they're marked with a question mark.
Now, you face a tough decision – do you grab this empty spot and walk the
rest of the way, or do you keep driving, hoping for an even closer spot? You
decide it's a bit too far, so you take a gamble and drive on. Once you get to
the next spot, you see that it's taken. Unfortunately, this road is one-way only,
so you can't turn around and go back. You have no choice but to keep driving
forward. The next few spots are also full, and you start regretting not taking
that first spot.
But wait a moment – this spot is open! Should you park here? Well, feeling a
bit lazy, you decide to press your luck and hold out for an even closer spot.
Sadly, all the best parking spots are taken. This is an epic party, and a lot of
people have shown up. Many have faced parking dilemmas like this, although
maybe not as extreme.
But can math help us determine whether to park when you see an empty spot
or to keep driving? In this case, you ended up parking far away from your
friend's house, and you were very unlucky. You also didn't really have a
strategy. Is there a mathematically optimal strategy for parking?
Let's make this problem more concrete. Imagine a number line representing
the road, where your friend's house is at 100, and parking starts at 80. Parking
spots are available at integer values along this line. Your goal is to minimize
the distance between wherever you end up parking and your friend's house. If
you park at 86, the distance is 14.
The difficulty arises because you don't know if a parking spot is empty until
you get there. Only when you drive to 87 will you know if the spot is taken.
You also can't drive backward, meaning if you pass an open spot, you can't go
back to it. To solve this problem, you need to know the probability that a
parking spot is taken. This probability gives the chances that the next parking
spot is full. If this probability is low, like 25%, then many spots will be open,
and your strategy should turn down open spots more frequently. If it's high,
like 90%, then most spots will be taken, and your strategy should more
heavily consider parking when you see an open spot.
For this problem, let's assume a high probability of 90%. Can you find the
optimal parking strategy?
If you'd like, you can pause here and try to solve it yourself. If not, keep
watching for the solution.
Time to solve this. Intuitively, you should expect to turn down open parking
spots more often when you're far away compared to when you're close. Let's
see why this is mathematically.
When you're at an open spot, you need to weigh the benefits of parking
against not parking. For example, if you're at spot 96 and decide to park, the
distance will be 4. But what distance will you get if you don't park? This is
trickier to determine.
First, it depends on your strategy after turning down the parking spot.
Reasonable strategies include "park at the next open spot" or "reject all spots
until 99 and then accept the next open spot." There are also less sensible
strategies like "park at the next open number that's divisible by 13." For this
problem, we only care about the distance from an optimal strategy.
This distance is random since it depends on which spots are taken, which
varies each time. Therefore, we calculate the average of these distances over
many trials and compare it to 4. This average is called the "expected value" of
the distance.
To determine the optimal strategy, we use backward induction, starting at spot
100. If spot 100 is open, it's optimal to stop there. If it's taken, you have to
keep driving until you find the next open spot. No matter where it is, you
want to park there, as driving further away is undesirable. So, if your car is at
100 or beyond, the optimal strategy is to park at the very next open spot you
see.
Now, arriving at spot 100 without yet knowing whether it's open, the expected distance under the optimal policy is 9. We calculate this by considering the 10% chance it's empty and you park (distance 0), and the 90% chance it's taken, in which case you need, on average, 10 more spots before finding an open one: 0.1 × 0 + 0.9 × 10 = 9.
Moving to spot 99: if it's open, parking gives distance 1; if we pass it, the expected distance from spot 100 onward is 9. Since 1 is less than 9, we should always park at 99 if it's open. A similar calculation applies to spot 98, where the expected distance from continuing is 0.1 × 1 + 0.9 × 9 = 8.2, so we should park there too. This pattern continues: for spots 94 through 100, we should always park if the spot is open. The expected distance on arriving at 94, before knowing whether it's open, is 6.57.
Things get interesting at spot 93. If it's empty and we park, the distance is 7.
If we continue, the expected distance is 6.57, so we should turn down this
open spot and continue. The same holds for every spot before 93 as well. The optimal policy is therefore to reject all spots before 94 and then stop at the first open spot from 94 onward. The expected distance of this policy is 6.57, the expected distance when we arrive at 94 and accept the first open spot thereafter.
This technique, known as backward induction, is a form of dynamic
programming. Dynamic programming is a powerful technique used in various
fields. The problem falls into the category of optimal stopping, where the
goal is to determine the best time to stop. Backward induction helps find the
optimal strategy at each step, ultimately leading to the best overall decision.
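
The backward-induction sweep described above is short enough to write out directly. This is a sketch of my own under the video's assumptions (spots at integers 80 through 100, house at 100, each spot taken with probability 0.9, take the next open spot once past the house):

```python
# Backward-induction sketch of the parking problem described above.
p_taken = 0.9
house = 100

# At the house or beyond, take the very next open spot. The expected
# number of spots until an open one is geometric: 1 / (1 - p_taken) = 10,
# so the expected distance at spot 100 is 0.1 * 0 + 0.9 * 10 = 9.
V = {house: p_taken * (1 / (1 - p_taken))}

policy = {}
for spot in range(house - 1, 79, -1):       # sweep backward from 99 to 80
    park_dist = house - spot                # distance if we park here
    continue_val = V[spot + 1]              # expected distance if we drive on
    if park_dist <= continue_val:
        policy[spot] = "park"
        V[spot] = (1 - p_taken) * park_dist + p_taken * continue_val
    else:
        policy[spot] = "drive on"
        V[spot] = continue_val              # spot's openness doesn't matter

threshold = min(s for s, a in policy.items() if a == "park")
print(threshold)                            # 94: reject everything earlier
print(round(V[80], 2))                      # 6.57 expected distance
```

The sweep recovers both numbers from the video: park at the first open spot from 94 onward, for an expected distance of about 6.57.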

Keep in mind that these are mathematical calculations and predictions about
potential outcomes, and they may not be 100% accurate or applicable to your
specific situation.
Mathematically, stopping rule problems are typically categorized into two
types: the discrete-time case and the continuous-time case.
In the discrete-time case, specific samples are given, and the "reward"
(consequence of stopping the action at that sample) for each sample is
known. Observing the sequence in which these samples appear, we have the
option to either stop or continue at each step. The goal is to choose a
"stopping rule" that maximizes the reward by stopping at the sample
associated with the maximum benefit.
On the other hand, in the continuous-time case, specific cases or values are
provided for particular time intervals. In this scenario, the objective is to
choose a "stopping time" that maximizes the overall reward. For those
familiar with probability and statistics, the formal difference between these two settings is explained on Wikipedia using the concepts of random variables and probability spaces.
Despite its numerous applications, the "Optimal Stopping" problem
encounters challenges in some simple experiments. One well-known example
is the "Tossing a fair coin" experiment. In this scenario, getting heads earns a
point, while getting tails results in a negative point. The challenge lies in
determining how many times one should toss the coin to achieve the
maximum number of points. Additionally, there is no guarantee that tossing
the coin a specific number of times will consistently yield the same result. For instance, tossing the coin 20 times might yield a score of 16 in one experiment but only 6 in the next. This variability poses the question of how to choose the best time to stop.
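That run-to-run variability is easy to see in a quick simulation of my own (the scoring follows the text: heads +1, tails −1, over 20 tosses):

```python
# Quick simulation of the coin-toss score: heads = +1, tails = -1,
# over 20 tosses, repeated a few times to show how much the total varies.
import random

def score(n_tosses, rng):
    """Net score of n_tosses fair coin flips (+1 heads, -1 tails)."""
    return sum(1 if rng.random() < 0.5 else -1 for _ in range(n_tosses))

rng = random.Random(0)
scores = [score(20, rng) for _ in range(5)]
print(scores)  # five runs, five (generally different) totals
```

Because each toss contributes ±1, an even number of tosses always produces an even total, yet the totals jump around from run to run, which is exactly what makes a fixed stopping rule hard to justify here.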
Reflect on these challenges as you delve into the mini-project section, where
one of the projects involves conducting your own coin toss experiment,
collecting data, and understanding why this experiment may present
difficulties in the context of the "Optimal Stopping" problem.
