Sivan Jadhav
Passion Project, 2023-24, G9
Abstract
This paper delves into when to quit, with the help of mathematics, exploring the rational side of decision-making. In our daily lives, a substantial majority of decisions (by some estimates, approximately 95%) are driven by irrational or emotional impulses. By harnessing the power of mathematics, we aim to raise the share of rationality in decision-making. The complexity lies in recognizing that not all options are visible at once, which makes knowing when to quit crucial. The central theme of this research is finding the near-perfect moment to cease exploration, step out of uncertainty, and arrive at a well-informed decision.
Acknowledgements
Thank you for providing such a good platform as Acadru. Next time I promise to perform better.
Sivan Jadhav
17th January 2024
CHAPTER 1
Introduction
Brian Christian
Picture yourself in the process of buying a house. You might find yourself pondering whether to make an offer on a house you like now or hold off in case a better option surfaces later. Similar dilemmas arise when seeking a job; deciding whether to accept a current offer or continue searching for more opportunities can be equally perplexing.
In the realm of mathematics, the challenge of determining the optimal moment to take a specific action, with the goal of maximizing rewards or minimizing costs, is encapsulated by the concept of "Optimal Stopping." Simply put, when faced with a multitude of choices and the need to select the best one, mathematics can guide us in identifying the ideal time to either proceed or quit. The theory suggests that, to maximize your chance of the best outcome, you should evaluate the first 37% of options without committing, then choose the first option that beats everything you have seen.
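The 37% rule is easy to test by simulation: shuffle the candidates, observe the first 37% without committing, then take the first candidate who beats everything seen so far. The sketch below is illustrative (the function names and parameters are my own, not from any source), but it reproduces the claimed success rate:

```python
import random

def secretary_trial(n, rng):
    """One run of the 37% rule over a random ordering of n options."""
    ranks = list(range(n))              # rank 0 is the best option
    rng.shuffle(ranks)
    cutoff = int(n * 0.37)              # "look" phase: observe, never commit
    best_seen = min(ranks[:cutoff], default=n)
    for r in ranks[cutoff:]:            # "leap" phase
        if r < best_seen:               # first option better than all seen
            return r == 0               # success iff it is the overall best
    return ranks[-1] == 0               # forced to settle for the last option

def success_rate(n=100, trials=20000, seed=1):
    rng = random.Random(seed)
    return sum(secretary_trial(n, rng) for _ in range(trials)) / trials
```

Over many trials the success rate hovers around 0.37 (about 1/e), and shifting the cutoff in either direction lowers it, which is exactly the content of the rule.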
Summary of Up and Atom video:
Consider the scenario of house hunting in Mumbai, a city known as the "City of
Dreams." The decision-making process involves the risk of either acting too
quickly and potentially missing out on a better option or waiting too long and
losing the best opportunity. Striking the right balance is crucial, and researchers have coined the term "Look-Then-Leap Rule": look at the first 37% of options without committing, then leap at the first option better than everything seen so far.
The derivation of this percentage stems from understanding how decision-making changes as the number of choices grows. The concept is scalable; whether you're evaluating 10 houses or 10,000 houses, following the Look-Then-Leap Rule gives roughly a 37% chance of landing the best option. This
mathematical principle, known as "Optimal Stopping," finds application in
various scenarios, including job searches, gambling, and, surprisingly, matters
of the heart.
Now, let's delve into the realm of emotions: love. No, I'm not upset; my writing might take a U-turn here, akin to encountering an answer of 0/0, when applying it to myself for my love life (future) 💔. Returning to the topic, to avoid the mathematical ambiguity, let's consider another example (without the 0/0 answer): Michael Trick, once a young undergraduate student and now a professor at Carnegie Mellon University. Embracing the Look-Then-Leap rule, the 37% principle, in matters of the heart, he faced the uncertainty of not knowing how many people he'd need to date.
The flexibility of the 37% rule extends beyond counts of options; it applies to time as well. Michael decided to search for a partner from age 18 to 40, acknowledging the unpredictability of his future. Applied to that window, the rule says to stop looking and start leaping at 18 + 0.37 × (40 - 18) ≈ 26.1 years of age, and he found himself, coincidentally, at precisely that age.
Firmly believing in the enchantment of mathematical concepts, he resolved to propose to a girl better than any he had encountered. However, the outcome was heartbreak 💔, a stark contrast to house-hunting, where rejection by a house is not a concern.
In love, rejection is a possibility 💔, a reality that mathematicians have considered.
If faced with a 50/50 chance of rejection (hopefully not), one could follow a
strategic approach. Consider proposing or taking a leap after a quarter (1/4) of
your search. If rejected, persist in proposing until acceptance, boosting your
chances of finding the best partner to 25%. Notably, this approach diverges
from the house dilemma, as it allows room for revisiting and adjusting one's
strategy. Take, for instance, the story of the astronomer who, after his wife's
passing, dated 11 women and eventually married the fifth.
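The 25% figure can also be checked by simulation. In this hypothetical sketch (the names and parameters are my own), a proposal goes to every candidate who beats all predecessors, each proposal is accepted with probability 1/2, and the leaping begins after a quarter of the search:

```python
import random

def proposal_trial(n, rng, accept_p=0.5, look_frac=0.25):
    """One search with a 50/50 rejection risk, leaping after look_frac."""
    ranks = list(range(n))               # rank 0 is the best partner
    rng.shuffle(ranks)
    cutoff = int(n * look_frac)
    best_seen = min(ranks[:cutoff], default=n)
    for r in ranks[cutoff:]:
        if r < best_seen:                # better than everyone seen so far
            if rng.random() < accept_p:  # proposal accepted
                return r == 0            # success iff overall best
            best_seen = r                # rejected: persist and keep searching
    return False                         # search ends with no acceptance

def success_rate(n=200, trials=20000, seed=2):
    rng = random.Random(seed)
    return sum(proposal_trial(n, rng) for _ in range(trials)) / trials
```

The simulated rate comes out near 25%, in line with the claim above; analytically, leaping after a fraction c succeeds with probability √c(1 − √c), which is maximized exactly at c = 1/4.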
CHAPTER 2
Probability
CHAPTER 3
Statistics
Imagine it's your friend's birthday, and he invited you over to his house for
his birthday party. You live kind of far away, so you get in your car and start
driving over. As you get near your friend's house, you notice some parking
spots. The spot closest to you is empty, marked by a green circle, so you
could park there. However, you don't know if the other spots are empty or
taken, so they're marked with a question mark.
Now, you face a tough decision – do you grab this empty spot and walk the
rest of the way, or do you keep driving, hoping for an even closer spot? You
decide it's a bit too far, so you take a gamble and drive on. Once you get to
the next spot, you see that it's taken. Unfortunately, this road is one-way only,
so you can't turn around and go back. You have no choice but to keep driving
forward. The next few spots are also full, and you start regretting not taking
that first spot.
But wait a moment – this spot is open! Should you park here? Well, feeling a
bit lazy, you decide to press your luck and hold out for an even closer spot.
Sadly, all the best parking spots are taken. This is an epic party, and a lot of
people have shown up. Many have faced parking dilemmas like this, although
maybe not as extreme.
But can math help us determine whether to park when you see an empty spot
or to keep driving? In this case, you ended up parking far away from your
friend's house, and you were very unlucky. You also didn't really have a
strategy. Is there a mathematically optimal strategy for parking?
Let's make this problem more concrete. Imagine a number line representing
the road, where your friend's house is at 100, and parking starts at 80. Parking
spots are available at integer values along this line. Your goal is to minimize
the distance between wherever you end up parking and your friend's house. If
you park at 86, the distance is 14.
The difficulty arises because you don't know if a parking spot is empty until
you get there. Only when you drive to 87 will you know if the spot is taken.
You also can't drive backward, meaning if you pass an open spot, you can't go
back to it. To solve this problem, you need to know the probability that a
parking spot is taken. This probability gives the chances that the next parking
spot is full. If this probability is low, like 25%, then many spots will be open,
and your strategy should turn down open spots more frequently. If it's high,
like 90%, then most spots will be taken, and your strategy should more
heavily consider parking when you see an open spot.
For this problem, let's assume a high probability of 90%. Can you find the
optimal parking strategy?
If you'd like, you can pause here and try to solve it yourself. If not, keep reading for the solution.
Time to solve this. Intuitively, you should expect to turn down open parking spots more often when you're far away than when you're close. Let's see why this is true mathematically.
When you're at an open spot, you need to weigh the benefits of parking
against not parking. For example, if you're at spot 96 and decide to park, the
distance will be 4. But what distance will you get if you don't park? This is
trickier to determine.
First, it depends on your strategy after turning down the parking spot.
Reasonable strategies include "park at the next open spot" or "reject all spots
until 99 and then accept the next open spot." There are also less sensible
strategies like "park at the next open number that's divisible by 13." For this
problem, we only care about the distance from an optimal strategy.
This distance is random since it depends on which spots are taken, which
varies each time. Therefore, we calculate the average of these distances over
many trials and compare it to 4. This average is called the "expected value" of
the distance.
To determine the optimal strategy, we use backward induction, starting at spot
100. If spot 100 is open, it's optimal to stop there. If it's taken, you have to
keep driving until you find the next open spot. No matter where it is, you
want to park there, as driving further away is undesirable. So, if your car is at
100 or beyond, the optimal strategy is to park at the very next open spot you
see.
Now, at spot 100, before you know whether it is open, the expected distance under the optimal policy is 9. There is a 10% chance the spot is empty and you park (distance 0), and a 90% chance it's taken, in which case the first open spot lies, on average, 10 spots further on. Therefore, the expected distance is 0.1 × 0 + 0.9 × 10 = 9.
Moving to spot 99: if it's open, parking gives distance 1; if it's taken, we move to spot 100, whose expected distance is 9. Since 1 is less than 9, we should always park at 99 if it's open, making the expected distance at 99 equal to 0.1 × 1 + 0.9 × 9 = 8.2. A similar calculation at spot 98 (distance 2 versus 8.2 for continuing) says to park, giving an expected distance of 7.58 there. The pattern continues: at every spot from 94 to 100, we should park if the spot is open. The expected distance at 94, before knowing if it's open, is about 6.57.
Things get interesting at spot 93. If it's empty and we park, the distance is 7; if we continue, the expected distance is only 6.57, so we should turn down this open spot and keep driving. The same holds at every spot before 93. The optimal policy, then, is to reject all spots before 94 and stop at the first open spot from 94 onward. Its expected distance is 6.57, the expected distance when starting at 94 and accepting the first open spot.
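The whole backward-induction calculation fits in a few lines of code. This sketch uses my own naming and assumes, as in the text, spots at integers 80 through 100 and a 90% chance that any given spot is taken; it recovers both the threshold of 94 and the expected distance of about 6.57:

```python
P_TAKEN = 0.9    # probability a given spot is occupied
HOUSE = 100      # position of the friend's house

def solve(first_spot=80):
    # Past the house the optimal policy is to take the first open spot;
    # its expected distance is a geometric wait: 1 / (1 - P_TAKEN) = 10.
    v_next = 1.0 / (1.0 - P_TAKEN)   # expected distance starting at spot 101
    threshold = None
    for x in range(HOUSE, first_spot - 1, -1):  # induct backward from 100
        park_dist = HOUSE - x                   # distance if we park at x
        if park_dist <= v_next:                 # parking beats continuing
            threshold = x
            v = (1 - P_TAKEN) * park_dist + P_TAKEN * v_next
        else:                                   # skip x even if it is open
            v = v_next
        v_next = v              # expected distance before observing spot x
    return threshold, v_next
```

Calling solve() gives a threshold of 94 and an expected distance of about 6.57, matching the step-by-step calculation above.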
This technique, known as backward induction, is a form of dynamic
programming. Dynamic programming is a powerful technique used in various
fields. The problem falls into the category of optimal stopping, where the
goal is to determine the best time to stop. Backward induction helps find the
optimal strategy at each step, ultimately leading to the best overall decision.
Keep in mind that these are mathematical calculations and predictions about
potential outcomes, and they may not be 100% accurate or applicable to your
specific situation.
Mathematically, stopping rule problems are typically categorized into two
types: the discrete-time case and the continuous-time case.
In the discrete-time case, specific samples are given, and the "reward"
(consequence of stopping the action at that sample) for each sample is
known. Observing the sequence in which these samples appear, we have the
option to either stop or continue at each step. The goal is to choose a
"stopping rule" that maximizes the reward by stopping at the sample
associated with the maximum benefit.
On the other hand, in the continuous-time case, values arrive over particular time intervals, and the objective is to choose a "stopping time" that maximizes the overall reward. For those familiar with probability and statistics, the difference between the two settings is spelled out on Wikipedia's optimal-stopping page using random variables and probability spaces.
Despite its numerous applications, the "Optimal Stopping" problem runs into challenges in even simple experiments. One well-known example is tossing a fair coin, where heads earns a point and tails loses a point. The challenge lies in determining how many times one should toss the coin to achieve the maximum score, and there is no guarantee that tossing the coin a specific number of times will consistently yield the same result. For instance, 20 tosses might yield a net score of 16 in one experiment but only 6 in the next. This variability poses the question of how to choose the best time to stop.
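That variability is easy to reproduce. In this small sketch (the scoring convention and names are my own), each experiment is 20 fair tosses scoring +1 for heads and -1 for tails; repeated runs give visibly different net scores, which is exactly why no fixed stopping count wins every time:

```python
import random

def coin_score(n_tosses, rng):
    """Net score of one experiment: +1 per head, -1 per tail."""
    return sum(1 if rng.random() < 0.5 else -1 for _ in range(n_tosses))

rng = random.Random(0)
scores = [coin_score(20, rng) for _ in range(10)]  # ten 20-toss experiments
```

Note that with this ±1 scoring, the net score of 20 tosses is always even (it equals 2 × heads − 20), another small check that intuition about such experiments can mislead.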
Reflect on these challenges as you delve into the mini-project section, where
one of the projects involves conducting your own coin toss experiment,
collecting data, and understanding why this experiment may present
difficulties in the context of the "Optimal Stopping" problem.