
1. (16 points) Term Definition (4 points for each question).

1. Game tree

A game tree in artificial intelligence and game theory is like a roadmap for a
game. It shows all the possible moves and what can happen in a game,
especially in two-player games. Here's how it works:

1. **Nodes:** Think of nodes as checkpoints in the game. They show
different points in the game where players have to make decisions or moves.
It all starts at the root node, which is where the game begins.

2. **Edges:** These are like the roads connecting the checkpoints. They
represent the different choices or moves players can make. Each edge shows
what action a player can take.

3. **Terminal Nodes:** These are the final checkpoints, the end of the game.
They tell us who won, who lost, or if it's a tie. They're like the finish line.

4. **Branching Factor:** This is a fancy way of saying how many choices you
have at each checkpoint. If a node has a high branching factor, it means there
are lots of options at that point in the game.

People use game trees to figure out the best moves in games like chess,
checkers, and tic-tac-toe. They use special AI algorithms to explore the tree
and find the smartest strategies. Sometimes, game trees can get really big
and complicated, especially in complex games, and that's where AI comes in
to help us make the best decisions. It's like having a GPS for playing games.

2. Most Constrained Variable

The "most constrained variable" in artificial intelligence and constraint
satisfaction problems (CSPs) is the variable that has the fewest options left to
choose from. When you're solving a CSP, you have a bunch of variables with
specific rules about what values they can have. The "most constrained variable"
is the one with the fewest possible values it can take; this choice rule is also
known as the minimum-remaining-values (MRV) heuristic.

This concept is used to decide which variable to work on next when trying to
solve a problem. By focusing on the variable with the fewest options, you can
often find a solution more efficiently. It's like picking the puzzle piece that only
fits in one spot, which helps you solve the puzzle faster. This strategy can save
time and make it easier to figure out if there's a problem with the puzzle
(incompatibility) early on.
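As a sketch, choosing the most constrained variable is just a `min` over the remaining domain sizes. The variable names and domains below are hypothetical, only there to make the selection concrete:

```python
# A minimal sketch of the most-constrained-variable (MRV) choice in a CSP.
# `domains` maps each variable to its remaining legal values (illustrative data).

def most_constrained_variable(domains, assignment):
    """Pick the unassigned variable with the fewest remaining values."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

domains = {"A": [1, 2, 3], "B": [1], "C": [2, 3]}
print(most_constrained_variable(domains, {}))  # "B": only one value left
```

Picking "B" first is exactly the puzzle-piece-that-fits-one-spot idea: if "B" can't take its single remaining value, we learn about the dead end immediately.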

3. Backtracking search
Backtracking search in artificial intelligence is like solving a puzzle by trying
different pieces until you find the right one. Here's how it works:

1. **Select a Piece:** You start by picking a puzzle piece (or variable) to work
on. It's a part of the problem you're trying to solve, like finding where a piece
fits in a jigsaw puzzle.

2. **Try It Out:** You then try fitting the piece into the puzzle (assigning a
value to the variable). You use clues or strategies to decide which piece to try
first.

3. **Check the Rules:** After placing the piece, you check if it follows the
rules of the puzzle. If it fits and doesn't break any rules, you move on to the
next piece. If it doesn't fit or breaks a rule, you backtrack.

4. **Backtrack:** Backtracking means taking the piece out and trying a
different one. You go back to the previous piece you placed and try a
different piece there.

5. **Repeat:** You keep doing this, trying pieces, checking the rules, and
backtracking when needed until you complete the puzzle (find a solution) or
realize it's impossible.

Backtracking is really handy for solving puzzles and problems with lots of
choices. It helps you explore different options and find the best way to solve
the problem. It's like being a detective, trying out clues until you solve the
mystery.
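The five steps above map directly onto a recursive solver. Here is a minimal sketch for a toy CSP; the variables, domains, and the all-different constraint are invented purely for illustration:

```python
# A minimal backtracking solver, following the steps above:
# select a variable, try a value, check the rules, backtrack on failure.

def backtrack(assignment, variables, domains, consistent):
    if len(assignment) == len(variables):        # every variable assigned: solved
        return assignment
    var = next(v for v in variables if v not in assignment)  # 1. select a piece
    for value in domains[var]:                   # 2. try it out
        assignment[var] = value
        if consistent(assignment):               # 3. check the rules
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
        del assignment[var]                      # 4. backtrack: undo, try another
    return None                                  # no value worked at this level

# Toy problem: three variables that must all take different values.
variables = ["X", "Y", "Z"]
domains = {v: [1, 2, 3] for v in variables}
all_different = lambda a: len(set(a.values())) == len(a)
print(backtrack({}, variables, domains, all_different))
```

Returning `None` from a level is the "realize it's impossible" case for that branch; the caller then resumes trying its own remaining values.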

4. Admissible heuristics

An admissible heuristic in artificial intelligence is like a smart guess that helps
a computer find the best way to solve a problem.

- **Heuristic Function:** Think of this as a hint. It's a way for the computer to
guess how close it is to solving the problem.

- **Admissible:** This is a very careful guess. It never estimates that the
problem is harder than it really is; in other words, it never overestimates the
true cost of reaching the goal.

- **Optimistic Estimate:** Being optimistic here is like saying, "I think this
problem is not as tough as it might seem." The computer is hopeful but
realistic.

Admissible heuristics are handy in guiding search algorithms, especially in
finding the best routes or solutions. Because they never overestimate, they
guarantee that algorithms like A* will find an optimal solution. It's like having a
good map that always underestimates how far you have to travel, so you're
pleasantly surprised when you get there sooner.
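A classic concrete example is Manhattan distance on a grid where each move costs 1 (the coordinates below are illustrative). The straight "city-block" distance can never exceed the true path cost, so it never overestimates and is therefore admissible:

```python
# A sketch of an admissible heuristic: Manhattan distance on a grid.
# With unit-cost moves, this never overestimates the real path cost.

def manhattan(cell, goal):
    (x1, y1), (x2, y2) = cell, goal
    return abs(x1 - x2) + abs(y1 - y2)

# From (0, 0) to (2, 3) at least 5 moves are needed; the heuristic says 5.
print(manhattan((0, 0), (2, 3)))  # 5
```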

2. (39 points) Short Answer Questions.

2.1 (12 points)

Say we define an evaluation function for a heuristic search problem as

f(n) = (w * g(n)) + ((1 - w) * h(n))
where g(n) is the cost of the best path found from the start state to state n, h(n) is an admissible
heuristic function that estimates the cost of a path from n to a goal state, and 0.0 ≤ w ≤ 1.0. What
search algorithm do you get when:
1. w = 0.0

2. w = 0.5

3. w = 1.0

The given evaluation function for a heuristic search problem is a weighted sum of two
components: `g(n)` and `h(n)`, where `g(n)` is the cost of the best path found from the start
state to state `n`, `h(n)` is an admissible heuristic function, and `w` is a weight factor that
ranges from 0.0 to 1.0. The weight `w` balances the importance between these two
components. Different values of `w` result in different search algorithms:

1. **When `w = 0.0`:** In this case, the evaluation function becomes `f(n) = 0 * g(n) + (1 - 0)
* h(n) = h(n)`. This means that the cost of the best path found from the start state (`g(n)`) is
completely ignored, and the search algorithm relies solely on the heuristic function `h(n)`.
When `w = 0.0`, you get Greedy Best-First Search, which always expands the node that
appears closest to the goal according to the heuristic, without regard to the cost already
incurred.

2. **When `w = 0.5`:** With this setting, the evaluation function becomes `f(n) = 0.5 * g(n) +
(1 - 0.5) * h(n) = 0.5 * (g(n) + h(n))`. Since multiplying every node's evaluation by the same
positive constant does not change which node has the lowest value, this orders nodes exactly
like `f(n) = g(n) + h(n)`. When `w = 0.5`, you therefore get A* search, which balances the
actual path cost to reach `n` (`g(n)`) against the heuristic estimate of the remaining cost to
the goal (`h(n)`), and which is guaranteed to find an optimal solution when `h(n)` is
admissible.

3. **When `w = 1.0`:** In this scenario, the evaluation function becomes `f(n) = 1.0 * g(n) +
(1 - 1.0) * h(n) = g(n)`. Here, the heuristic estimate (`h(n)`) is completely disregarded, and the
algorithm relies solely on the actual path cost from the start state to the current state
(`g(n)`). This is equivalent to Uniform Cost Search, an uninformed search algorithm that
always selects the lowest-cost path.

In summary, the choice of the weight `w` in the evaluation function allows you to adjust the
behavior of the search algorithm, ranging from Greedy Best-First Search (`w = 0.0`) through
A* (`w = 0.5`) to Uniform Cost Search (`w = 1.0`).
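The three cases can be checked numerically. In this sketch the values of `g(n)` and `h(n)` are arbitrary illustrative numbers, chosen only to show how each setting of `w` collapses the formula:

```python
# The weighted evaluation function from the question.
def f(g, h, w):
    return w * g + (1 - w) * h

g, h = 4.0, 6.0  # illustrative costs for some node n
print(f(g, h, 0.0))  # 6.0 -> h(n) alone: Greedy Best-First Search
print(f(g, h, 0.5))  # 5.0 -> 0.5 * (g(n) + h(n)): same ordering as A*
print(f(g, h, 1.0))  # 4.0 -> g(n) alone: Uniform Cost Search
```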

2.2 (12 points)

Consider the following search tree produced after expanding nodes A and B, where each arc is
labeled with the cost of the corresponding operator, and the leaves are labeled with the value of a
heuristic function, h. For uninformed searches, assume children are expanded left to right. In case of
ties, expand in alphabetical order.

Which one node will be expanded next by each of the following search methods?
1. Depth-First search -
2. Greedy Best-First search -
3. Uniform-Cost search
4. A* search
