
Module 1

What is AI?
1. AI, or Artificial Intelligence, simulates human intelligence in machines.
2. It includes Narrow AI for specific tasks and General AI with broader abilities.
3. AI is used in various fields like healthcare, finance, and transportation.
4. Ethical concerns include privacy, bias, and job displacement.
5. Its future impact could be significant, requiring careful management.

Approaches
Acting Humanly: The Turing Test Approach:
- This approach focuses on whether a machine can exhibit behavior indistinguishable from that of a human.
- It evaluates the machine's ability to engage in natural language conversation and perform tasks in a
manner that is perceived as human-like.
Thinking Humanly: The Cognitive Modelling Approach:
- This approach seeks to understand human thought processes through introspection, psychological
experiments, and brain imaging.
- It involves developing computer programs that mimic human cognition, aiming to match their behavior
with that of humans.
Thinking Rationally: The "Laws of Thought" Approach:
- This approach is based on formal logic and aims to codify correct reasoning processes.
- It involves representing knowledge and relationships among objects using logical notation, inspired by
philosophers like Aristotle.
Acting Rationally: The Rational Agent Approach:
- This approach defines rationality as achieving the best outcome or expected outcome, considering
uncertainty.
- It involves designing autonomous agents that perceive their environment, adapt, and pursue goals to
achieve rational behavior.

AI used in
Philosophy:
1. AI follows Aristotle's logic and reasoning principles.
2. It integrates empirical learning like Bacon and Locke's ideas.
3. AI learns from experience, akin to logical positivism.
Mathematics:
1. AI uses logic and algorithms inspired by Boole and Euclid.
2. It applies algorithms for problem-solving and optimization.
3. AI employs probability theory for decision-making under uncertainty.
Economics:
1. AI aids in algorithmic trading, echoing Adam Smith's idea of agents making choices that lead to better outcomes.
2. Decision theory guides AI systems in making optimal choices.
3. AI analytics tools help economists understand human behavior in economic systems.
Neuroscience:
1. AI models artificial neural networks based on neuroscience.
2. It leverages brain research for advancements in machine learning.
3. AI contributes to neuroscience research with brain-computer interfaces.
Psychology:
1. AI mimics human behavior, drawing insights from psychology.
2. It uses psychological principles to design intuitive interfaces.
3. AI supports mental health diagnosis and treatment with assessments and therapies.
Computer Engineering:
1. AI drives advancements in computing hardware.
2. It optimizes hardware for AI applications like specialized chips.
3. Control theory enables self-regulating behavior in robotics and autonomous systems.
Linguistics:
1. AI's natural language processing is influenced by linguistic theories.
2. Computational linguistics develops AI algorithms for tasks like machine translation.
3. Linguistic research enhances AI's understanding of language structure and semantics.

History of AI
1. Ancient Origins: The idea of artificial beings dates back to ancient civilizations.
2. Modern Beginnings: Alan Turing's work in the mid-20th century laid the foundation for modern AI.
3. Dartmouth Conference (1956): The term "artificial intelligence" was coined at this conference, marking
the formal start of AI research.
4. Early Programs: Initial AI programs focused on tasks like game playing and logical reasoning.

5. Expert Systems: The development of expert systems in the 1960s and 1970s simulated human expertise
using rules and knowledge bases.
6. AI Winter: Skepticism and funding cuts in the 1970s and 1980s slowed AI progress.

7. Neural Networks: Interest in neural networks, inspired by the brain's structure, resurged in the 1980s.
8. Machine Learning: The 1990s saw the rise of machine learning, allowing computers to learn from data
and improve performance.
9. Deep Learning Revolution: Breakthroughs in deep learning in the early 21st century led to significant
advancements in AI applications.
10. Current Landscape: AI is now integrated into various aspects of daily life, with ongoing research
focused on addressing challenges and realizing future potential.

Problem-solving
Imagine you have a smart robot tasked with finding its way from point A to point B on a map.
Here's how it would work:
1. Setting Goals: First, the robot needs to decide what it's trying to achieve. In this case, it wants to get to
point B efficiently.
2. Identifying the Problem: The robot looks at the map and figures out which roads to consider and where
it can go from its current location.
3. Searching for Solutions: It uses algorithms to find the best route to reach point B. It explores different
paths and evaluates which one is the most promising.
4. Execution: Once it finds the best route, the robot starts moving. It follows the planned path step by step,
like following directions on a map.
5. Process Overview: This whole process is like a loop - the robot sets a goal, figures out how to achieve it,
and then acts accordingly.
6. Open-Loop System: During the journey, the robot doesn't stop to re-check its surroundings. It simply
executes the planned route without further sensing, like following printed directions without glancing at the road.
7. Key Components: To make decisions, the robot considers its starting point, the available roads, what
each road leads to, the destination, and how far each path is.
8. State Space: This is like a big network of roads on the map. Each intersection is a state, and the roads
between them are actions the robot can take.
9. Goal Achievement: The robot knows it reached the goal when it arrives at point B, or if it meets specific
criteria, like finding a specific landmark.
10. Optimal Solutions: The best route is the one that gets the robot to point B in the shortest distance or
time, depending on what matters most.
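The formulate-search-execute loop above can be sketched in code. The road map, intersection names, and distances below are invented purely for illustration; the search itself is a uniform-cost search, one standard way to find the shortest route:

```python
# A minimal sketch of the formulate-search-execute loop described above.
# The map, intersection names, and distances are invented for illustration.

import heapq

# State space as a road map: each intersection (state) maps to
# {neighbor: road_length} pairs (actions and their step costs).
ROADS = {
    "A": {"C": 2, "D": 5},
    "C": {"A": 2, "D": 2, "B": 6},
    "D": {"A": 5, "C": 2, "B": 1},
    "B": {},
}

def best_route(start, goal):
    """Uniform-cost search: returns (path, total distance) or None."""
    frontier = [(0, start, [start])]   # (cost so far, state, path)
    explored = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:              # goal test
            return path, cost
        if state in explored:
            continue
        explored.add(state)
        for nxt, step in ROADS[state].items():   # expand current state
            if nxt not in explored:
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

print(best_route("A", "B"))   # (['A', 'C', 'D', 'B'], 5)
```

Note how the direct-looking road A→D (length 5) loses to the detour through C, because the search compares total path costs, not individual roads.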

Toy problem
Vacuum World Problem:
1. States defined by agent and dirt locations, yielding 8 possible world states.
2. Initial state can be any configuration.
3. Actions include Left, Right, and Suck.
4. Transition model describes expected effects of actions.
5. Goal test checks if all squares are clean.
6. Each step costs 1, making path cost the number of steps.
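The vacuum world formulation above is small enough to code directly. This is a sketch, with a state represented as (agent location, dirt tuple); locations 0 and 1 stand for the left and right squares:

```python
# A sketch of the vacuum world formulation above.
# A state is (agent_location, (dirt_left, dirt_right)); 2 locations x 4 dirt
# configurations = 8 world states, as the notes say.

def actions(state):
    """All three actions are applicable in every state."""
    return ["Left", "Right", "Suck"]

def result(state, action):
    """Transition model: the expected effect of each action."""
    loc, dirt = state
    dirt = list(dirt)
    if action == "Left":
        loc = 0
    elif action == "Right":
        loc = 1
    elif action == "Suck":
        dirt[loc] = False
    return (loc, tuple(dirt))

def goal_test(state):
    return not any(state[1])   # goal: all squares clean

s = (0, (True, True))          # agent on the left, both squares dirty
for a in ["Suck", "Right", "Suck"]:   # each step costs 1: path cost = 3
    s = result(s, a)
print(goal_test(s))            # True
```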
8-Puzzle:
1. States defined by tile and blank space positions on a 3x3 board.
2. Any state can be the initial state.
3. Actions are movements of the blank space: Left, Right, Up, Down.
4. Transition model returns resulting state after an action.
5. Goal test checks if the state matches a specified configuration.
6. Each step costs 1, determining the path cost.
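The 8-puzzle formulation can be sketched the same way. Here a state is a 9-tuple read row by row with 0 for the blank, and the goal configuration chosen below is just one common convention:

```python
# A sketch of the 8-puzzle formulation: states are 9-tuples read row by row,
# with 0 for the blank. Actions name the direction the blank moves.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # one common goal configuration

def actions(state):
    """Legal blank moves on the 3x3 board."""
    row, col = divmod(state.index(0), 3)
    moves = []
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    return moves

def result(state, action):
    """Transition model: swap the blank with the adjacent tile."""
    i = state.index(0)
    j = i + {"Left": -1, "Right": 1, "Up": -3, "Down": 3}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (1, 0, 2, 3, 4, 5, 6, 7, 8)       # blank one step from its goal spot
print(result(start, "Left") == GOAL)      # True
```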

Explanation:
These summaries outline the key elements of each problem, including their states, initial configurations,
possible actions, transition models, goal tests, and path costs. These toy problems serve as testbeds for
developing and evaluating search algorithms in artificial intelligence.

Searching for solution


1. Problem Formulation:
- Clearly define the problem with initial states, actions, transition models, goal tests, and path costs.
2. Building the Search Tree:
- Start with the initial state and explore possible actions, forming a tree structure.
3. Exploration:
- Evaluate states for goal attainment and expand to explore new states based on chosen actions.
4. Frontier and Expansion:
- The frontier represents states available for further exploration, expanding until a solution is found or all
options are exhausted.
5. Search Strategies:
- Strategies vary in selecting states for exploration, influencing search efficiency.
6. Avoiding Redundancy:
- Redundant paths and states should be avoided to prevent inefficiency in search.

7. Graph Search:
- Utilizes an explored set to remember visited states and avoid revisiting them.
8. Benefits of Graph Search:
- Constructs a search tree efficiently, systematically exploring states until a solution is reached.
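The graph-search idea in points 6-8 can be sketched as a short loop: a frontier of candidate paths plus an explored set that prevents revisiting states. The graph below is an invented example; any mapping of state to successors would do:

```python
# A skeleton of graph search as described above: frontier + explored set.
# The graph is an invented example for illustration.

from collections import deque

GRAPH = {
    "S": ["A", "B"],
    "A": ["S", "C"],
    "B": ["S", "C"],
    "C": ["A", "B", "G"],
    "G": [],
}

def graph_search(start, goal):
    frontier = deque([[start]])       # paths awaiting expansion (FIFO here)
    explored = set()
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:             # goal test
            return path
        if state in explored:         # avoid redundant paths
            continue
        explored.add(state)
        for nxt in GRAPH[state]:      # expansion step
            if nxt not in explored:
                frontier.append(path + [nxt])
    return None                       # frontier exhausted: no solution

print(graph_search("S", "G"))   # ['S', 'A', 'C', 'G']
```

Without the explored set, paths like S→A→S→A→... would keep the loop busy forever; with it, each state is expanded at most once.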

Infrastructure for search algorithms


1. Node Structure: Each node in the search tree contains state, parent, action, and path cost attributes,
facilitating navigation and solution extraction.
2. Child Node Generation: Child nodes are produced by applying actions to parent nodes, expanding the
search space iteratively.
3. Data Structures: The search tree is managed using parent pointers, while the frontier, holding nodes for
potential expansion, is typically implemented as a queue, with variations like FIFO and priority queues
available.
4. Explored Set: To prevent revisiting explored states, an explored set, usually a hash table, efficiently
stores previously encountered states.
5. Node vs. State: Nodes represent specific paths in the search tree, linked by parent pointers, while states
denote unique configurations of the problem space, ensuring proper management of node and state
representations during the search process.
This infrastructure serves as the foundation for various search algorithms, enabling systematic exploration of
problem spaces to find solutions effectively.
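The node infrastructure above can be sketched as a small class plus two helpers, one for child generation and one for extracting the solution by following parent pointers. The states and actions used at the bottom are invented placeholders:

```python
# A sketch of the node structure described above: state, parent, action,
# and path cost, plus child generation and solution extraction.

class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state
        self.parent = parent        # pointer up the search tree
        self.action = action        # action that produced this node
        self.path_cost = path_cost  # cost of the path from the root

def child_node(parent, action, result_state, step_cost=1):
    """Apply an action to a parent node, yielding a child node."""
    return Node(result_state, parent, action, parent.path_cost + step_cost)

def solution(node):
    """Follow parent pointers back to the root to recover the action list."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

root = Node("A")                          # placeholder states and actions
n1 = child_node(root, "go-C", "C")
n2 = child_node(n1, "go-B", "B")
print(solution(n2), n2.path_cost)         # ['go-C', 'go-B'] 2
```

This also illustrates the node-versus-state distinction: two different nodes can carry the same state if they were reached along different paths.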

Measuring problem-solving performance

1. Completeness: Determines whether the algorithm is guaranteed to find a solution if one exists. A
complete algorithm will always find a solution if it exists within the search space.
2. Optimality: Evaluates whether the strategy finds the best possible solution according to some predefined
criterion. An optimal algorithm will find the most desirable solution among all possible solutions.
3. Time Complexity: Measures the computational time required by the algorithm to find a solution. It is
often quantified by the number of nodes generated during the search process.
4. Space Complexity: Assesses the amount of memory needed to execute the search algorithm. It is usually
expressed in terms of the maximum number of nodes stored in memory during the search.
5. Cost Evaluation: Considers the trade-offs between time and space complexities, as well as the path cost
of the solution found. It involves balancing the computational resources utilized by the algorithm with the
quality of the solution obtained.

Uninformed Search
1. Uninformed Search: These strategies have no information beyond the problem definition; they can only
generate successors and distinguish goal states from non-goal states.

2. No Additional Knowledge: Uninformed methods rely solely on the problem's description, lacking
insights into state characteristics or optimal paths.
3. Node Expansion Priority: They vary in how they prioritize expanding nodes, impacting search
efficiency and completeness.
4. Completeness and Optimality: Some uninformed strategies guarantee finding a solution if one exists
(completeness), but even then they may not find the optimal one.
5. Contrast with Informed Search: Informed methods use domain knowledge or heuristics, potentially
leading to more efficient searches.

Sums on
Breadth First search
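A small worked example of breadth-first search, on an invented graph. The FIFO frontier means the shallowest unexpanded node is always chosen next, so BFS finds a shallowest goal:

```python
# Breadth-first search: FIFO frontier, shallowest nodes expanded first.
# The graph is an invented example for illustration.

from collections import deque

GRAPH = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": ["G"], "F": ["G"], "G": [],
}

def bfs(start, goal):
    frontier = deque([[start]])
    explored = {start}
    while frontier:
        path = frontier.popleft()          # FIFO: expand shallowest first
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in explored:
                explored.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs("A", "G"))   # ['A', 'B', 'E', 'G'] -- a shallowest path to G
```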

Depth First Search
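A depth-first search sketch on a small invented graph. The LIFO stack frontier means one branch is followed as deep as possible before backtracking, so the path found need not be the shallowest:

```python
# Depth-first search: LIFO stack frontier, deepest node expanded first.
# The graph is an invented example for illustration.

GRAPH = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": ["G"], "F": ["G"], "G": [],
}

def dfs(start, goal):
    frontier = [[start]]                # LIFO stack of paths
    explored = set()
    while frontier:
        path = frontier.pop()           # deepest node first
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt in GRAPH[state]:
            if nxt not in explored:
                frontier.append(path + [nxt])
    return None

print(dfs("A", "G"))   # ['A', 'C', 'F', 'G'] -- not necessarily shallowest
```

Comparing the two on the same graph makes the trade-off in points 3 and 4 above concrete: both find a solution here, but only BFS is guaranteed to find a shallowest one.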
