Lecture 4: Heuristic Search, CS 538: Artificial Intelligence, Fall 2007, Prof. Derek Harter (Computer Science study notes)

Lecture slides from the CS 538: Artificial Intelligence course at Texas A&M University - Commerce, Fall 2007. The lecture covers heuristic search, including A* search, heuristic design, and local search. The slides also discuss the properties of A* and the importance of admissible heuristics.

CS 538: Artificial Intelligence
Fall 2007
Lecture 4: Heuristic Search
Derek Harter – Texas A&M University - Commerce
Many slides over the course adapted from Srini Narayanan, Dan Klein, Stuart Russell and Andrew Moore

Announcements

Uniform Cost: Problems
- Remember: explores increasing cost contours
- The good: UCS is complete and optimal!
- The bad:
  - Explores options in every "direction"
  - No information about goal location
[Figure: cost contours c ≤ 1, c ≤ 2, c ≤ 3 expanding from Start, with Goal off to one side]

Best-First / Greedy Search
[Figure: Romania road map with a table of straight-line distances to Bucharest for each city]

Best-First / Greedy Search
- Expand the node that seems closest…
- What can go wrong?

Best First Greedy Search
  Algorithm                   Complete   Optimal   Time     Space
  Greedy Best-First Search    Y*         N         O(b^m)   O(b^m)
- What do we need to do to make it complete?
- Can we make it optimal?

Combining UCS and Greedy
- Uniform-cost orders by path cost, or backward cost g(n)
- Best-first orders by goal proximity, or forward cost h(n)
- A* Search orders by the sum: f(n) = g(n) + h(n)
[Figure: example search graph with step costs and heuristic values; example from Teg Grenager]

When should A* terminate?
[Figure: small graph with states S, A, B, G, step costs, and heuristic values]
- Should we stop when we enqueue a goal?
- No: only stop when we dequeue a goal

Optimality of A*: Blocking
- Proof:
  - What could go wrong?
  - We'd have to pop a suboptimal goal off the fringe queue
  - This can't happen:
    - Imagine a suboptimal goal G' is on the queue
    - Consider any unexpanded (fringe) node n on a shortest path to optimal G
    - n will be popped before G'
- … This proof assumed tree search! Where?

What to do with revisited states?
[Figure: small graph with step costs c and heuristic values h]
- The heuristic h is clearly admissible

What to do with revisited states?
[Figure: same graph annotated with f = g + h values for the generated nodes]
- If we discard this new node, then the search algorithm expands the goal node next and returns a non-optimal solution

Consistency
- Wait, how do we know we expand in increasing f value?
- Couldn't we pop some node n, and find its child n' to have lower f value?
- YES:
- What do we need to do to fix this?
- Consistency: real cost always exceeds reduction in heuristic
[Figure: nodes A, B, G with edge costs and heuristic values illustrating an inconsistent heuristic]

Admissibility and Consistency
- A consistent heuristic is also admissible [left as an exercise]
- An admissible heuristic may not be consistent, but many admissible heuristics are consistent

UCS vs A* Contours
- Uniform-cost expanded in all directions
- A* expands mainly toward the goal, but does hedge its bets to ensure optimality
[Figure: expansion contours from Start to Goal for UCS and for A*]

Example: 8-Puzzle
- What are the states?
- What are the actions?
- What states can I reach from the start state?
- What should the costs be?

8-Puzzle I
- Number of tiles misplaced?
- Why is it admissible?
- h(start) = 8
- This is a relaxed-problem heuristic

  Average nodes expanded when optimal path has length…
            4 steps    8 steps    12 steps
  ID        112        6,300      3.6 x 10^6
  TILES     13         39         227
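As a concrete reference for the A* slides above, here is a minimal Python sketch of A* graph search; it is not part of the original lecture. It orders the fringe by f(n) = g(n) + h(n) and, as the slides stress, stops only when a goal is dequeued. The successor and heuristic functions, and the small example graph at the end, are illustrative assumptions.

    import heapq
    import itertools

    def astar(start, is_goal, successors, h):
        """A* graph search ordering the fringe by f(n) = g(n) + h(n).
        successors(state) yields (next_state, step_cost) pairs; h(state)
        estimates the remaining cost. Returns (path, cost) or None."""
        counter = itertools.count()          # tie-breaker so the heap never compares states
        fringe = [(h(start), next(counter), 0, start, [start])]
        expanded = {}                        # cheapest g at which each state was expanded
        while fringe:
            f, _, g, state, path = heapq.heappop(fringe)
            if is_goal(state):               # stop when a goal is dequeued, not enqueued
                return path, g
            if state in expanded and expanded[state] <= g:
                continue                     # already expanded at least as cheaply
            expanded[state] = g
            for nxt, cost in successors(state):
                g2 = g + cost
                heapq.heappush(fringe, (g2 + h(nxt), next(counter), g2, nxt, path + [nxt]))
        return None

    # Hypothetical 4-node graph and admissible heuristic, just to exercise the function.
    graph = {'S': [('a', 1), ('b', 4)], 'a': [('G', 5)], 'b': [('G', 1)], 'G': []}
    h_est = {'S': 4, 'a': 5, 'b': 1, 'G': 0}
    print(astar('S', lambda s: s == 'G', lambda s: graph[s], lambda s: h_est[s]))
    # -> (['S', 'b', 'G'], 5)

With an admissible (and consistent) heuristic, as in this toy example, the first goal dequeued is guaranteed to be optimal, which is the property the optimality proof above relies on.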
8-Puzzle II
- What if we had an easier 8-puzzle where any tile could slide any one direction at any time?
- Total Manhattan distance
- Why admissible?
- h(start) = 3 + 1 + 2 + … = 18

  Average nodes expanded when optimal path has length…
              4 steps    8 steps    12 steps
  TILES       13         39         227
  MANHATTAN   12         25         73

Course Scheduling
- From the university's perspective:
  - Set of courses {c1, c2, … cn}
  - Set of room / times {r1, r2, … rn}
  - Each pairing (ck, rm) has a cost wkm
  - What's the best assignment of courses to rooms?
- States: list of pairings
- Actions: add a legal pairing
- Costs: cost of the new pairing
- Admissible heuristics?

Other A* Applications
- Pathing / routing problems
- Resource planning problems
- Robot motion planning
- Language analysis
- Machine translation
- Speech recognition
- …

Summary: A*
- A* uses both backward costs and (estimates of) forward costs
- A* is optimal with admissible heuristics
- Heuristic design is key: often use relaxed problems

Example: N-Queens
- What are the states?
- What is the start?
- What is the goal?
- What are the actions?
- What should the costs be?

Types of Problems
- Planning problems:
  - We want a path to a solution (examples?)
  - Usually want an optimal path
  - Incremental formulations
- Identification problems:
  - We actually just want to know what the goal is (examples?)
  - Usually want an optimal goal
  - Complete-state formulations
  - Iterative improvement algorithms

Example: N-Queens
- Start wherever, move queens to reduce conflicts
- Almost always solves large n-queens nearly instantly

The Shape of an Easy Problem
- This and next several slides from Goldberg '89

The Shape of a Harder Problem

The Shape of a Yet Harder Problem

Simulated Annealing
- Idea: Escape local maxima by allowing downhill moves
- But make them rarer as time goes on

Simulated Annealing
- Theoretical guarantee:
  - Stationary distribution:
  - If T decreased slowly enough, will converge to optimal state!
- Is this an interesting guarantee?
- Sounds like magic, but reality is reality:
  - The more downhill steps you need to escape, the less likely you are to ever make them all in a row
  - People think hard about ridge operators which let you jump around the space in better ways

Beam Search
- Like greedy search, but keep K states at all times:
- Variables: beam size, encourage diversity?
- The best choice in MANY practical settings
- Complete? Optimal?
- Why do we still need optimal methods?
[Figure: comparison of Greedy Search and Beam Search frontiers]

The Basic Genetic Algorithm
1. Generate a random population of chromosomes
2. Until the end condition is met, create a new population by repeating the following steps:
   1. Evaluate the fitness of each chromosome
   2. Select two parent chromosomes from the population, weighted by their fitness
   3. With probability pc, cross over the parents to form a new offspring
   4. With probability pm, mutate the new offspring at each position on the chromosome
   5. Place the new offspring in the new population
3. Return the best solution in the current population

[Roadmap: Search problems; Blind search; Heuristic search: best-first and A*; Construction of heuristics; Variants of A*; Local search]

Continuous Problems
- Placing airports in Romania
- States: (x1, y1, x2, y2, x3, y3)
- Cost: sum of squared distances to closest city
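To make the simulated-annealing idea from the local-search slides concrete, here is a small Python sketch, not part of the original lecture: downhill moves are accepted with probability exp(delta / T), which shrinks as the temperature T is cooled. The neighbor function, the geometric cooling schedule, and the one-dimensional example landscape are illustrative assumptions.

    import math
    import random

    def simulated_annealing(start, score, neighbor, t0=1.0, cooling=0.995, t_min=1e-3):
        """Maximize score(state); accept downhill moves with probability
        exp(delta / T), which shrinks as the temperature T is cooled."""
        current, best, t = start, start, t0
        while t > t_min:
            candidate = neighbor(current)
            delta = score(candidate) - score(current)
            if delta > 0 or random.random() < math.exp(delta / t):
                current = candidate
                if score(current) > score(best):
                    best = current
            t *= cooling                 # geometric cooling schedule (an assumption)
        return best

    # Hypothetical 1-D landscape with several local maxima, just to exercise the function.
    score = lambda x: math.sin(x) + math.sin(3.0 * x) / 3.0
    neighbor = lambda x: x + random.uniform(-0.5, 0.5)
    print(simulated_annealing(0.0, score, neighbor))

Cooling T slowly enough is what gives the convergence guarantee mentioned on the slides; in practice the schedule and neighbor move are tuned empirically.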