
AI Heuristic Search - Beam Search - Simulated Annealing

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. For instance, optical character recognition is no longer perceived as an example of "artificial intelligence", having become a routine technology. Capabilities currently classified as AI include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), self-driving cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.
In computer science, beam search is a heuristic search algorithm that explores a graph by expanding the most promising node in a limited set. Beam search is an optimization of best-first search that reduces its memory requirements. Best-first search is a graph search which orders all partial solutions (states) according to some heuristic which attempts to predict how close a partial solution is to a complete solution (goal state). But in beam search, only a predetermined number of best partial solutions are kept as candidates.
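To make the relationship to best-first search concrete, here is a minimal sketch in Python: the frontier is expanded as in best-first search but truncated to a fixed beam width after every step. The is_goal, successors, and heuristic callables and the small numeric demo at the end are hypothetical illustrations, not something taken from this page.

    from heapq import nsmallest

    def beam_search(start, is_goal, successors, heuristic, beam_width=2):
        # Best-first search keeps every partial solution; beam search keeps
        # only the best `beam_width` of them after each expansion step.
        frontier = [start]
        while frontier:
            for state in frontier:
                if is_goal(state):
                    return state
            # pool the children of every state kept in the beam
            candidates = [s for state in frontier for s in successors(state)]
            # truncate the frontier to the most promising candidates
            frontier = nsmallest(beam_width, candidates, key=heuristic)
        return None

    # toy usage: reach 10 from 0 using +1 / +3 moves, guided by distance to 10
    print(beam_search(0, lambda s: s == 10,
                      lambda s: [s + 1, s + 3],
                      lambda s: abs(10 - s), beam_width=2))

The worked example on slides 4-19 below walks through the same loop with a beam width of 2.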
Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space. It is often used when the search space is discrete (e.g., all tours that visit a given set of cities). For problems where finding an approximate global optimum is more important than finding a precise local optimum in a fixed amount of time, simulated annealing may be preferable to alternatives such as gradient descent.
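As a rough sketch of how the technique is typically applied, the following Python minimises a cost function with a geometric cooling schedule; the cost function, neighbour move, and cooling parameters are illustrative assumptions rather than anything prescribed by the text above.

    import math
    import random

    def simulated_annealing(initial, cost, random_neighbour,
                            T0=1.0, cooling=0.995, steps=10_000):
        current, best = initial, initial
        T = T0
        for _ in range(steps):
            candidate = random_neighbour(current)
            dE = cost(current) - cost(candidate)   # dE > 0 means the candidate is better
            # always accept an improvement; accept a worse move with probability exp(dE / T)
            if dE > 0 or random.random() < math.exp(dE / T):
                current = candidate
            if cost(current) < cost(best):
                best = current
            T *= cooling                           # gradually lower the temperature
        return best

    # toy usage: a bumpy 1-D function whose global minimum is at x = 0
    f = lambda x: x * x + 3 * abs(math.sin(5 * x))
    print(round(simulated_annealing(4.0, f, lambda x: x + random.uniform(-0.5, 0.5)), 2))

The slide transcript below applies the same acceptance rule to a search tree, with value(node) defined as the negated heuristic.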

Find me on:
AFCIT
http://www.afcit.xyz

YouTube
https://www.youtube.com/channel/UCuewOYbBXH5gwhfOrQOZOdw

Google Plus
https://plus.google.com/u/0/+AhmedGadIT

SlideShare
https://www.slideshare.net/AhmedGadFCIT

LinkedIn
https://www.linkedin.com/in/ahmedfgad/

ResearchGate
https://www.researchgate.net/profile/Ahmed_Gad13

Academia
https://www.academia.edu/

Google Scholar
https://scholar.google.com.eg/citations?user=r07tjocAAAAJ&hl=en

Mendeley
https://www.mendeley.com/profiles/ahmed-gad12/

ORCID
https://orcid.org/0000-0003-1978-8574

Stack Overflow
http://stackoverflow.com/users/5426539/ahmed-gad

Twitter
https://twitter.com/ahmedfgad

Facebook
https://www.facebook.com/ahmed.f.gadd

Pinterest
https://www.pinterest.com/ahmedfgad/


AI Heuristic Search - Beam Search - Simulated Annealing

  1. AI Heuristic Search - Beam Search - Simulated Annealing. Menoufia University, Faculty of Computers and Information, All Departments, Artificial Intelligence. Ahmed Fawzy Gad, ahmed.fawzy@ci.menofia.edu.eg
  2. Beam Search
  3. Hill Climbing vs. Beam Search • Hill climbing explores the nodes of a single branch until the goal is found or no further nodes can be explored. • Beam search explores more than one path at a time; a factor k determines the number of branches explored simultaneously. • If k = 2, two branches are explored at a time; if k = 4, four branches are explored simultaneously. • The branches kept are the best ones according to the heuristic evaluation function, as in the selection sketch below.
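The selection rule in the last bullet can be sketched in a few lines; the child values used here (f = 7, j = 8, b = 10) are the ones that appear in the example on the following slides, and a lower heuristic value is better.

    from heapq import nsmallest

    # heuristic value of each candidate child (lower is better); the values
    # f = 7, j = 8, b = 10 are the ones used in the example slides
    children = {'f': 7, 'j': 8, 'b': 10}

    hill_climbing_pick = min(children, key=children.get)   # keeps one branch:     'f'
    beam_pick = nsmallest(2, children, key=children.get)   # keeps k = 2 branches: ['f', 'j']
    print(hill_climbing_pick, beam_pick)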
  4-19. Beam Search example with k = 2; the goal is node k (the number after each node is its heuristic value). Starting at node a, its children are f(7), j(8), and b(10). The best k = 2 successors, f(7) and j(8), are kept and expanded. Their children together are g(3), e(5), and k(0), giving the ordered candidate list k(0), g(3), e(5). The best k = 2 successors, k(0) and g(3), are kept. Node k is the goal, so the search stops.
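The walkthrough above can be reproduced with a short script. The heuristic values are read off the slides, but the full parent/child structure of the graph is only partly visible there, so the adjacency used below is an assumption made for illustration.

    from heapq import nsmallest

    # heuristic values read off the slides
    heuristic = {'a': 10, 'b': 10, 'f': 7, 'j': 8, 'g': 3, 'e': 5, 'k': 0}
    # assumed adjacency reconstructed from the slides (only partly visible there)
    children = {'a': ['f', 'j', 'b'], 'f': ['e', 'g'], 'j': ['k'],
                'b': [], 'e': [], 'g': [], 'k': []}

    def beam_search(start, goal, k=2):
        frontier = [start]
        while frontier:
            if goal in frontier:
                return goal
            # expand every node kept in the beam and pool their children
            candidates = [c for node in frontier for c in children[node]]
            if not candidates:
                return None
            # keep only the k most promising successors (lowest heuristic)
            frontier = nsmallest(k, candidates, key=heuristic.get)
        return None

    print(beam_search('a', 'k'))   # follows the slides: a -> {f, j} -> {k, g} -> goal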
  20. Simulated Annealing
  21. Simulated Annealing Steps: 1. Select a start node (the root node). 2. Randomly select a child of the current node and compute a value reflecting how good that child is, e.g. value(node) = −heuristic(node). 3. Select the child if it is better than the current node; a node is better than the current one if ΔE = value(next) − value(current) > 0. 4. If the child is not better (ΔE < 0), it can still be selected, with probability p = e^(ΔE/T), where T is the temperature; otherwise try another child. 5. Stop when no improvement can be found or after a fixed amount of time. A sketch of these steps follows below.
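A minimal sketch of these steps on the example graph might look as follows, with value(node) = −heuristic(node) as on the slide; the adjacency is again an assumption reconstructed from the example slides, and the run is stochastic just like the procedure it imitates.

    import math
    import random

    # heuristic values read off the slides; value(node) = -heuristic(node)
    heuristic = {'a': 10, 'b': 10, 'f': 7, 'j': 8, 'g': 3, 'e': 5, 'i': 6, 'k': 0}
    # assumed adjacency reconstructed from the example slides
    children = {'a': ['f', 'j', 'b'], 'f': ['e', 'g'], 'j': ['k'],
                'e': ['i'], 'i': ['k'], 'b': [], 'g': [], 'k': []}

    def value(node):
        return -heuristic[node]

    def simulated_annealing(start, goal, T=10.0, max_steps=100):
        current = start
        for _ in range(max_steps):
            if current == goal:
                return current                    # step 5: goal reached, stop
            kids = children[current]
            if not kids:
                return current                    # step 5: no improvement possible
            nxt = random.choice(kids)             # step 2: pick a child at random
            dE = value(nxt) - value(current)      # step 3: compare with the current node
            if dE > 0 or random.random() < math.exp(dE / T):
                current = nxt                     # step 4: worse child accepted with prob e^(dE/T)
        return current

    # the walk is stochastic: runs that pick f -> e -> i (or j) reach the goal k,
    # while runs that wander to b or g stop there, since those nodes have no
    # children on the slides
    print([simulated_annealing('a', 'k') for _ in range(5)])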
  22-36. Simulated Annealing example with T = 10, starting at node a (heuristic 10). The children of a are f(7), j(8), and b(10); one child is selected at random: f(7). Check whether f(7) is better than the current node: ΔE = value(next) − value(current), with value(f) = −heuristic(f) = −7 and value(a) = −heuristic(a) = −10, so ΔE = −7 − (−10) = +3. Since ΔE > 0, f(7) is selected with probability 1 and the search moves to f.
  37-49. The children of f are e(5) and g(3); one is selected at random: e(5). ΔE = value(e) − value(f) = −5 − (−7) = +2. Since ΔE > 0, e(5) is selected with probability 1 and the search moves to e.
  50-63. The only child of e is i(6). ΔE = value(i) − value(e) = −6 − (−5) = −1. Since ΔE < 0, i(6) is not better than the current node, but it can still be selected with probability p = e^(ΔE/T) = e^(−1/10) ≈ 0.905. Because i(6) is the only child of e(5), it is selected even though its acceptance probability is less than 1, and the search moves to i.
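As a quick check of the acceptance probability just computed, a few lines of Python reproduce p and the accept/reject draw (0.905 is simply exp(−0.1) rounded):

    import math
    import random

    dE, T = -1, 10
    p = math.exp(dE / T)            # acceptance probability for the worse child i(6)
    print(round(p, 3))              # 0.905
    accepted = random.random() < p  # i(6) is taken with probability p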
  64-78. The only child of i is k(0); it is selected. ΔE = value(k) − value(i) = 0 − (−6) = +6. Since ΔE > 0, k(0) is selected with probability 1. Node k is the goal, so the search stops.
