Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". As machines become increasingly capable, mental facilities once thought to require intelligence are removed from the definition. For instance, optical character recognition is no longer perceived as an example of "artificial intelligence", having become a routine technology. Capabilities currently classified as AI include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), self-driving cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.
In computer science, beam search is a heuristic search algorithm that explores a graph by expanding the most promising node in a limited set. Beam search is an optimization of best-first search that reduces its memory requirements. Best-first search is a graph search which orders all partial solutions (states) according to some heuristic which attempts to predict how close a partial solution is to a complete solution (goal state). But in beam search, only a predetermined number of best partial solutions are kept as candidates.
Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space. It is often used when the search space is discrete (e.g., all tours that visit a given set of cities). For problems where finding an approximate global optimum is more important than finding a precise local optimum in a fixed amount of time, simulated annealing may be preferable to alternatives such as gradient descent.
Find me on:
AFCIT
http://www.afcit.xyz
YouTube
https://www.youtube.com/channel/UCuewOYbBXH5gwhfOrQOZOdw
Google Plus
https://plus.google.com/u/0/+AhmedGadIT
SlideShare
https://www.slideshare.net/AhmedGadFCIT
LinkedIn
https://www.linkedin.com/in/ahmedfgad/
ResearchGate
https://www.researchgate.net/profile/Ahmed_Gad13
Academia
https://www.academia.edu/
Google Scholar
https://scholar.google.com.eg/citations?user=r07tjocAAAAJ&hl=en
Mendeley
https://www.mendeley.com/profiles/ahmed-gad12/
ORCID
https://orcid.org/0000-0003-1978-8574
Stack Overflow
http://stackoverflow.com/users/5426539/ahmed-gad
Twitter
https://twitter.com/ahmedfgad
Facebook
https://www.facebook.com/ahmed.f.gadd
Pinterest
https://www.pinterest.com/ahmedfgad/
AI Heuristic Search - Beam Search - Simulated Annealing
1. AI Heuristic Search
Beam Search - Simulated Annealing
MENOUFIA UNIVERSITY
FACULTY OF COMPUTERS AND INFORMATION
ALL DEPARTMENTS
ARTIFICIAL INTELLIGENCE
Ahmed Fawzy Gad
ahmed.fawzy@ci.menofia.edu.eg
3. Hill Climbing Vs. Beam Search
• Hill climbing explores a single branch, following the best child at each step, until the goal is found or no further nodes can be explored.
• Beam search explores more than one path at the same time. A factor k (the beam width) determines the number of branches explored at a time.
• If k=2, two branches are explored at a time; for k=4, four branches are explored simultaneously.
• The branches kept are the best ones according to the heuristic evaluation function used.
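The idea above can be sketched as follows. This is a minimal sketch, not the lecture's exact algorithm: `expand`, `heuristic`, and `is_goal` are hypothetical problem-specific callbacks, and `k` is the beam width described on the slide.

```python
import heapq

def beam_search(start, expand, heuristic, is_goal, k=2):
    """Keep only the k best partial paths (the 'beam') at each level.

    expand(node)    -> iterable of child nodes (problem-specific)
    heuristic(node) -> estimated distance to the goal (lower is better)
    is_goal(node)   -> True when node is a goal state
    """
    if is_goal(start):
        return start
    beam = [start]
    while beam:
        # Expand every node currently in the beam.
        children = [c for n in beam for c in expand(n)]
        if not children:
            return None  # nothing left to explore
        for c in children:
            if is_goal(c):
                return c
        # Keep only the k most promising children (best heuristic values).
        beam = heapq.nsmallest(k, children, key=heuristic)
    return None
```

With k=1 this degenerates into hill climbing on the heuristic; larger k trades memory for a wider search.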
21. Simulated Annealing
Steps
1. Select a start node (root node).
2. Randomly select a child of the current node and calculate a value reflecting how good that child is, e.g. value(node) = -heuristic(node).
3. Select the child if it is better than the current node; otherwise try another child.
A node is better than the current node if ΔE = value(next) - value(current) > 0.
If ΔE < 0, try to find another child.
4. If the child is not better than the current node, it is still selected with probability p = e^(ΔE/T), where ΔE = value(next) - value(current) and T is a temperature.
5. Stop if no improvement can be found or after a fixed time.
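The steps above can be sketched in code. This is a hedged illustration, not the lecture's reference implementation: `children` and `heuristic` are assumed problem-specific callbacks, and a fixed temperature T with a step limit stands in for step 5's stopping rule.

```python
import math
import random

def simulated_annealing(start, children, heuristic, T=10.0, max_steps=100):
    """value(node) = -heuristic(node); always accept a better child,
    accept a worse one with probability p = exp(dE / T)."""
    current = start
    for _ in range(max_steps):                    # step 5: fixed budget
        kids = children(current)
        if not kids:
            break                                 # no improvement possible
        nxt = random.choice(kids)                 # step 2: random child
        dE = -heuristic(nxt) - (-heuristic(current))
        if dE > 0:                                # step 3: better child
            current = nxt
        elif random.random() < math.exp(dE / T):  # step 4: worse child
            current = nxt
    return current
```

A real implementation would also cool T over time so worse moves become less likely as the search progresses.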
30-35. Simulated Annealing

Current | Children
a | f(7), j(8), b(10)

Check if next node f(7) is better than current node a(10):
ΔE = value(next) - value(current)
ΔE = value(f 7) - value(a 10)
value(f 7) = -heuristic(f 7) = -7
value(a 10) = -heuristic(a 10) = -10
ΔE = -7 - (-10) = +3
∵ ΔE > 0
∴ f(7) will be selected with probability 1.
43-48. Simulated Annealing

Current | Children
a | f(7), j(8), b(10)
f | e(5), g(3)

Check if next node e(5) is better than current node f(7):
ΔE = value(next) - value(current)
ΔE = value(e 5) - value(f 7)
value(e 5) = -heuristic(e 5) = -5
value(f 7) = -heuristic(f 7) = -7
ΔE = -5 - (-7) = +2
∵ ΔE > 0
∴ e(5) will be selected with probability 1.
55-62. Simulated Annealing

Current | Children
a | f(7), j(8), b(10)
f | e(5), g(3)
e | i(6)

Check if next node i(6) is better than current node e(5):
ΔE = value(next) - value(current)
ΔE = value(i 6) - value(e 5)
value(i 6) = -heuristic(i 6) = -6
value(e 5) = -heuristic(e 5) = -5
ΔE = -6 - (-5) = -1
∵ ΔE < 0
∴ i(6) can be selected with probability p = e^(ΔE/T)
With T = 10: p = e^(-1/10) = 0.905
Because the only child of e(5) is i(6), it will be selected even though its probability is not 1.
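The acceptance probability used here can be checked numerically (T = 10, as on the slides):

```python
import math

dE = -1           # value(i 6) - value(e 5) = -6 - (-5)
T = 10            # temperature used on the slides
p = math.exp(dE / T)
print(round(p, 3))  # 0.905
```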
68-75. Simulated Annealing

Current | Children
a | f(7), j(8), b(10)
f | e(5), g(3)
e | i(6)
i | k(0)

Check if next node k(0) is better than current node i(6):
ΔE = value(next) - value(current)
ΔE = value(k 0) - value(i 6)
value(k 0) = -heuristic(k 0) = 0
value(i 6) = -heuristic(i 6) = -6
ΔE = 0 - (-6) = +6
∵ ΔE > 0
∴ k(0) will be selected with probability 1.
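The whole worked path a → f → e → i → k can be replayed with the heuristic values from the slides; every ΔE below matches the one computed on the corresponding slide (+3, +2, -1, +6).

```python
# Heuristic values taken from the slides' worked example.
h = {'a': 10, 'f': 7, 'e': 5, 'i': 6, 'k': 0}

def value(n):
    return -h[n]  # value(node) = -heuristic(node)

path = ['a', 'f', 'e', 'i', 'k']
deltas = [value(nxt) - value(cur) for cur, nxt in zip(path, path[1:])]
print(deltas)  # [3, 2, -1, 6]
```

Only the e → i step has ΔE < 0, which is exactly where the probabilistic acceptance rule was needed.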