AI Heuristic Search
Beam Search - Simulated Annealing
MENOUFIA UNIVERSITY
FACULTY OF COMPUTERS AND INFORMATION
ALL DEPARTMENTS
ARTIFICIAL INTELLIGENCE
Ahmed Fawzy Gad
ahmed.fawzy@ci.menofia.edu.eg
Beam Search
Hill Climbing Vs. Beam Search
• Hill climbing explores a single branch at a time, moving to the best child at each step until the goal is found or no further node can be explored.
• Beam search explores several paths in parallel. A beam width k determines the number of branches explored at a time.
• If k=2, two branches are explored at a time; if k=4, four branches are explored simultaneously.
• At each level, the branches kept are the best ones according to the heuristic evaluation function.
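The comparison above can be sketched as a minimal Python function. The tree and heuristic values below are taken from the worked example on the following slides; nodes whose children the slides do not show (b, g, k) are assumed here to be leaves.

```python
import heapq

# Heuristic values and tree structure from the slides' example.
heuristic = {'a': 10, 'b': 10, 'e': 5, 'f': 7, 'g': 3, 'i': 6, 'j': 8, 'k': 0}
children = {
    'a': ['f', 'j', 'b'],
    'f': ['g', 'e'],
    'j': ['k'],
    'e': ['i'],
    'i': ['k'],
}

def beam_search(start, goal, k):
    """Keep only the k best successors (lowest heuristic) at each level."""
    beam = [start]
    while beam:
        if goal in beam:
            return beam
        # Gather all children of every node in the current beam.
        successors = [c for node in beam for c in children.get(node, [])]
        if not successors:
            return None  # no more nodes to explore
        # Keep the k successors with the smallest heuristic value.
        beam = heapq.nsmallest(k, successors, key=heuristic.get)
    return None

print(beam_search('a', 'k', 2))  # → ['k', 'g']
```

With k=2 the beam first keeps f and j, then k and g; at that point the goal k has been found, matching the slide walkthrough.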
Beam Search, k=2
Goal – Node k

Current | Children
a       | f(7), j(8), b(10)
f       | g(3), e(5)
j       | k(0)

1. Expanding the start node a gives the children f(7), j(8), and b(10).
2. The best k=2 successors, f(7) and j(8), are kept; b(10) is discarded.
3. Expanding f gives g(3) and e(5); expanding j gives k(0).
4. Sorting the new candidates by heuristic gives k(0), g(3), e(5). The best k=2 successors, k(0) and g(3), are kept.
5. k is the goal node, so the search stops. GOAL
Simulated Annealing
Steps
1. Select a start node (the root node).
2. Randomly select a child of the current node and calculate a value reflecting how good that child is, such as value(node) = −heuristic(node).
3. Select the child if it is better than the current node; otherwise try another child.
   A node is better than the current node if ΔE = value(next) − value(current) > 0.
   If ΔE < 0, try to find another child.
4. If the child is not better than the current node, it is still selected with probability p = e^(ΔE/T), where ΔE = value(next) − value(current) and T is a temperature.
5. Stop when no improvement can be found or after a fixed time.
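The steps above can be sketched in Python. The tree and heuristic values are taken from the example on the following slides, and the temperature T is held fixed at 10 as in that example; a full simulated-annealing implementation would normally decrease T over time.

```python
import math
import random

# Heuristic values and tree structure from the slides' example.
heuristic = {'a': 10, 'b': 10, 'e': 5, 'f': 7, 'g': 3, 'i': 6, 'j': 8, 'k': 0}
children = {
    'a': ['f', 'j', 'b'],
    'f': ['e', 'g'],
    'j': ['k'],
    'e': ['i'],
    'i': ['k'],
}

def value(node):
    # Step 2: higher value means a better node.
    return -heuristic[node]

def simulated_annealing(start, T=10, max_steps=100):
    current = start
    path = [current]
    for _ in range(max_steps):
        options = children.get(current, [])
        if not options:
            break  # no children left to explore: stop
        nxt = random.choice(options)           # step 2: random child
        delta_e = value(nxt) - value(current)  # step 3: compare with current
        # Accept if better (delta_e > 0); otherwise accept with
        # probability e^(dE/T) (step 4).
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = nxt
            path.append(current)
        if heuristic[current] == 0:
            break  # goal reached
    return path

random.seed(1)
print(simulated_annealing('a'))
```

Because the move to a child is random, different runs can take different paths; a run that wanders into a leaf that is not the goal (such as b or g) simply stops there.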
Simulated Annealing
Example – T=10

Current | Children
a       | f(7), j(8), b(10)

Starting from node a, expand it; its children are f(7), j(8), and b(10).
Randomly select a child, say f(7), and check whether it is better than the current node:
ΔE = value(next) − value(current)
ΔE = value(f(7)) − value(a(10))
value(f(7)) = −heuristic(f(7)) = −7
value(a(10)) = −heuristic(a(10)) = −10
ΔE = −7 − (−10) = +3
∵ ΔE > 0
∴ f(7) is selected with probability 1, and f becomes the current node.
Current | Children
f       | e(5), g(3)

Expanding f gives the children e(5) and g(3). Randomly select a child, say e(5), and check whether it is better than the current node:
ΔE = value(next) − value(current)
ΔE = value(e(5)) − value(f(7))
value(e(5)) = −heuristic(e(5)) = −5
value(f(7)) = −heuristic(f(7)) = −7
ΔE = −5 − (−7) = +2
∵ ΔE > 0
∴ e(5) is selected with probability 1, and e becomes the current node.
Current | Children
e       | i(6)

Expanding e gives a single child, i(6). Check whether it is better than the current node:
ΔE = value(next) − value(current)
ΔE = value(i(6)) − value(e(5))
value(i(6)) = −heuristic(i(6)) = −6
value(e(5)) = −heuristic(e(5)) = −5
ΔE = −6 − (−5) = −1
∵ ΔE < 0
∴ i(6) can only be selected with probability
p = e^(ΔE/T) = e^(−1/10) ≈ 0.905
Because i(6) is the only child of e(5), it is selected even though its probability is not 1, and i becomes the current node.
Current | Children
i       | k(0)

Expanding i gives a single child, k(0). Check whether it is better than the current node:
ΔE = value(next) − value(current)
ΔE = value(k(0)) − value(i(6))
value(k(0)) = −heuristic(k(0)) = 0
value(i(6)) = −heuristic(i(6)) = −6
ΔE = 0 − (−6) = +6
∵ ΔE > 0
∴ k(0) is selected with probability 1.
k is the goal node, so the search stops. GOAL
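The acceptance probability computed in the worsening step of the example (ΔE = −1, T = 10) can be checked directly in Python:

```python
import math

# p = e^(dE / T) for the worsening move from e(5) to i(6) in the example.
delta_e = -1   # value(i(6)) - value(e(5)) = -6 - (-5)
T = 10         # temperature used throughout the example
p = math.exp(delta_e / T)
print(round(p, 3))  # → 0.905
```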

More Related Content

What's hot

Solving problems by searching
Solving problems by searchingSolving problems by searching
Solving problems by searching
Luigi Ceccaroni
 
UNIT - I PROBLEM SOLVING AGENTS and EXAMPLES.pptx.pdf
UNIT - I PROBLEM SOLVING AGENTS and EXAMPLES.pptx.pdfUNIT - I PROBLEM SOLVING AGENTS and EXAMPLES.pptx.pdf
UNIT - I PROBLEM SOLVING AGENTS and EXAMPLES.pptx.pdf
JenishaR1
 
Propositional logic
Propositional logicPropositional logic
Propositional logic
Rushdi Shams
 

What's hot (20)

Uninformed search /Blind search in AI
Uninformed search /Blind search in AIUninformed search /Blind search in AI
Uninformed search /Blind search in AI
 
AI_Session 10 Local search in continious space.pptx
AI_Session 10 Local search in continious space.pptxAI_Session 10 Local search in continious space.pptx
AI_Session 10 Local search in continious space.pptx
 
A* Search Algorithm
A* Search AlgorithmA* Search Algorithm
A* Search Algorithm
 
Local beam search example
Local beam search exampleLocal beam search example
Local beam search example
 
I. Hill climbing algorithm II. Steepest hill climbing algorithm
I. Hill climbing algorithm II. Steepest hill climbing algorithmI. Hill climbing algorithm II. Steepest hill climbing algorithm
I. Hill climbing algorithm II. Steepest hill climbing algorithm
 
Local search algorithm
Local search algorithmLocal search algorithm
Local search algorithm
 
Uninformed search
Uninformed searchUninformed search
Uninformed search
 
Solving problems by searching
Solving problems by searchingSolving problems by searching
Solving problems by searching
 
AI search techniques
AI search techniquesAI search techniques
AI search techniques
 
Introduction Artificial Intelligence a modern approach by Russel and Norvig 1
Introduction Artificial Intelligence a modern approach by Russel and Norvig 1Introduction Artificial Intelligence a modern approach by Russel and Norvig 1
Introduction Artificial Intelligence a modern approach by Russel and Norvig 1
 
Backtracking
Backtracking  Backtracking
Backtracking
 
AI Lecture 3 (solving problems by searching)
AI Lecture 3 (solving problems by searching)AI Lecture 3 (solving problems by searching)
AI Lecture 3 (solving problems by searching)
 
Heuristic search-in-artificial-intelligence
Heuristic search-in-artificial-intelligenceHeuristic search-in-artificial-intelligence
Heuristic search-in-artificial-intelligence
 
UNIT - I PROBLEM SOLVING AGENTS and EXAMPLES.pptx.pdf
UNIT - I PROBLEM SOLVING AGENTS and EXAMPLES.pptx.pdfUNIT - I PROBLEM SOLVING AGENTS and EXAMPLES.pptx.pdf
UNIT - I PROBLEM SOLVING AGENTS and EXAMPLES.pptx.pdf
 
Adversarial search with Game Playing
Adversarial search with Game PlayingAdversarial search with Game Playing
Adversarial search with Game Playing
 
Lecture 16 memory bounded search
Lecture 16 memory bounded searchLecture 16 memory bounded search
Lecture 16 memory bounded search
 
Lecture 9 Markov decision process
Lecture 9 Markov decision processLecture 9 Markov decision process
Lecture 9 Markov decision process
 
Artificial Intelligence
Artificial IntelligenceArtificial Intelligence
Artificial Intelligence
 
Propositional logic
Propositional logicPropositional logic
Propositional logic
 
Uninformed Search technique
Uninformed Search techniqueUninformed Search technique
Uninformed Search technique
 

More from Ahmed Gad

Derivation of Convolutional Neural Network from Fully Connected Network Step-...
Derivation of Convolutional Neural Network from Fully Connected Network Step-...Derivation of Convolutional Neural Network from Fully Connected Network Step-...
Derivation of Convolutional Neural Network from Fully Connected Network Step-...
Ahmed Gad
 
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
Ahmed Gad
 
ICCES 2017 - Crowd Density Estimation Method using Regression Analysis
ICCES 2017 - Crowd Density Estimation Method using Regression AnalysisICCES 2017 - Crowd Density Estimation Method using Regression Analysis
ICCES 2017 - Crowd Density Estimation Method using Regression Analysis
Ahmed Gad
 
Brief Introduction to Deep Learning + Solving XOR using ANNs
Brief Introduction to Deep Learning + Solving XOR using ANNsBrief Introduction to Deep Learning + Solving XOR using ANNs
Brief Introduction to Deep Learning + Solving XOR using ANNs
Ahmed Gad
 
Graduation Project - Face Login : A Robust Face Identification System for Sec...
Graduation Project - Face Login : A Robust Face Identification System for Sec...Graduation Project - Face Login : A Robust Face Identification System for Sec...
Graduation Project - Face Login : A Robust Face Identification System for Sec...
Ahmed Gad
 

More from Ahmed Gad (20)

ICEIT'20 Cython for Speeding-up Genetic Algorithm
ICEIT'20 Cython for Speeding-up Genetic AlgorithmICEIT'20 Cython for Speeding-up Genetic Algorithm
ICEIT'20 Cython for Speeding-up Genetic Algorithm
 
NumPyCNNAndroid: A Library for Straightforward Implementation of Convolutiona...
NumPyCNNAndroid: A Library for Straightforward Implementation of Convolutiona...NumPyCNNAndroid: A Library for Straightforward Implementation of Convolutiona...
NumPyCNNAndroid: A Library for Straightforward Implementation of Convolutiona...
 
Python for Computer Vision - Revision 2nd Edition
Python for Computer Vision - Revision 2nd EditionPython for Computer Vision - Revision 2nd Edition
Python for Computer Vision - Revision 2nd Edition
 
Multi-Objective Optimization using Non-Dominated Sorting Genetic Algorithm wi...
Multi-Objective Optimization using Non-Dominated Sorting Genetic Algorithm wi...Multi-Objective Optimization using Non-Dominated Sorting Genetic Algorithm wi...
Multi-Objective Optimization using Non-Dominated Sorting Genetic Algorithm wi...
 
M.Sc. Thesis - Automatic People Counting in Crowded Scenes
M.Sc. Thesis - Automatic People Counting in Crowded ScenesM.Sc. Thesis - Automatic People Counting in Crowded Scenes
M.Sc. Thesis - Automatic People Counting in Crowded Scenes
 
Derivation of Convolutional Neural Network from Fully Connected Network Step-...
Derivation of Convolutional Neural Network from Fully Connected Network Step-...Derivation of Convolutional Neural Network from Fully Connected Network Step-...
Derivation of Convolutional Neural Network from Fully Connected Network Step-...
 
Introduction to Optimization with Genetic Algorithm (GA)
Introduction to Optimization with Genetic Algorithm (GA)Introduction to Optimization with Genetic Algorithm (GA)
Introduction to Optimization with Genetic Algorithm (GA)
 
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
 
Avoid Overfitting with Regularization
Avoid Overfitting with RegularizationAvoid Overfitting with Regularization
Avoid Overfitting with Regularization
 
Genetic Algorithm (GA) Optimization - Step-by-Step Example
Genetic Algorithm (GA) Optimization - Step-by-Step ExampleGenetic Algorithm (GA) Optimization - Step-by-Step Example
Genetic Algorithm (GA) Optimization - Step-by-Step Example
 
ICCES 2017 - Crowd Density Estimation Method using Regression Analysis
ICCES 2017 - Crowd Density Estimation Method using Regression AnalysisICCES 2017 - Crowd Density Estimation Method using Regression Analysis
ICCES 2017 - Crowd Density Estimation Method using Regression Analysis
 
Backpropagation: Understanding How to Update ANNs Weights Step-by-Step
Backpropagation: Understanding How to Update ANNs Weights Step-by-StepBackpropagation: Understanding How to Update ANNs Weights Step-by-Step
Backpropagation: Understanding How to Update ANNs Weights Step-by-Step
 
Computer Vision: Correlation, Convolution, and Gradient
Computer Vision: Correlation, Convolution, and GradientComputer Vision: Correlation, Convolution, and Gradient
Computer Vision: Correlation, Convolution, and Gradient
 
Python for Computer Vision - Revision
Python for Computer Vision - RevisionPython for Computer Vision - Revision
Python for Computer Vision - Revision
 
Anime Studio Pro 10 Tutorial as Part of Multimedia Course
Anime Studio Pro 10 Tutorial as Part of Multimedia CourseAnime Studio Pro 10 Tutorial as Part of Multimedia Course
Anime Studio Pro 10 Tutorial as Part of Multimedia Course
 
Brief Introduction to Deep Learning + Solving XOR using ANNs
Brief Introduction to Deep Learning + Solving XOR using ANNsBrief Introduction to Deep Learning + Solving XOR using ANNs
Brief Introduction to Deep Learning + Solving XOR using ANNs
 
Operations in Digital Image Processing + Convolution by Example
Operations in Digital Image Processing + Convolution by ExampleOperations in Digital Image Processing + Convolution by Example
Operations in Digital Image Processing + Convolution by Example
 
MATLAB Code + Description : Real-Time Object Motion Detection and Tracking
MATLAB Code + Description : Real-Time Object Motion Detection and TrackingMATLAB Code + Description : Real-Time Object Motion Detection and Tracking
MATLAB Code + Description : Real-Time Object Motion Detection and Tracking
 
MATLAB Code + Description : Very Simple Automatic English Optical Character R...
MATLAB Code + Description : Very Simple Automatic English Optical Character R...MATLAB Code + Description : Very Simple Automatic English Optical Character R...
MATLAB Code + Description : Very Simple Automatic English Optical Character R...
 
Graduation Project - Face Login : A Robust Face Identification System for Sec...
Graduation Project - Face Login : A Robust Face Identification System for Sec...Graduation Project - Face Login : A Robust Face Identification System for Sec...
Graduation Project - Face Login : A Robust Face Identification System for Sec...
 

Recently uploaded

Spellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseSpellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please Practise
AnaAcapella
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
ciinovamais
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
QucHHunhnh
 

Recently uploaded (20)

How to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POSHow to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POS
 
Accessible Digital Futures project (20/03/2024)
Accessible Digital Futures project (20/03/2024)Accessible Digital Futures project (20/03/2024)
Accessible Digital Futures project (20/03/2024)
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.ppt
 
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
 
Spellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseSpellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please Practise
 
Graduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - EnglishGraduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - English
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptx
 
How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17
 
HMCS Max Bernays Pre-Deployment Brief (May 2024).pptx
HMCS Max Bernays Pre-Deployment Brief (May 2024).pptxHMCS Max Bernays Pre-Deployment Brief (May 2024).pptx
HMCS Max Bernays Pre-Deployment Brief (May 2024).pptx
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 
Sociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning ExhibitSociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning Exhibit
 
Spatium Project Simulation student brief
Spatium Project Simulation student briefSpatium Project Simulation student brief
Spatium Project Simulation student brief
 
This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
Single or Multiple melodic lines structure
Single or Multiple melodic lines structureSingle or Multiple melodic lines structure
Single or Multiple melodic lines structure
 
SOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning PresentationSOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning Presentation
 
Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...
 
On National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsOn National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan Fellows
 

AI Heuristic Search - Beam Search - Simulated Annealing

  • 1. AI Heuristic Search Beam Search - Simulated Annealing MENOUFIA UNIVERSITY FACULTY OF COMPUTERS AND INFORMATION ALL DEPARTMENTS ARTIFICIAL INTELLIGENCE ‫المنوفية‬ ‫جامعة‬ ‫والمعلومات‬ ‫الحاسبات‬ ‫كلية‬ ‫األقسام‬ ‫جميع‬ ‫الذكاء‬‫اإلصطناعي‬ ‫المنوفية‬ ‫جامعة‬ Ahmed Fawzy Gad ahmed.fawzy@ci.menofia.edu.eg
  • 3. Hill Climbing Vs. Beam Search • Hill climbing just explores all nodes in one branch until goal found or not being able to explore more nodes. • Beam search explores more than one path together. A factor k is used to determine the number of branches explored at a time. • If k=2, then two branches are explored at a time. For k=4, four branches are explored simultaneously. • The branches selected are the best branches based on the used heuristic evaluation function.
  • 4. Beam Search, k=2 Goal – Node K a --- Current Children
  • 5. Beam Search Goal – Node K a --- Current Children a
  • 6. Beam Search Goal – Node K a --- Current Children a a
  • 7. Beam Search Goal – Node K a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 a
  • 8. Beam Search Goal – Node K a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 aBest k Successors
  • 9. Beam Search Goal – Node K a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 aBest k Successors
  • 10. Beam Search Goal – Node K a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f a j
  • 11. Beam Search Goal – Node K a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f a j
  • 12. Beam Search Goal – Node K a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f a j
  • 13. Beam Search Goal – Node K a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f a j 𝒈 𝟑, 𝒆 𝟓 𝒌 𝟎
  • 14. Beam Search Goal – Node K a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f a j 𝒈 𝟑, 𝒆 𝟓 𝒌 𝟎 𝒌 𝟎, 𝒈 𝟑, 𝒆 𝟓
  • 15. Beam Search Goal – Node K a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f a j 𝒈 𝟑, 𝒆 𝟓 𝒌 𝟎 𝒌 𝟎, 𝒈 𝟑, 𝒆 𝟓 Best k Successors
  • 16. Beam Search Goal – Node K a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f a j 𝒈 𝟑, 𝒆 𝟓 𝒌 𝟎 𝒌 𝟎, 𝒈 𝟑, 𝒆 𝟓 Best k Successors
  • 17. Beam Search Goal – Node K a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f a j 𝒈 𝟑, 𝒆 𝟓 𝒌 𝟎 𝒌 𝟎, 𝒈 𝟑, 𝒆 𝟓 k g
  • 18. Beam Search Goal – Node K a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f a j 𝒈 𝟑, 𝒆 𝟓 𝒌 𝟎 𝒌 𝟎, 𝒈 𝟑, 𝒆 𝟓 k g
  • 19. Beam Search Goal – Node K a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f a j 𝒈 𝟑, 𝒆 𝟓 𝒌 𝟎 𝒌 𝟎, 𝒈 𝟑, 𝒆 𝟓 k g GOAL
  • 21. Simulated Annealing Steps 1. Select a start node (root node). 2. Randomly select a child of the current node, calculate a value reflecting how good such child is like 𝒗𝒂𝒍𝒖𝒆 𝒏𝒐𝒅𝒆 = −𝒉𝒆𝒖𝒓𝒊𝒔𝒕𝒊𝒄(𝒏𝒐𝒅𝒆). 3. Select the child if it is better than the current node. Else try another child. A node is better than the current node if 𝚫𝑬 = 𝒗𝒂𝒍𝒖𝒆 𝒏𝒆𝒙𝒕 − 𝒗𝒂𝒍𝒖𝒆 𝒄𝒖𝒓𝒓𝒆𝒏𝒕 > 𝟎. Else if 𝚫𝑬 < 𝟎, then try to find another child. 4. If the child was not better than the current node then it will be selected with probability equal to 𝐩 = 𝒆 𝚫𝑬 𝑻 where 𝚫𝑬 = 𝒗𝒂𝒍𝒖𝒆 𝒏𝒆𝒙𝒕 − 𝒗𝒂𝒍𝒖𝒆[𝒄𝒖𝒓𝒓𝒆𝒏𝒕] 𝑻 is a temperature. 5. Stop if no improvement can be found or after a fixed time.
  • 22. Simulated Annealing Example – 𝑻=10 a Current Children
  • 23. Simulated Annealing Starting from Node a a Current Children
  • 24. Simulated Annealing Starting from Node a a Current Children a
  • 25. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎
  • 26. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎Randomly Select a Child
  • 27. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎Randomly Select a Child
  • 28. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 Check if next node 𝒇 𝟕 is better than current node
  • 29. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 Check if next node 𝒇 𝟕 is better than current node 𝚫𝐄 > 𝟎
  • 30. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 Check if next node 𝒇 𝟕 is better than current node 𝚫𝐄 = 𝐯𝐚𝐥𝐮𝐞 𝐧𝐞𝐱𝐭 − 𝐯𝐚𝐥𝐮𝐞(𝐜𝐮𝐫𝐫𝐞𝐧𝐭)
  • 31. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 Check if next node 𝒇 𝟕 is better than current node 𝚫𝐄 = 𝐯𝐚𝐥𝐮𝐞 𝐧𝐞𝐱𝐭 − 𝐯𝐚𝐥𝐮𝐞(𝐜𝐮𝐫𝐫𝐞𝐧𝐭) 𝚫𝐄 = 𝐯𝐚𝐥𝐮𝐞 𝒇 𝟕 − 𝐯𝐚𝐥𝐮𝐞(𝒂 𝟏𝟎)
  • 32. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 Check if next node 𝒇 𝟕 is better than current node 𝚫𝐄 = 𝐯𝐚𝐥𝐮𝐞 𝐧𝐞𝐱𝐭 − 𝐯𝐚𝐥𝐮𝐞(𝐜𝐮𝐫𝐫𝐞𝐧𝐭) 𝚫𝐄 = 𝐯𝐚𝐥𝐮𝐞 𝒇 𝟕 − 𝐯𝐚𝐥𝐮𝐞(𝒂 𝟏𝟎) 𝐯𝐚𝐥𝐮𝐞 𝒇 𝟕 = −𝐡𝐞𝐮𝐫𝐢𝐬𝐭𝐢𝐜 𝒇 𝟕 = −𝟕
  • 33. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 Check if next node 𝒇 𝟕 is better than current node 𝚫𝐄 = 𝐯𝐚𝐥𝐮𝐞 𝐧𝐞𝐱𝐭 − 𝐯𝐚𝐥𝐮𝐞(𝐜𝐮𝐫𝐫𝐞𝐧𝐭) 𝚫𝐄 = 𝐯𝐚𝐥𝐮𝐞 𝒇 𝟕 − 𝐯𝐚𝐥𝐮𝐞(𝒂 𝟏𝟎) 𝐯𝐚𝐥𝐮𝐞 𝒇 𝟕 = −𝐡𝐞𝐮𝐫𝐢𝐬𝐭𝐢𝐜 𝒇 𝟕 = −𝟕 𝐯𝐚𝐥𝐮𝐞 𝒂 𝟏𝟎 = −𝐡𝐞𝐮𝐫𝐢𝐬𝐭𝐢𝐜 𝒂 𝟏𝟎 = −𝟏𝟎
  • 34. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 Check if next node 𝒇 𝟕 is better than current node 𝚫𝐄 = 𝐯𝐚𝐥𝐮𝐞 𝐧𝐞𝐱𝐭 − 𝐯𝐚𝐥𝐮𝐞(𝐜𝐮𝐫𝐫𝐞𝐧𝐭) 𝚫𝐄 = 𝐯𝐚𝐥𝐮𝐞 𝒇 𝟕 − 𝐯𝐚𝐥𝐮𝐞(𝒂 𝟏𝟎) 𝚫𝐄 = −𝟕 − −𝟏𝟎 = +𝟑
  • 35. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 Check if next node 𝒇 𝟕 is better than current node 𝚫𝐄 = 𝐯𝐚𝐥𝐮𝐞 𝐧𝐞𝐱𝐭 − 𝐯𝐚𝐥𝐮𝐞(𝐜𝐮𝐫𝐫𝐞𝐧𝐭) 𝚫𝐄 = 𝐯𝐚𝐥𝐮𝐞 𝒇 𝟕 − 𝐯𝐚𝐥𝐮𝐞(𝒂 𝟏𝟎) 𝚫𝐄 = −𝟕 − −𝟏𝟎 = +𝟑 ∵ 𝚫𝐄 > 𝟎 ∴ 𝒇 𝟕 will be selected with probability 1
  • 36. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f
  • 37. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f
  • 38. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f 𝒆 𝟓, 𝒈 𝟑
  • 39. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f 𝒆 𝟓, 𝒈 𝟑 Randomly Select a Child
  • 40. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f 𝒆 𝟓, 𝒈 𝟑 Randomly Select a Child
  • 41. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f 𝒆 𝟓, 𝒈 𝟑 Check if next node 𝒆 𝟓 is better than current node
  • 42. Simulated Annealing a --- Current Children a 𝒇 𝟕, 𝒋 𝟖,𝒃 𝟏𝟎 f 𝒆 𝟓, 𝒈 𝟑 Check if next node 𝒆 𝟓 is better than current node 𝚫𝐄 > 𝟎
  • 43. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ Check if next node e₅ is better than the current node: ΔE = value(next) − value(current)
  • 44. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ Check if next node e₅ is better than the current node: ΔE = value(next) − value(current) ΔE = value(e₅) − value(f₇)
  • 45. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ Check if next node e₅ is better than the current node: ΔE = value(next) − value(current) ΔE = value(e₅) − value(f₇) value(e₅) = −heuristic(e₅) = −5
  • 46. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ Check if next node e₅ is better than the current node: ΔE = value(next) − value(current) ΔE = value(e₅) − value(f₇) value(e₅) = −heuristic(e₅) = −5 value(f₇) = −heuristic(f₇) = −7
  • 47. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ Check if next node e₅ is better than the current node: ΔE = value(next) − value(current) ΔE = value(e₅) − value(f₇) ΔE = −5 − (−7) = +2
  • 48. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ Check if next node e₅ is better than the current node: ΔE = value(next) − value(current) ΔE = value(e₅) − value(f₇) ΔE = −5 − (−7) = +2 ∵ ΔE > 0 ∴ e₅ will be selected with probability 1
  • 49. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e
  • 51. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆
  • 52. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ Randomly Select a Child
  • 54. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ Check if next node i₆ is better than the current node
  • 55. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ Check if next node i₆ is better than the current node: is ΔE > 0?
  • 56. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ Check if next node i₆ is better than the current node: is ΔE > 0? ΔE = value(next) − value(current) ΔE = value(i₆) − value(e₅)
  • 57. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ Check if next node i₆ is better than the current node: is ΔE > 0? ΔE = value(next) − value(current) ΔE = value(i₆) − value(e₅) value(i₆) = −heuristic(i₆) = −6
  • 58. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ Check if next node i₆ is better than the current node: is ΔE > 0? ΔE = value(next) − value(current) ΔE = value(i₆) − value(e₅) value(e₅) = −heuristic(e₅) = −5 value(i₆) = −heuristic(i₆) = −6
  • 59. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ Check if next node i₆ is better than the current node: is ΔE > 0? ΔE = value(next) − value(current) ΔE = value(i₆) − value(e₅) ΔE = −6 − (−5) = −1
  • 60. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ Check if next node i₆ is better than the current node ΔE = value(next) − value(current) ΔE = value(i₆) − value(e₅) ΔE = −6 − (−5) = −1 ∵ ΔE < 0 ∴ i₆ can be selected with probability p = e^(ΔE/T) f e₅, g₃ e i₆
  • 61. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ Check if next node i₆ is better than the current node ∵ ΔE < 0 ∴ i₆ can be selected with probability p = e^(ΔE/T) p = e^(−1/10) ≈ 0.905 f e₅, g₃ e i₆
  • 62. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ Because i₆ is the only child of e₅, it will be selected even though its acceptance probability is less than 1. f e₅, g₃ e i₆
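The acceptance rule used here (always accept an improving move, otherwise accept with probability p = e^(ΔE/T)) can be sketched as follows. T = 10 is the temperature the slides plug in; the function name is illustrative.

```python
import math

def acceptance_probability(delta_e, temperature):
    # ΔE > 0: the move improves the state, so accept with probability 1.
    # ΔE <= 0: accept anyway with probability p = e^(ΔE / T),
    # which is how simulated annealing can escape local maxima.
    if delta_e > 0:
        return 1.0
    return math.exp(delta_e / temperature)

# Slide example: ΔE = -1, T = 10  ->  p = e^(-1/10) ≈ 0.905
print(round(acceptance_probability(-1, 10), 3))
```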
  • 63. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ i
  • 65. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ i k₀
  • 66. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ i k₀ Randomly Select a Child
  • 68. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ i k₀ Check if next node k₀ is better than the current node
  • 69. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ i k₀ Check if next node k₀ is better than the current node: is ΔE > 0?
  • 70. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ i k₀ Check if next node k₀ is better than the current node: is ΔE > 0? ΔE = value(next) − value(current)
  • 71. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ i k₀ Check if next node k₀ is better than the current node: is ΔE > 0? ΔE = value(next) − value(current) ΔE = value(k₀) − value(i₆)
  • 72. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ i k₀ Check if next node k₀ is better than the current node: is ΔE > 0? ΔE = value(next) − value(current) ΔE = value(k₀) − value(i₆) value(k₀) = −heuristic(k₀) = 0
  • 73. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ i k₀ Check if next node k₀ is better than the current node: is ΔE > 0? ΔE = value(next) − value(current) ΔE = value(k₀) − value(i₆) value(k₀) = −heuristic(k₀) = 0 value(i₆) = −heuristic(i₆) = −6
  • 74. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ i k₀ Check if next node k₀ is better than the current node: is ΔE > 0? ΔE = value(next) − value(current) ΔE = value(k₀) − value(i₆) value(k₀) = −heuristic(k₀) = 0 value(i₆) = −heuristic(i₆) = −6 ΔE = 0 − (−6) = +6
  • 75. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ i k₀ Check if next node k₀ is better than the current node: is ΔE > 0? ΔE = value(next) − value(current) ΔE = value(k₀) − value(i₆) ΔE = 0 − (−6) = +6 ∵ ΔE > 0 ∴ k₀ will be selected with probability 1
  • 76. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ i k₀ k
  • 78. Simulated Annealing a --- Current Children a f₇, j₈, b₁₀ f e₅, g₃ e i₆ i k₀ k GOAL
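The whole walkthrough (a → f → e → i → k at T = 10) can be replayed in one sketch. The `HEURISTIC` table holds the slide values, and the replay fixes the "random" child picks to the ones the slides happened to make; everything else is an illustrative assumption, not the slides' own code.

```python
import math

# Heuristic values of the nodes visited in the slides.
HEURISTIC = {'a': 10, 'f': 7, 'e': 5, 'i': 6, 'k': 0}

def acceptance(delta_e, temperature):
    # Accept improvements outright; worse moves get p = e^(ΔE/T).
    return 1.0 if delta_e > 0 else math.exp(delta_e / temperature)

def replay(path, temperature=10):
    # Walk the fixed sequence of picks, reporting ΔE and the
    # acceptance probability at each step (value(n) = -heuristic(n)).
    steps = []
    for current, nxt in zip(path, path[1:]):
        delta_e = -HEURISTIC[nxt] - (-HEURISTIC[current])
        steps.append((current, nxt, delta_e,
                      round(acceptance(delta_e, temperature), 3)))
    return steps

for step in replay(['a', 'f', 'e', 'i', 'k']):
    print(step)
# Every move but e -> i improves (ΔE > 0, p = 1.0);
# e -> i has ΔE = -1 and is accepted with p ≈ 0.905.
```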