The document summarizes a presentation on a paper about using multiagent bidirectional-coordinated networks (BiCNet) to develop AI agents that can learn to play combat games in StarCraft. The paper introduces BiCNet, which uses bidirectional RNNs to allow agents to communicate and coordinate their actions. Experiments show BiCNet agents outperform independent and other cooperative agents in different combat scenarios in StarCraft, developing strategies like focus firing and coordinated attacks. Visualizations of agent coordination and additional areas for investigation are also discussed.
Multiagent Bidirectional-Coordinated Nets for Learning to Play StarCraft Combat Games
1. Presentation on “Multiagent Bidirectional-
Coordinated Nets for Learning to Play
StarCraft Combat Games”
Kiho Suh
Modulabs, June 22nd 2017
2. About Paper
• Published on March 29th 2017
(v1)
• Updated on June 20th 2017 (v3)
• Alibaba, University College
London
• https://arxiv.org/pdf/1703.10069.pdf
3. Motivation
• Single-agent AI has achieved strong results (Atari, Baduk, Texas Hold'em).
• Can these successes be extended toward Artificial General Intelligence?
• To get there, AI agents need to learn to interact and collaborate with one another.
• The real-time strategy (RTS) game "StarCraft" serves as the testbed.
• Combat games in "StarCraft" require many units to cooperate in real time while playing.
• The paper proposes a joint learning approach with parameter sharing across agents.
8. Related works
• Jakob Foerster, Yannis M Assael, Nando de Freitas, and
Shimon Whiteson. Learning to communicate with deep
multi-agent reinforcement learning. NIPS 2016.
• Sainbayar Sukhbaatar, Rob Fergus, et al. Learning
multiagent communication with backpropagation. NIPS
2016.
9. Differentiable Inter-Agent Learning (Jakob Foerster et al. 2016)
• Each agent's Q-network is an RNN, and messages are transferred from agent to agent at the next time-step.
• Because the communication channel is differentiable, gradients are also transferred back across time-steps and agents.
• Each agent selects its action conditioned on its own observation and the messages received from the other agents.
11. CommNet (Sainbayar Sukhbaatar et al. 2016)
• A single controller network is applied to the multi-agent task.
• Passing the averaged message over the agent modules between layers.
• Fully symmetric, so it lacks the ability to handle heterogeneous agent types.
13. Stochastic Game of N agents and M opponents
• S: the state space shared by the agents
• Ai: the action space of controlled agent i, i ∈ [1, N]
• Bj: the action space of enemy j, j ∈ [1, M]
• T : S × A^N × B^M → S: the deterministic transition function of the environment
• Ri : S × A^N × B^M → ℝ: the reward function of agent/enemy i, i ∈ [1, N+M]
* The controlled agents and the enemies are assumed to share the same action space.
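The game tuple above can be sketched as a tiny Python interface. All names here (StochasticGame, transition, rewards, step) are illustrative placeholders for the slide's S, T, and Ri, not the paper's code:

```python
from dataclasses import dataclass
from typing import Callable, Sequence

# Placeholders for the slide's S, A_i, B_j.
State = tuple
Action = int
EnemyAction = int

@dataclass
class StochasticGame:
    n_agents: int
    m_enemies: int
    # T : S x A^N x B^M -> S  (deterministic transition function)
    transition: Callable[[State, Sequence[Action], Sequence[EnemyAction]], State]
    # R_i : S x A^N x B^M -> R, one per agent/enemy (N + M functions in total)
    rewards: Sequence[Callable[[State, Sequence[Action], Sequence[EnemyAction]], float]]

    def step(self, s, a, b):
        """Apply the joint actions and return the next state plus all rewards."""
        s_next = self.transition(s, a, b)
        return s_next, [r(s, a, b) for r in self.rewards]
```

A trivial instantiation with N = M = 1 shows how a joint step produces one reward per agent and per enemy.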
14. Global Reward
• A continuous action space is used to reduce the redundancy in modeling the large discrete action space.
• Reward shaping gives each agent more informative feedback than the final win/loss outcome alone.
• Global reward: a single team-level reward is shared by all controlled agents.
15. Definition of Reward Function
• Eq. (1) defines the global reward of the controlled agents. The enemies' global reward is its negation, so the controlled agents' and the enemies' rewards sum to 0. A zero-sum game!
• The reward is computed from the reduced health level for each agent j, summed separately over the controlled agents and over the enemies.
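A minimal sketch of such a health-delta reward, assuming (as the slide suggests but does not spell out) that the controlled side is rewarded for enemy health loss and penalized for its own, with the enemy side receiving the negation:

```python
def global_reward(prev_health, curr_health, n_agents):
    """Hedged sketch of a zero-sum, health-based global reward.

    prev_health / curr_health: health values at two consecutive steps;
    the first n_agents entries are controlled agents, the rest enemies.
    Returns (controlled-side reward, enemy-side reward), which sum to 0.
    """
    deltas = [p - c for p, c in zip(prev_health, curr_health)]  # health lost per unit
    own_loss = sum(deltas[:n_agents])       # reduced health of controlled agents
    enemy_loss = sum(deltas[n_agents:])     # reduced health of enemies
    r_controlled = enemy_loss - own_loss
    return r_controlled, -r_controlled
```

For example, if two controlled agents together lose 2 health while the enemy loses 5, the controlled side receives +3 and the enemy side -3.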
16. Minimax Game
• The controlled agents learn a policy that maximizes the expected sum of discounted rewards.
• The enemies' joint policy minimizes the same expected sum.
• Together these define the optimal action-state value function.
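The slide's equation image is not preserved; reconstructed in standard minimax-Q notation (an assumption on the exact form), with a the controlled agents' joint action and b the enemies', the optimal value satisfies:

```latex
Q^*(s, \mathbf{a}, \mathbf{b}) \;=\; r(s, \mathbf{a}, \mathbf{b})
  \;+\; \gamma \,\max_{\mathbf{a}'} \min_{\mathbf{b}'}
  Q^*\!\bigl(T(s, \mathbf{a}, \mathbf{b}), \mathbf{a}', \mathbf{b}'\bigr)
```

Here T is the deterministic transition function from slide 13 and γ the discount factor.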
17. Sampled historical state-action pairs (s, b) of the
enemies
• Solving minimax Q-learning exactly is hard, so the Q-function in Eq. (2) is approximated by modelling the enemies' behavior.
• Following fictitious play, the enemies' policy bφ is estimated from data.
- The controlled agents treat the enemies as a fixed player and optimize the Q-function in Eq. (2) against them.
- In turn, a deterministic policy bφ is fit to the enemies' behavior by supervised learning.
• The policy network bφ is trained on sampled historical state-action pairs (s, b).
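The alternation just described can be sketched as a loop; every name here (fictitious_play, agent_best_response, fit_enemy_policy) is illustrative, not the paper's API:

```python
def fictitious_play(agent_best_response, fit_enemy_policy, replay, rounds):
    """Hedged sketch of the alternating scheme on slide 17.

    agent_best_response(b_phi, replay): improves the controlled agents'
        Q-function/policy against the fixed enemy policy b_phi (Eq. (2)).
    fit_enemy_policy(replay): supervised fit of a deterministic enemy
        policy b_phi on sampled historical state-action pairs (s, b).
    """
    b_phi = fit_enemy_policy(replay)          # initial estimate of enemy play
    for _ in range(rounds):
        agent_best_response(b_phi, replay)    # best-respond to the fixed b_phi
        b_phi = fit_enemy_policy(replay)      # refit b_phi from history
    return b_phi
```

Each side thus treats the other as stationary within a round, which is the essence of fictitious play.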
19. Eq. (1)
• The global reward of Eq. (1) defines a zero-sum game, but by itself it gives little direct signal for team collaboration, so a local collaboration reward function is considered.
• Each agent then has an explicit incentive to collaborate with nearby agents.
• Eq. (1) is accordingly extended with a per-agent local reward that depends on the agents and enemies around it.
22. Objective as an expectation
• Because the action space is continuous, the objective is optimized by model-free policy iteration.
• The policy parameters are updated along the gradient of Qi, using a vectorized version of the deterministic policy gradient (DPG).
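For reference, the standard (single-agent) deterministic policy gradient of Silver et al. 2014, which the slide's vectorized multi-agent form extends with per-agent indices, reads:

```latex
\nabla_{\theta} J(\theta) \;=\;
  \mathbb{E}_{s \sim \rho^{\mu}}\!\Bigl[
    \nabla_{\theta}\, \mu_{\theta}(s)\;
    \nabla_{a} Q(s, a)\big|_{a = \mu_{\theta}(s)}
  \Bigr]
```

Here μθ is the deterministic policy and ρ^μ the state distribution it induces; the Q-gradient with respect to the action is chained through the policy.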
28. Design of the two networks
• Parameters are shared across agents, so the number of parameters stays constant as agents are added or removed.
• As a result, the number of agents can differ between training and test.
• A bi-directional RNN connects the agents and serves as their communication channel.
• Full dependency among agents, because the gradients from all the actions in Eq. (9) are efficiently propagated through the entire networks.
• Not fully symmetric: certain social conventions and roles are maintained by fixing the order in which the agents join the RNN, which resolves any possible tie between multiple optimal joint actions.
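A minimal sketch of the bidirectional scan over the agent dimension, assuming shared weights and a fixed agent order; the single-tanh cells and weight shapes are illustrative, not the paper's actual architecture:

```python
import numpy as np

def bicnet_policy(obs, w_fwd, w_bwd, w_out):
    """Hedged sketch: a bidirectional RNN scan over the *agent* dimension.

    obs: (n_agents, d) array of per-agent observations.  The parameters
    w_fwd, w_bwd, w_out are shared by all agents (slide 28), and the fixed
    agent order supplies the "social convention" that breaks ties.
    """
    n, d = obs.shape
    h_f = np.zeros(d)
    fwd = []
    for i in range(n):                    # forward pass: agent 0 -> n-1
        h_f = np.tanh(obs[i] + w_fwd @ h_f)
        fwd.append(h_f)
    h_b = np.zeros(d)
    bwd = [None] * n
    for i in reversed(range(n)):          # backward pass: agent n-1 -> 0
        h_b = np.tanh(obs[i] + w_bwd @ h_b)
        bwd[i] = h_b
    # Each agent's action sees both directions, giving full dependency
    # among agents when gradients flow back through the scans.
    return np.stack([np.tanh(w_out @ np.concatenate([fwd[i], bwd[i]]))
                     for i in range(n)])
```

Because only w_fwd, w_bwd, and w_out exist, the same policy runs unchanged for any number of agents, matching the train/test flexibility noted above.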
30. Experiments
• easy combats
- {3 Marines vs. 1 Super Zergling}
- {3 Wraiths vs. 3 Mutalisks}
• difficult combats
- {5 Marines vs. 5 Marines}
- {15 Marines vs. 16 Marines}
- {20 Marines vs. 30 Zerglings}
- {10 Marines vs. 13 Zerglings}
- {15 Wraiths vs. 17 Wraiths}
• heterogeneous combats
- {2 Dropships and 2 Tanks vs. 1 Ultralisk}
(unit icons: Marine, Zergling, Wraith, Mutalisk, Dropship, Ultralisk, Siege Tank)
all images are from
http://starcraft.wikia.com/wiki/
31. Baselines
• Independent controller (IND): each agent is controlled independently, with no communication between agents.
• Fully-connected (FC): the agents communicate through a fully-connected network.
• CommNet: multi-agent communication by averaging messages across the agent modules.
• GreedyMDP with Episodic Zero-Order Optimization (GMEZO): conducting collaborations through a greedy update over MDP agents, as well as adding episodic noise in the parameter space for exploration.
32. Action space for each individual agent
• 3 dimensional real vector
• 1st dimension: ranging from -1 to 1
- Greater than or equal to 0, agent attacks
- otherwise, agent moves
• 2nd and 3rd dimension: degree and distance, collectively
indicating the destination, relative to the agent's current
location, to which it should move or attack
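The decoding above can be sketched as follows; the mapping of degree and distance onto game units is an assumption here, since the slide does not give the scaling:

```python
import math

def decode_action(vec):
    """Hedged sketch of slide 32's 3-D continuous action decoding.

    vec = (attack_or_move, degree, distance), each component in [-1, 1].
    The exact conversion of degree/distance to map units is assumed.
    """
    kind = "attack" if vec[0] >= 0 else "move"   # 1st dim picks the command
    angle = vec[1] * math.pi                     # map [-1, 1] to [-pi, pi]
    dist = (vec[2] + 1) / 2 * 10.0               # assumed max range: 10 units
    dx, dy = dist * math.cos(angle), dist * math.sin(angle)
    return kind, (dx, dy)                        # offset from current location
```

For example, a vector with a non-negative first component decodes to an attack command aimed at the indicated offset; a negative one decodes to a move.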
34. Simple Experiment
• tested on 100 independent games
• skip frame: how many frames to skip between the agents' actions
• the highest winning rate is obtained when batch_size is 32 (highest mean Q-value after 600k training steps) and skip_frame is 2 (highest mean Q-value between 300k and 600k steps)
35. Simple Experiment
• Letting 4~6 agents work together as a group can efficiently control individual agents while
maximizing damage output.
• Fig. 3: a group size of 4~5 achieves the best performance.
• Fig. 4: the convergence speed, shown by plotting the winning rate against the number of training episodes.
36. Performance Comparison
• BiCNet is trained over 100k steps
• measuring the performance as the average winning rate on 100 test
games
• when the number of agents goes beyond 10, the margin of
performance between BiCNet and the second best starts to increase
37. Performance Comparison
• In “5M vs. 5M”, the key factor to win is to “focus fire” on the weak.
• As BiCNet has a built-in design for dynamic grouping, a small number of agents (such as
“5M vs. 5M”) does not suffice to show the advantages of BiCNet on large-scale
collaborations.
• For “5M vs. 5M”, BiCNet needs only 10 combats before learning the idea of “focus
fire,” achieving an 85% win rate, whereas CommNet needs at least 50 episodes and
reaches a much lower winning rate.
38. Visualization
• “3 Marines vs. 1 Super Zergling” when the coordinated cover attack has been
learned.
• Values were collected from the last hidden layer of the well-trained critic network over 10k
steps and visualized with t-SNE.
39. Strategies to Experiment
• Move without collision
• Hit and run
• Cover attack
• Focus fire without overkill
• Collaboration between heterogeneous agents
41. Coordinated moves without collision (3 Marines
(ours) vs. 1 Super Zergling)
• Early in training, the agents move in a rather uncoordinated way; in particular, when two agents are close to each other, one agent may unintentionally block the other's path.
• After 40k steps, in around 50 episodes, the number of collisions drops dramatically.
43. Hit and Run tactics (3 Marines (ours) vs. 1 Zealot)
Move the agents away when under attack, and fight back when they
are safe again.
44. Coordinated Cover Attack (4 Dragoons (ours) vs. 2
Ultralisks)
• Let one agent draw fire or attention from the enemies.
• In the meantime, the other agents can take advantage of the time or distance
gap to inflict more damage.
46. Focus fire without overkill (15 Marines (ours) vs. 16
Marines)
• How to efficiently allocate the attacking resources becomes important.
• The grouping design in the policy network serves as the key factor for BiCNet to learn
“focus fire without overkill.”
• Even as the number of our units decreases, each group is dynamically reassigned
so that 3~5 units focus their attacks on the same enemy.
47. Collaborations between heterogeneous agents (2
Dropships and 2 tanks vs. 1 Ultralisk)
• StarCraft has a wide variety of unit types.
• Collaboration between them can be easily implemented in BiCNet.
48. Further to Investigate after this paper
• Strong correlation between the specified reward and the
learned policies
• How the policies are communicated over the networks
among agents
• Whether there is a specific language that may have
emerged
• Nash equilibrium when both sides are played by deep
multi-agent models.