Download the full presentation here: https://www.guerrilla-games.com/read/beyond-killzone-creating-new-ai-systems-for-horizon-zero-dawn
Abstract: Having established AI tech in your studio is great… until your next game adds demands well over and above what your existing systems were designed to handle. This session describes the changes that Guerrilla made to switch from supporting a single human enemy type in closed corridor spaces to a game with more than 25 wildly different characters in a large open world. Specifically, the lecture explains how they changed the navigation and animation systems that make the characters in Horizon Zero Dawn move realistically. Additionally, they detail how these changes impacted their workflow throughout the project.
Hello everybody, I want to thank you all for being here, I really appreciate it. My name is Julian Berteling, and I am an AI Programmer at Guerrilla Games. I'm here today to talk about some of the changes that we made for our latest game, Horizon Zero Dawn. But before I get into that, I want to start out with this guy.
How do we make these characters move naturally? Prior to Horizon we only had experience making shooters, where human-type characters would only occasionally move from cover to cover, shortly after which they would get shot in the face by the player. So animation fluidity and natural movement were less of a priority. So when we first started prototyping gameplay for Horizon…
We ran into some issues. [CLICK] That Lego-like creature in the distance is what would eventually become the Thunderjaw. [WAIT] And these are early versions of the Grazer, also very early in development. Now, if you focus on how they move around, you can see that they avoid the boulders, but the movement doesn't look natural at all. There is a lot of footsliding, and snappy rotations of the entire body.
Now, this was obviously prototype code, but the philosophy back then was that AI had more of a dictating role in movement. The AI would set out an exact trajectory, and the character control code and animation would try to make it look as good as they could. That gave us very strict, optimized movement, but it was not natural at all. So we had to make a lot of changes across multiple AI systems to support a more natural way of controlling characters.
The systems that needed to be changed divide into two main categories. The first is Navigation and Planning, [CLICK] because we already need to take the character into account when planning movement. This category covers subjects such as the navigation mesh, NPC avoidance and path following. After that, I will discuss the changes that allowed for more natural locomotion. [CLICK] We improved our animation sampling system, built a new melee combat system, and made big changes to how characters look around. So first up is our navigation mesh. But to understand the decisions we made, you have to understand where we are coming from.
Back during Killzone, we used waypoints as our navigation representation. The graph was generated offline, and used for pathfinding, position picking and the cover map. Each waypoint had a variable radius that represented the safe space where characters were allowed to stand; the radius would shrink where it intersected with an obstacle. Representing safe space this way led to some issues, even before we started working on Horizon.
Because large clusters of waypoints were connected by only a small number of links, and because the safe space shrank in a uniform way, paths could easily be blocked.
We also knew the world of Horizon was going to be vast and dynamic, so the disk space required for the navigation data became a serious concern. [CLICK] Multiple designers would be working on the same environment, creating workflow issues in keeping the navigation data up to date. A lot of obstacles are placed dynamically, [CLICK] so generating the data offline was not an option anymore. We also knew that the navigational representation would have to differ between a number of characters. [CLICK] A Watcher can move through a forest with ease, but the big Thunderjaw should not. If it needs to reach the other side, it is more natural for it to go around, rather than clip through the trees.
And so we switched to runtime navigation mesh generation. We use the Recast and Detour library as the base, but we spent a lot of time and effort adjusting and optimizing it in order to be able to use it at runtime, and we fixed issues with dealing with a large and dynamic open world. But we also needed to support the large physical differences between characters, so we don't generate just one mesh [CLICK] but up to six: one each for the small, medium and large robots, one over water for robots that can swim, one for human-type characters, and one for the player's mount.
These navmesh objects range from danger and stealth areas to destructible obstacles. They are represented on the navmesh itself and can add cost to the polygons they overlap, influencing navigational queries. This influence is situational and depends on a character's current behavior, because what is an obstacle for one character might not be considered as such by another.
So to summarize: we switched from waypoints to a navmesh, which allowed us to more accurately describe the navigational safe space available to our characters. [CLICK] We also switched from offline to runtime generation, to support the large and dynamic open world. [CLICK] And in order to enhance the sense of realism, we had to take the physical differences of our characters into account, so we generate multiple meshes to support the large differences between characters. [CLICK] We also set up our obstacles in such a way that the characters themselves can decide how they should be taken into account. [CLICK] So that explains the avoidance of static obstacles.
Next up is dynamic obstacle avoidance. Back in Killzone, we had far fewer characters inhabiting the same space than we do in Horizon. [CLICK] And because the game was a cover-based corridor shooter, characters would only move short distances between cover, if necessary. [CLICK] If characters were moving, however, they were forced to stay within their pre-planned corridor path, because we didn't have a navmesh yet. [CLICK] So our dynamic avoidance implementation back then was quite simple.
One requirement that is a bit unique to our case is that our characters are more elongated than humans, and so they can't be accurately described by a circle within a 2D avoidance solution. Using a circle that still encompasses the entire character means some characters claim a very large clearance around themselves, as you can see here with these two Watchers.
So we ended up using Velocity Obstacles, which is a relatively simple, tried and proven solution for dynamic avoidance. [CLICK] It also happens to be shape agnostic, as I will explain in the next slide. [CLICK] It can however be quite expensive when dealing with a lot of objects, so we looked at different optimizations, but they never worked well with rectangular obstacles. [CLICK] So our solution is to only avoid the 5 most imminent threats to the character.
These threats are picked not based on distance, but based on a broad approximation of the time till collision; only the 5 obstacles with the lowest time are taken into account.
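As a rough illustration of this selection step, here is a small sketch in Python. The names and the closing-speed approximation are mine, not Guerrilla's actual implementation; the idea is simply to rank obstacles by an approximate time to collision and keep the lowest few.

```python
import math

def pick_imminent_threats(agent_pos, agent_vel, obstacles, max_threats=5):
    """Rank moving obstacles by a broad time-to-collision estimate and
    keep only the most imminent ones. 'obstacles' is a list of
    (pos, vel, radius) tuples (an illustrative representation)."""
    scored = []
    for pos, vel, radius in obstacles:
        rel_pos = (pos[0] - agent_pos[0], pos[1] - agent_pos[1])
        rel_vel = (vel[0] - agent_vel[0], vel[1] - agent_vel[1])
        separation = math.hypot(*rel_pos)
        gap = separation - radius
        # Closing speed: how fast the gap shrinks along the separation axis.
        closing = -(rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]) / max(separation, 1e-6)
        if closing <= 0.0:
            continue  # moving apart: never considered a threat
        scored.append((max(gap, 0.0) / closing, (pos, vel, radius)))
    scored.sort(key=lambda entry: entry[0])
    return [obstacle for _, obstacle in scored[:max_threats]]
```

Note how a nearby obstacle that is moving away is skipped entirely, while a distant but fast-approaching one can still rank among the top threats.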
When building the velocity obstacles, we take the obstacle shapes and use a Minkowski addition to combine the two into one polygon. This is used as input for the creation of the velocity obstacles, and this way we can support both rectangular and circular character shapes.
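For convex shapes, the Minkowski sum can be sketched very compactly: sum every vertex pair and take the convex hull of the results. This is a simple (not the fastest) way to do it, shown here as an assumption-laden illustration rather than the shipped code.

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull (counter-clockwise order)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def minkowski_sum(poly_a, poly_b):
    """Combine two convex shapes into one polygon by summing every
    vertex pair and hulling the result. Works for the rectangles and
    circle approximations used in 2D avoidance."""
    sums = [(ax + bx, ay + by) for ax, ay in poly_a for bx, by in poly_b]
    return convex_hull(sums)
```

Summing a unit square with itself, for example, yields a square of side two, which is exactly the clearance region the velocity obstacle construction needs.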
After the creation of all velocity obstacles, we generate a set of candidate velocities that fall outside of these obstacles. These are scored with a scoring function that weighs numerous factors: how much the picked velocity deviates from the desired direction, how much it deviates from the desired speed, whether it lies within our turning direction, and other factors as well.
This target velocity is then used as input to our path smoothing. When the character needs to avoid, it turns away from its path, but it will always resume following it in a smooth manner as soon as the need to avoid is resolved.
Looking back, we were pretty happy with the results, especially for robots, which took the optimal path to avoid. This looked convincing. We were less happy with human characters, however; the result looked too robotic. [CLICK] We looked at lots of footage of humans avoiding each other. The conclusion: people are actually quite bad at avoiding. They make less than perfect decisions, never take the optimal path, and sometimes fail to avoid at all. On small errors we stop and switch directions, or brush each other's shoulders. [CLICK] Optimal isn't always natural, so we are researching other solutions, such as using machine learning to create a model that imitates real-life avoidance patterns, and embracing failure rather than avoiding it: allow small errors when avoiding, but perform graceful recoveries. Emulating this should improve the sense of realism. [CLICK] This explanation was a bit short, but my colleague Carles Ros Martinez was responsible for the avoidance system, [CLICK] and he is writing an extensive article on it in the upcoming Game AI Pro book.
Optimal versus natural is also something we had to deal with concerning path following. We used a pretty standard approach: A* found the initial path on the waypoint graph, and a string pulling algorithm, in combination with the waypoint safe radius, gave us the shortest path possible. [CLICK video] This path was then traversed precisely. For a turn we used a canned animation that matched the angle, or we simply rotated the character's body in code.
So we implemented Bezier path smoothing. The path is smoothed so that it has tangent continuity, meaning the direction of the path changes continuously, making it easier for all characters to follow. This method has been tried and done before, and there is a lot of literature on it, both online and in game AI books, so it made sense for us to go this route and implement it as well. For our approach we used cubic Bezier curves.
A cubic Bezier curve consists of 4 control points: the first and last dictate the start and end, and the other 2 control the tangency along the curve; these are called the tangent handles. By placing a curve on each path segment and aligning the tangent handles with the previous and next segments, you get a continuous smooth path.
The first curve has its first tangent handle directed either along the character's current movement direction, or rotated towards the direction provided by the avoidance system. This way it smooths out the current velocity towards the next point in the path. The length of that first handle was based on the entity's velocity, but could be adjusted using an optional modifier. We set it higher for larger creatures, which caused bigger turns, because the turn radius of larger creatures is often larger as well.
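To make the handle placement concrete, here is a small sketch. The function names and the fixed sample count are illustrative assumptions; the key idea is that the two inner control points are placed along the incoming and outgoing directions, which is what gives tangent continuity at the joins.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(u**3*a + 3*u*u*t*b + 3*u*t*t*c + t**3*d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def smooth_segment(start, end, dir_in, dir_out, handle_len):
    """Build one smoothed path segment. The first handle points along
    the character's incoming direction, the second back along the
    outgoing direction of the next segment. 'handle_len' stands in for
    the optional length modifier mentioned above."""
    h0 = (start[0] + dir_in[0]*handle_len, start[1] + dir_in[1]*handle_len)
    h1 = (end[0] - dir_out[0]*handle_len, end[1] - dir_out[1]*handle_len)
    # Sample the curve into a polyline the follower can traverse.
    return [cubic_bezier(start, h0, h1, end, i / 16.0) for i in range(17)]
```

A longer `handle_len` pushes the inner control points further out, producing the wider, shallower turns used for large creatures.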
In order to make sure the path was safe to traverse, we would clamp the tangent handles to the NavMesh, and then check that the line between the handles was also unobstructed. This in essence creates a safe hull on the navmesh, making it very likely that the resulting trajectory contained within that hull was safe as well.
The smoothed trajectory deviated a lot from the originally planned path, which makes it more likely to fall outside of the navigation mesh. We did add the clamping I described earlier, but that only clamps the length of the tangent handles. If the line between 2 tangent handles was blocked, we couldn't ensure the trajectory was safe, and the smoothing would thus fail, forcing the character to perform a quick turn on the spot, or to stop and do a directional start.
Another issue was taking a character's maximum turn curvature into account, which resulted in footsliding. Calculating the maximum curvature of a Bezier curve requires a numeric solution, and the same goes if you want to limit that curvature. Each iteration would then have to be checked on the NavMesh, which is less than optimal, especially when you consider large groups. And so we decided not to do that.
Because of this, characters would perform turns that they did not have the right animation for, which caused some snappy turns and footsliding. All of these points led us to try and find a new path following solution that would fix these issues.
But before I talk about that, first a disclaimer: this did not make it into Horizon Zero Dawn. It wasn't finished on time, so we shipped with the Bezier-type smoothing. But I still wanted to share it with all of you, because it is an improvement over Bezier-type smoothing and you might want to use it as well. Our issues were due to not being able to limit the system to a maximum curvature. To fix that, we looked for real-world examples of a need for smooth, easy to calculate paths that are curvature constrained. Which, surprisingly, led us to railroads!
A train moving down a track can only turn at a certain maximum curvature, and that curvature can only be reached by slowly turning into it. Suddenly changing from a straight path to a circular curve would cause a lot of stress and discomfort on the vehicle. The same goes for our characters: big creatures are unable to instantly switch their curvature from 0 to max; instead they lead into and out of a turn. So we use the same type of transition curves as train tracks, which are called clothoids.
So, a quick primer on clothoids. [CLICK] They are also known as Euler or Cornu spirals. [CLICK] Their main attribute is that the curvature increases linearly over distance, so the circle radius, which is the inverse of the curvature, decreases over distance. We take a piece starting at 0 curvature and fit it along our path. [CLICK] By scaling the curve we can adjust its sharpness, which defines how quickly a character can reach its maximum curvature.
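Since the clothoid has no closed-form position, a sketch can simply integrate it numerically. With curvature k(s) = sharpness · s, the heading is theta(s) = sharpness · s² / 2, and stepping along the arc in small straight segments approximates the spiral. The step-based integration below is my own simplification for illustration, not production code.

```python
import math

def clothoid_points(sharpness, length, steps=100):
    """Sample an Euler spiral starting at the origin with zero curvature
    and heading along +x. Curvature grows linearly with arc length:
    k(s) = sharpness * s, so the heading is theta(s) = sharpness * s^2 / 2."""
    pts = [(0.0, 0.0)]
    x = y = 0.0
    ds = length / steps
    for i in range(steps):
        s = (i + 0.5) * ds                  # arc length at the step midpoint
        theta = 0.5 * sharpness * s * s     # accumulated heading
        x += math.cos(theta) * ds
        y += math.sin(theta) * ds
        pts.append((x, y))
    return pts
```

With sharpness 0 this degenerates to a straight line; increasing the sharpness makes the spiral bend away from the start direction sooner, which is exactly the knob that controls how quickly a character reaches its maximum curvature.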
Clothoid path following works like this. Every frame, we start by calculating the default turn circle based on the character's desired speed, maximum curvature and maximum sharpness. It is actually two circles: the blue circle shows the maximum turn radius, and the green circle shows where the turn is started. The yellow curve shows the trajectory: starting at zero curvature, it enters the green circle and increases its curvature until it matches the inner circle, including its tangency. This default turn circle provides a solution for any given input and exit direction.
The next step selects the target to move towards: either the next point in the path, or a direction in case the character is avoiding. If avoiding, we always turn at maximum sharpness, because we want to turn as quickly as possible in order to avoid other characters. If the target is a point, however, we have to calculate the right sharpness.
That is because we don't always need to go to our maximum curvature. The clothoid transition curve itself already applies a rotation, so we can overshoot the target direction. If that's the case, we lower the sharpness so that the total amount of rotation matches what is needed. The entire turn is pre-calculated, so we check whether it is safe to perform on the navigation mesh, and then simply follow it.
The thing is, whatever smoothing you apply, all of them make the trajectory deviate from the originally planned point path. And as I explained, the more the trajectory deviates, the more likely it is to leave the navigation safe space. In this example, a character traversing the path starting at the bottom can follow the path smoothly until the last segment, where the smoothing would fail: it would need to stop, turn, and then start again. This obviously does not look natural. A way to minimize this is to not pass through the points of the path at all, and to "cut the corners" instead. The only problem is that when we plan our path, we use a string pulling algorithm to convert the NavMesh polygon path into a point path the character can actually follow, and a standard string pulling algorithm will hug the corners of the navigation safe space, leaving no room for this corner cutting behavior.
We therefore coupled the clothoid path following behavior with an adjustment to the string pulling algorithm that tries to add a certain offset from the navmesh wall. [CLICK] This allows the character to cut the corners of its path, and follow the path segments almost exactly the rest of the time. [CLICK] The algorithm is based on the simple funnel algorithm. [CLICK] Some of you might recognize this one, but here is how it works.
We start with our planned polygon path. Starting at the start position of the character, we create a funnel towards the first points that form the portal to the next polygon.
We then look at each following portal point and shrink the funnel each time one of the portal points lies within the funnel.
As soon as one side of the funnel crosses the other, we add the other funnel point as a point in our final point path, and restart the process from there.
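The three steps above are essentially the classic "simple stupid funnel" algorithm, and a compact sketch of it looks like this. Sign conventions and the degenerate-case handling here are my own simplifications; this is the standard tight string pull, i.e. the corner-hugging behavior that the offset variant described next relaxes.

```python
def triarea2(a, b, c):
    """Twice the signed area of triangle abc; positive if c is left of a->b."""
    return (b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])

def string_pull(portals, start, end):
    """Simple funnel string pulling over (left, right) portal pairs,
    returning the tight point path that hugs the corners."""
    portals = [(start, start)] + list(portals) + [(end, end)]
    path = [start]
    apex = left = right = start
    apex_i = left_i = right_i = 0
    i = 1
    while i < len(portals):
        pl, pr = portals[i]
        # Try to narrow the funnel from the right.
        if triarea2(apex, right, pr) >= 0.0:
            if apex == right or triarea2(apex, left, pr) <= 0.0:
                right, right_i = pr, i
            else:
                # Right crossed over left: left corner joins the path.
                path.append(left)
                apex, apex_i = left, left_i
                left = right = apex
                left_i = right_i = apex_i
                i = apex_i + 1
                continue
        # Try to narrow the funnel from the left.
        if triarea2(apex, left, pl) <= 0.0:
            if apex == left or triarea2(apex, right, pl) >= 0.0:
                left, left_i = pl, i
            else:
                path.append(right)
                apex, apex_i = right, right_i
                left = right = apex
                left_i = right_i = apex_i
                i = apex_i + 1
                continue
        i += 1
    path.append(end)
    return path
```

In a straight corridor this returns just the start and end points; when the portals bend around a corner vertex, that vertex is emitted as a path point, which is the "hugging" the adjusted algorithm offsets away from.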
Now, our adjusted algorithm works the same, right up until we get here. When one side of the funnel crosses the other, we don't just add that point to the path. Instead, we first add an offset in the combined direction from the start of the funnel to the portal point and the next wall, like so.
The distance of the offset is an approximation based on the change of direction of those vectors shown in blue, and the standard turn circle of the character traversing the path. [CLICK] This offset is clamped against the opposite walls, [CLICK] so that the final point path will always stay inside the originally planned polygon path.
And the rest is the same: the adjusted point with the offset is added to the final point path, and the funnel restarts from there. The resulting point path no longer hugs the walls, allowing the character traversing the path to cut the corners.
So, that covers the navigation portion of this talk. I've explained how we take the character and its natural movement into account when generating the navigation mesh, as well as in our dynamic obstacle avoidance and our path following system. [CLICK] Now it's time to switch to the execution part of natural movement.
In order to support more natural locomotion, the AI needed to become better informed about what animations each character has available, and what their motion looks like. Our animators spend a lot of time making very expressive full-body animations, and without any information about their motion, we would never be able to play them correctly. [CLICK] For the path following I just talked about, for example, we need to know the turn curvature limits of our characters, and these might differ per stance, speed and possibly other variables. So for Horizon, we improved our locomotion sampling system in order to support these runtime queries.
The way our locomotion sampling works is that we run the animation system during our offline content conversion process. We then activate and sample all relevant animation blend trees, [CLICK] and capture all relevant data, such as the root bone transformations, animation duration and animation meta-data. [CLICK] This data is stored in what we call the motion table. [CLICK] At runtime, we query that same table to get a prediction of what the character's motion would look like for any given scenario. So let's look at an example of a blend tree, and how we sample it for the motion table.
Here we have an animation blend tree that blends together different walk cycle animations. This example supports a blend of 2 speeds, 1.5 meters per second and 3.5 meters per second, and it also supports 3 directions: left, forward and right. However, we only have specific animations for those directions at the speed of 1.5 meters per second. These two blends are controlled by 2 animation variables.
Constructing the motion table works as follows. We take a given animation blend tree and create a tree-like data structure, where each layer represents a used animation variable, and each branch to the following layer is a single value used by a blend node. In this example, we have two layers and a total of 4 possible animation variable combinations. We perform the offline sampling for these 4 blend combinations, and store the results in the table.
We can then query the motion table at runtime for any given combination of the supported animation variables. If one of the animation variable values lies between the values assigned to the branches, we simply blend the results of the child nodes, just like the animation system would do. This system is really useful for making our AI more informed about which animations a character has available and what their motion looks like, making its behavior all the more realistic. The usefulness only increases when you're dealing with a large set of characters, because it removes the need to keep track of the animation content yourself.
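A toy version of such a layered lookup-with-blending might look like the following. The class name, the two-layer speed/direction structure, and the scalar sample values are illustrative assumptions mirroring the walk-cycle example above, not Guerrilla's data format.

```python
from bisect import bisect_left

class MotionTable:
    """Offline-sampled motion data stored per animation-variable
    combination, queried at runtime with linear blending between
    sampled values, just like the runtime blend nodes would do."""
    def __init__(self):
        # {speed: {direction: sampled_value}}, mirroring the blend tree layers.
        self.samples = {}

    def add(self, speed, direction, value):
        self.samples.setdefault(speed, {})[direction] = value

    def query(self, speed, direction):
        return self._blend_layer(
            sorted(self.samples), speed,
            lambda s: self._query_direction(self.samples[s], direction))

    def _query_direction(self, layer, direction):
        return self._blend_layer(sorted(layer), direction, lambda d: layer[d])

    @staticmethod
    def _blend_layer(keys, value, resolve):
        """Blend the results of the two sampled keys bracketing 'value';
        clamp outside the sampled range."""
        if value <= keys[0]:
            return resolve(keys[0])
        if value >= keys[-1]:
            return resolve(keys[-1])
        hi = bisect_left(keys, value)
        if keys[hi] == value:
            return resolve(value)
        lo = hi - 1
        t = (value - keys[lo]) / (keys[hi] - keys[lo])
        a, b = resolve(keys[lo]), resolve(keys[hi])
        return a + (b - a) * t
```

Because a speed layer that only has a forward sample simply clamps its direction query, the lookup degrades gracefully when, as in the example, directional animations only exist at one speed.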
Another part of the path following that also uses the motion table is our stop prediction. Stopping exactly at the right point using only distinct animations can still be a difficult problem to solve. [CLICK] Most of the time it requires multiple animations of different lengths, blended to match the distance to the destination, or you add or subtract some translation when using a single stop animation. [CLICK] And that is something we do as well: we use the motion table to figure out the best point to trigger the stop, the one that requires the least amount of adjustment to the source animation. This is something we used to do in Killzone as well, where it was very important that characters end exactly at their destination position, because doing so versus not doing so could mean the difference between being in or out of cover. But in Horizon, we have a lot of situations where we don't really care about ending exactly at the end position, when moving in idle for instance. In those cases, it's better to adjust the destination to match the stop animation, instead of the other way around.
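The core of that selection reduces to a tiny comparison, sketched here with an assumed list of per-animation travel distances (as the motion table would supply). The returned leftover is the adjustment that either gets warped into the animation, or, in the relaxed cases, applied to the destination instead.

```python
def pick_stop_animation(distance_to_goal, stop_travel_distances):
    """Pick the stop animation whose sampled travel distance best matches
    the remaining distance. Returns (index, leftover adjustment)."""
    best = min(range(len(stop_travel_distances)),
               key=lambda i: abs(distance_to_goal - stop_travel_distances[i]))
    return best, distance_to_goal - stop_travel_distances[best]
```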
So, having a system such as the motion table allowed us to be better informed about what animations we have available, and what their motion looks like. Its usefulness only increases when dealing with a large set of characters, [CLICK] because a character's locomotion capabilities can automatically be enabled or disabled depending on whether or not the right animations are available. [CLICK] This gives control to the animators, and frees them to make the full-body animations that they want.
One system where the motion table is also used extensively is the melee combat system. In the end, we had over 150 unique attacks. But we started out with the Killzone melee system. [CLICK] And because that game was a shooter, it only supported one melee attack, which looked like this. [CLICK] And that was it! [CLICK] So going into Horizon, we suddenly had to add support for a lot of attacks.
The first stage is the trigger for the attack. We use different preconditions to check if, and which, attack to perform. One that we always use checks whether the character would actually be able to hit the target. This works by placing a trigger volume around the position where the character would actually be able to do damage. [CLICK] The target has to be inside this volume to trigger the attack. But instead of using the current position of the target, we use the motion table to get the exact time it takes the character to reach the point of impact in its attack animation, and use that time with the average velocity of the target to extrapolate its position. If we didn't do this, we would constantly trigger attacks that end up behind the target when it is moving.
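The extrapolation itself is a simple linear prediction, sketched here with a circular trigger volume for brevity (the actual volume shape and function names are assumptions; the time-to-impact would come from the motion table).

```python
def predict_impact_position(target_pos, target_avg_vel, time_to_impact):
    """Extrapolate where the target will be when the attack animation
    reaches its damage frame (linear prediction from average velocity)."""
    return (target_pos[0] + target_avg_vel[0] * time_to_impact,
            target_pos[1] + target_avg_vel[1] * time_to_impact)

def should_trigger_attack(trigger_center, trigger_radius,
                          target_pos, target_avg_vel, time_to_impact):
    """Trigger only if the *predicted* position lies inside the trigger
    volume placed where the attack deals damage."""
    px, py = predict_impact_position(target_pos, target_avg_vel, time_to_impact)
    dx, dy = px - trigger_center[0], py - trigger_center[1]
    return dx * dx + dy * dy <= trigger_radius * trigger_radius
```

Checking the current position instead of the predicted one is exactly the failure mode described above: a moving target walks out of the volume before the damage frame lands.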
Next is the wind-up part of the attack, where the character telegraphs, through animation and UI elements, that it is about to attack. During this time we allow small code-driven rotations so that the character aligns the damage position towards the predicted target position. [CLICK] The duration is defined in animation metadata, so it can easily be adjusted by the animators themselves, and it is again looked up using the motion table.
And then we attack! But we still have one problem: we have only rotated so that the damage position lies towards the predicted target position, but there can still be a lot of distance in between. So we somehow have to bridge that gap. Most attacks consist of one full-body animation instead of a blend with a variable distance, and creating such blends would be a lot of extra work for 150 attacks. We therefore adjust the root bone trajectory while performing the animation. We call this animation warping.
To warp an animation we have to analyze its trajectory, in order to efficiently spread out the adjustments we want to make. [CLICK] The delta translation and rotation is then partially added, frame by frame, over either the entire animation duration or only part of it.
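A one-dimensional sketch of that distribution step: weight each frame's share of the corrective delta by how much the root bone already moves in that frame, so larger adjustments land where the motion can hide them. The weighting scheme and the 1D translation-only simplification are my assumptions for illustration.

```python
def warp_translation(frame_translations, total_delta, warp_start=0):
    """Spread a corrective translation delta over an animation's frames,
    weighted by the root motion already present in each frame.
    'frame_translations' holds the per-frame root translation distances."""
    span = frame_translations[warp_start:]
    total_motion = sum(span)
    if total_motion <= 0.0:
        # No root motion to hide the warp in: distribute evenly instead.
        weights = [1.0 / len(span)] * len(span)
    else:
        weights = [d / total_motion for d in span]
    warped = list(frame_translations)
    for i, w in enumerate(weights):
        warped[warp_start + i] += total_delta * w
    return warped
```

Passing a nonzero `warp_start` restricts the adjustment to the tail of the animation, matching the "entire animation duration, or only part of it" choice mentioned above.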
How this warping system works exactly was explained last year by my colleague Paul van Grinsven, so if you want to find out more about that, go into the GDC Vault and look it up. Another good example of this technique was explained by Jake Campbell here at the AI Summit; he called it delta correction, and it is quite similar to what we implemented.
So in short, our melee combat system makes extensive use of the motion table in order to keep track of each attack's motion. [CLICK] This data is used in every stage of the attack. [CLICK] It allowed us to minimize and spread out any adjustments we have to apply to the source animation in order to make the characters hit the target. In the end, these adjustments were almost invisible, making all these free-form attacks feel very grounded and natural.
I just mentioned the talk Jake Campbell gave last year. Those of you who saw it might remember the part about focus tracking as an alternative to a regular head IK setup. We made something similar, because we also had the problem that comes with using a traditional IK setup to make a character look at something: [CLICK] it replaces all joint rotations in the IK chain to make the head point precisely towards the target position. This is a problem for our characters, whose IK chains are sometimes very long, and who are also animated to be very alive. [CLICK] They breathe in idle and play gestures, and an IK setup would remove all animation in their head, which looks really unnatural.
So we replaced our IK system and instead use additive rotations to make the character look. [CLICK] This works by defining a reference orientation at the base of our rotation chain, and calculating the delta rotation between that and the orientation towards the target position. [CLICK] We then divide and add that rotation across a series of joints, on top of whatever is already animated at that time. [CLICK] Now, this approach is not precise at all; the character is not looking exactly towards the target position, but for us that is not the goal. [CLICK] We want the character to look in a general direction while playing the base idle or gesture animation. Remember, we don't always care about the optimal solution, because we want it to look natural. The end result looks like this.
We can optionally make the system more precise by adding a slight adjustment. Instead of always taking the reference orientation of the head bone, we can blend that orientation with the current runtime orientation of the bone. [CLICK] The effect is that as we use more of the current orientation, the rotation delta acts more as a counter rotation to the base animation, making it look more like a typical IK setup. [CLICK] We actually implemented this as a blend that is influenced by the distance of the target to the bone, [CLICK] so if the target is closer, we prefer to use the current bone orientation, making it look more precise.
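A yaw-only sketch of this blend, reduced to 2D for clarity (joint representation, the per-joint clamp, and the `precision` parameter are illustrative assumptions, not the shipped system):

```python
import math

def additive_look_at(joint_yaws, reference_yaw, target_yaw,
                     precision=0.0, max_per_joint=math.radians(30)):
    """Distribute the yaw delta toward the target over a chain of joints,
    added on top of the animated pose. With precision 0 the delta is taken
    from the fixed reference orientation, fully preserving the animation;
    with precision 1 it is taken from the current animated head yaw,
    countering the animation like a classic IK look-at."""
    current_head = joint_yaws[-1]
    base = reference_yaw + (current_head - reference_yaw) * precision
    delta = target_yaw - base
    # Split the delta evenly over the chain, clamped per joint.
    per_joint = max(-max_per_joint, min(max_per_joint, delta / len(joint_yaws)))
    return [yaw + per_joint for yaw in joint_yaws]
```

With `precision=1.0` the head lands exactly on the target (IK-like), while with `precision=0.0` the animated breathing and gestures ride on top of a merely approximate look direction, which is the natural-looking default described above.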
This dynamic blend between an optimal solution and a natural look is in line with the main theme of this talk: in order to make something look natural, you might have to let go a bit of that optimal, or precise, solution. I hope that is something you will keep in the back of your head when designing AI, because we as developers sometimes get caught up trying to create that perfect solution that works in every scenario. Sometimes that's needed! But sometimes, less than optimal is actually way better. Another big part of this talk centered on making the AI more informed about the character it controls: taking into account what it looks like, and what it can and cannot do. All of this makes for a greater sense of realism, and more enjoyment for the player when they engage with the characters that we create.
I obviously didn't work on this by myself, so I want to finish by acknowledging the rest of the AI team, as well as the company itself for allowing me to be here today. I also want to thank all of you for taking the time to come and listen! Please let me know if you have any questions.
Beyond Killzone: Creating New AI Systems for Horizon Zero Dawn
• More in depth in the upcoming Game AI Pro 4: "Obstacle Avoidance for Robots of Multiple Sizes and Forms in Horizon Zero Dawn" by Carles Ros Martinez
• Paul van Grinsven @ GDC17: "Player Traversal Mechanics in the Vast World of Horizon: Zero Dawn"
• Jake Campbell @ GDC17: "Bringing Hell to Life: AI and Full Body Animation in DOOM"