Supercell – Scaling Mobile Games (GAM301) - AWS re:Invent 2018
© 2018, Amazon Web Services, Inc. or its affiliates. All rights reserved.
Supercell – Scaling Mobile Games
Heikki Verta
Services Team Lead
Supercell
G A M 3 0 1
Outline
Supercell team structure and culture
Scaling games
Scaling analytics
Supercell intro
Brawl Stars joining the roster soon
Challenge
5 games
Hundreds of millions of active users
4M peak concurrent users
6,000 EC2 instances
300 databases
Multiple regions
250-person company
20 people in a game team
3 server developers in a game team
Supercell team structure and culture
Supercell culture in a nutshell
Small teams
Bottom-up, not top-down
Independence and responsibility
Agile
Supercell culture implications for AWS architecture
Teams own their AWS account(s)
No separate ops-team
Teams choose their own tech stack
We use AWS managed services to reduce ops burden
Scaling games
High-level game stack
Traditional client-server architecture
Server is implemented in Java
Databases run MySQL
Scale out instead of up
Enabling scaling out of games
Microservice architecture
Sharding database layer
Microservice architecture
Game is split into services
Services run on different instances
“Microservice light”:
Single artefact
One repo
One language
One team
Scaling out
Amazon EC2 Auto Scaling handles scaling the instances
ZooKeeper assigns roles to Amazon Elastic Compute Cloud (Amazon EC2) instances
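The role-assignment step above can be sketched as follows. Supercell's servers are Java; this is a Python sketch of the common ZooKeeper election recipe (ephemeral sequential nodes, lowest sequence number wins a role), with illustrative names like `Instance` and `assign_roles` that are not from the talk.

```python
# Simulated ZooKeeper-style role assignment: each instance registers an
# ephemeral sequential node; roles go to the lowest sequence numbers.
from dataclasses import dataclass

@dataclass
class Instance:
    instance_id: str   # e.g. an EC2 instance id
    seq: int           # sequence number of its ephemeral znode

def assign_roles(instances, roles):
    """Give each role to the live instance with the lowest sequence
    number; everyone else stays a generic worker."""
    ordered = sorted(instances, key=lambda i: i.seq)
    return {role: inst.instance_id for role, inst in zip(roles, ordered)}

# If an instance dies, its ephemeral node disappears and re-running the
# election promotes the next-lowest survivor.
instances = [Instance("i-aaa", 3), Instance("i-bbb", 1), Instance("i-ccc", 2)]
print(assign_roles(instances, ["matchmaker", "chat"]))
# {'matchmaker': 'i-bbb', 'chat': 'i-ccc'}
```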
Scaling out database layer
Database layer is split into shards
To scale out, new shards are added manually
Shards don’t affect gameplay
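One way the sharding above can work is sketched below. The talk doesn't describe Supercell's actual account-to-shard mapping; this sketch assumes a shard is picked once at account creation and persisted in a directory (here an in-memory dict as a stand-in), so manually adding shards later never moves existing players.

```python
# Illustrative shard routing: hash the account id to pick a shard for a
# NEW account, then remember the choice so it stays stable forever.
import hashlib

def pick_shard_for_new_account(account_id: int, num_shards: int) -> int:
    """Deterministically pick a shard for a newly created account."""
    digest = hashlib.md5(str(account_id).encode()).hexdigest()
    return int(digest, 16) % num_shards

directory = {}  # stand-in for a persistent account -> shard mapping

def shard_for(account_id: int, num_shards: int) -> int:
    # Lookups never re-hash against the *current* shard count, so
    # growing from e.g. 4 to 6 shards only affects accounts created later.
    if account_id not in directory:
        directory[account_id] = pick_shard_for_new_account(account_id, num_shards)
    return directory[account_id]

print(shard_for(2474, 4))   # assigned once...
print(shard_for(2474, 6))   # ...and unchanged after shards are added
```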
Database failure recovery
MySQL master node failure breaks the game for that shard
Database failures are still handled manually
Brawl Stars uses Amazon Aurora to mitigate this
Scaling analytics
Data culture at Supercell
Analytics can’t make a hit game, but it can improve one
Full transparency with regard to data inside the company
Data scientists embedded in teams
Analytics in numbers
~5 TB of data per day
~15B atomic “rows” or events
Total size of data warehouse ~4PB
Sample event:
{
  "type": "level_changed",
  "account": 2474,
  "sessionId": "AAABYr437O0KBSX9AAACwA==",
  "levelType": "experience",
  "level": 1,
  "timestamp": 1523609760859,
  "game": "clash-royale"
}
Analytics timeline, 2012 → 2018
Game databases → Events → Streaming events
Vertica data warehouse → Amazon Simple Storage Service (Amazon S3) data warehouse
Analytics in the beginning
Data pipeline
Event pipeline, 2012
Event pipeline, 2013
Pros and cons of the event pipeline
+ Simple
+ More detail than just DB changes
- No real-time access
- Data loss if a local disk is lost or full
- Only way to consume data is from Amazon S3
Streaming pipeline, late 2013
Streaming pipeline
Benefits of streaming pipeline
Data is safe from local failures
Real-time access to data
Multiple ways to consume data
Supercell Amazon Kinesis setup
2 main streams
Client events
Server events
Data is partitioned randomly
We lose ordering
We gain uniform load between shards
Clients use the Kinesis Client Library (KCL) to consume streams
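The random-partitioning trade-off above can be simulated. Kinesis routes each record by the MD5 hash of its partition key into a shard's hash-key range; with equal ranges that reduces to a modulo, as in this Python sketch (the real streams have ~200 shards, not 8).

```python
# Random partition keys spread load evenly across shards,
# at the cost of per-player ordering.
import hashlib, uuid, collections

NUM_SHARDS = 8  # illustrative; the main streams use ~200

def shard_for_key(partition_key: str) -> int:
    # Kinesis maps the 128-bit MD5 of the key into a shard's
    # hash-key range; equal ranges make this a modulo.
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return h % NUM_SHARDS

counts = collections.Counter(
    shard_for_key(uuid.uuid4().hex)  # fresh random key per event
    for _ in range(100_000)
)
print(sorted(counts.values()))  # shard loads cluster near 12,500 each
```

A per-account partition key would preserve ordering for that account but risks hot shards when load is skewed.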
Amazon Kinesis dispatching
Challenge:
Main streams are quite large
~200 shards per stream
~100 MB/s of data
Streams contain multiple event types
Not all clients are interested in all event types
Solution:
Split main streams into application-specific streams
Application-specific streams contain only a subset of events
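The dispatching step above amounts to a filter per application stream. A minimal Python sketch, where the stream names and the subscription table are made up for illustration:

```python
# Forward each event from the main stream only to the application
# streams whose subscriptions include its type.
SUBSCRIPTIONS = {
    "economy-app-stream": {"purchase", "level_changed"},
    "session-app-stream": {"session_start", "session_end"},
}

def dispatch(event: dict) -> list[str]:
    """Return the application streams this event is forwarded to."""
    return [
        stream
        for stream, types in SUBSCRIPTIONS.items()
        if event["type"] in types
    ]

event = {"type": "level_changed", "account": 2474, "game": "clash-royale"}
print(dispatch(event))  # ['economy-app-stream']
```

A real dispatcher would consume the main stream with the KCL and put matching records onto each target stream; only the routing decision is shown here.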
Data warehouse
ETL and data warehouse in 2013
Challenges
Spiky load on the cluster
Querying slows down during ETL
Scaling up or down takes significant effort
Storage and compute are tied together
Even a large columnar database cluster has its limits
The goal
Limit the amount of data in Vertica
Separate compute from storage
Separate ETL processing from querying
Maintain single source of truth for data
Utilise the flexibility of the cloud to optimise resource usage
The plan
Amazon S3 as the single source of truth
Data stored as Parquet
Amazon EMR for ETL
Vertica only for results (accounts, aggregates, and KPIs)
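The talk doesn't show how the S3 data warehouse is laid out; a common convention that fits the plan above is Hive-style game/date/event-type partitioning, which lets Amazon EMR prune partitions when querying. A hypothetical key builder:

```python
# Hypothetical S3 key layout for events stored as Parquet.
# The path scheme is an assumption, not Supercell's actual layout.
from datetime import datetime, timezone

def warehouse_key(event: dict) -> str:
    # Event timestamps are epoch milliseconds (see the sample event).
    day = datetime.fromtimestamp(event["timestamp"] / 1000, tz=timezone.utc)
    return (
        f"events/game={event['game']}"
        f"/dt={day:%Y-%m-%d}"
        f"/type={event['type']}"
        f"/part-0000.parquet"
    )

event = {"type": "level_changed", "game": "clash-royale",
         "timestamp": 1523609760859}
print(warehouse_key(event))
# events/game=clash-royale/dt=2018-04-13/type=level_changed/part-0000.parquet
```

With this layout, a query over one game and one day touches only the matching prefixes instead of the whole ~4 PB warehouse.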
ETL and data warehouse now
Benefits of current approach
Separation of compute and storage
Amazon EMR scales out to very large data sets
Dedicated and transient clusters for ETL workloads
Familiar environment for data scientists
46. © 2018, Amazon Web Services, Inc. or its affiliates. All rights reserved.
Analytics timeline
2012 2018
Game databases
Events Streaming events
Vertica data warehouse Amazon S3 data warehouse
Lessons learned
Scaling and failure recovery
Scaling is determined by your architecture
Microservice architecture and DB sharding can get you far
Assume that things fail - and take that into account
Analytics
Separate compute and storage
Focus on the fundamentals
Think about how to define schema
No “data police”
Culture
The best thing about Supercell is the independent teams
The most challenging thing about Supercell is the independent teams
The benefits far outweigh the costs
Thank you!
Heikki Verta
heikki.verta@supercell.com