9. Healthcheck Script
[Flowchart, simplified: every 30 min → Is the C* ring healthy? → Are all instances healthy? Disappearing instance? → launch new instance. Can we fix automatically? → replace bad instance. First failure? → sleep for X minutes and retry. Is there an offline maintenance?]
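The decision flow above can be sketched in Python (matching the team's original Python/shell tooling). This is a minimal illustration: the function and field names (`run_healthcheck`, `disappeared`, `auto_fixable`) and the sleep value are assumptions, not the real implementation.

```python
# Hypothetical sketch of one pass of the periodic C* healthcheck.
SLEEP_MINUTES = 10  # the "X minutes" from the slide; the value is an assumption

def run_healthcheck(cluster, first_failures):
    """Run one healthcheck pass and return the action taken.

    `first_failures` is a set of instance ids that already failed once,
    carried across the 30-minute runs.
    """
    if cluster["offline_maintenance"]:
        return "skip: offline maintenance in progress"
    if cluster["ring_healthy"] and all(cluster["instances_healthy"].values()):
        return "healthy"
    for instance, ok in cluster["instances_healthy"].items():
        if ok:
            continue
        if cluster["disappeared"].get(instance):
            return f"launch new instance to replace {instance}"
        if cluster["auto_fixable"].get(instance):
            return f"replace bad instance {instance}"
        if instance not in first_failures:
            # First failure: could be transient, so wait and retry.
            first_failures.add(instance)
            return f"sleep {SLEEP_MINUTES} min and retry {instance}"
        return f"page on-call for {instance}"
    return "ring unhealthy: page on-call"
```

The "first failure" branch is what filters out transient issues (such as an instance rebooting) before any remediation is attempted.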
11. How Did The Healthcheck Script Handle It
[Same healthcheck flowchart as slide 9.]
14. On-call, Without Automation
2:00 AM: PagerDuty Alert
2:02 AM: Engineer wakes up
2:07 AM: Logs in and ACKs
2:10 AM: Studies the alert
2:15 AM: Checks runbook
2:20 AM: Runs diagnostics
2:30 AM: Fixes the problem
17. Failure / Alert Automation
Automation using Building Blocks
Integrations with Netflix Ecosystem
Platform as a Service
Event-driven Automation Platform
23. On-call, Without Automation
[Same timeline as slide 14.]
29. Impact
● Product
○ Reduced MTTR (Mean Time To Recover)
○ Safety: reduced risk of human errors
○ Capture operational knowledge as code
● People
○ Reduced pager fatigue for developers
○ Increased productivity
○ Improved morale
StackStorm Docs - http://docs.stackstorm.com/
StackStorm Slack Channel - https://stackstorm-community.slack.com/
Netflix OpenSource: https://netflix.github.io/
Check out our https://jobs.netflix.com page for current openings
We focus on providing a common automation platform for Netflix Teams.
Who runs a service on AWS?
Amazon EC2 is hosted in multiple locations world-wide. These locations are composed of regions and Availability Zones. Each region is a separate geographic area. Each region has multiple, isolated locations known as Availability Zones.
Why Re:Boot? Xen security issue. Reboot a lot of instances in all the Availability Zones.
Why is it a big deal? For stateless services, it’s not. But for Stateful services it is. C* for example.
Missing the 50M Party in L.A.
Denial: That can’t be true …
Anger: Yep, it’s confirmed
Bargaining: Tried convincing AWS to delay
Depression: They said no. Risk is too high.
Acceptance: What now …
Actually it’s easy to accept, because of the Simian Army.
Anyone heard about the simian army?
The Simian Army is a suite of tools for keeping your cloud operating in top form.
Janitor Monkey, Security Monkey, Coffee Monkey
Chaos Monkey, the first member, is a resiliency tool that helps ensure that your applications can tolerate random instance failures
Netflix EMBRACES chaos. We love it so much that we generate it. In PROD.
We run it on most of Netflix services, and even on C*
CDE has Chaos Monkey enabled on our C* clusters
Maximum 1 node per day, during business hours
Cassandra Team Health Check system detects the missing instance and replaces it
Going back to our stages of grief, this made acceptance easier.
We test for this
Our automation can take it.
What did our stack look like at the time?
Bunch of Python/Shell scripts
Jenkins as job scheduler (healthchecks, node replacements, repairs, upgrades, etc.)
On C* nodes: C* + Priam
Atlas is already a very powerful metrics and alerting tool, and our metric systems add non-C* related metrics (app metrics, for example) that help in correlation.
- Atlas is a very powerful time series metrics and alerting tool
- Atlas is Open Source
Simplified view of Healthcheck flow
Assisted Diagnostics
Auto-Remediation
Auto-Remediation supported:
Disappearing instances
Replace instance with bad I/O
How did the healthcheck behave during Re:boot?
Two behaviours:
Instance rebooting:
False positive (transient issue)
Instance rebooting, but failing AWS healthcheck and being terminated:
Auto-remediation
218 C* nodes rebooted
22 nodes didn’t start and were automatically terminated by AWS internal healthcheck
Our healthcheck identified the missing nodes and automatically remediated the issue
0 downtime
L.A. Party was awesome
Take the learnings from CDE, abstract it and see how to apply to other teams
Increase in scope - How can we maximize impact?
So our main focus was to apply the learnings:
False Positive, Assisted Diagnostics, Auto-Remediation
Help on-call engineers sleep at night (improve on-call automation)
Why is it such a big deal?
First, you need to understand the DevOps Model at Netflix
Everyone is on-call at Netflix; every team manages their own service.
This means a lot of on-call engineers doing on-call operations.
So what does it look like to be On-Call when there is no or limited automation?
On-call before Winston: long MTTR (Mean Time to Recover).
Operational knowledge in document - hard to maintain
Risk of human errors
Pager fatigue - Morale
High MTTR (Mean Time To Recover)
Impact on productivity
JS --
Hand it over to Sayli, who will cover how the new system helps alleviate these pain points ...
Sayli --
To reduce the pain points that we were facing, we started thinking of a new approach.
A quality of a good engineer: learn not only from failures but from successes as well, like our AWS reboot success story.
We survived the reboot using a system which automatically diagnosed and fixed a known failure scenario.
The problem was that it wasn’t designed to extend to other failure scenarios or to be used by other teams.
Proactive automation.
The idea of reactive automated troubleshooting and remediation is highly useful, especially in operations.
With this expanded charter for our team, we focused on what would be the key features of a system that would solve these problems for us, and the answer was event-driven automation.
2. Instead of autonomous systems, the ability to share building blocks within a single service or even across multiple services
3. e.g. our sophisticated telemetry system Atlas, and CI platforms (Spinnaker, Jenkins)
4. Last but not least, service owners can focus on the automation and not the platform: make it self-serve
Problem space not unique to Netflix
We started working on an initial design of our own: an in-house, internal POC.
Looked at Facebook (FBAR) / LinkedIn (Nurse) / Dropbox (Naoru). This helped us see how they approached the problem.
Also came across this meetup group ... 400 auto-remediators.
Now that we knew WHAT our requirements were, we worked on figuring out HOW.
Evaluated building a platform from scratch, adopting an existing solution, or mixing and matching: using some existing components and building some.
After the POC: StackStorm.
StackStorm is a platform for integration and automation across services and tools.
The use cases that StackStorm was targeting, facilitated troubleshooting and auto-remediation, fit right into what we were looking for. Open source. Quality of the code. Great to collaborate and code with.
Great discussions with respect to their use cases, approach, and adoption challenges. Helped us validate the benefits.
Do our own or adopt existing solution?
We started with our own POC, then we decided to go with StackStorm, an event-driven automation platform
Facilitated Troubleshooting/Event handling
Automated remediation
StackStorm gave us an event-driven automation platform and building blocks ... what about integration with the Netflix ecosystem?
Pulp fiction fans in the audience?
On-call before Winston: long MTTR (Mean Time to Recover).
And now with Winston.
Winston gets the alert. Using its rule engine, it decides what the right action is. The action then analyses the issue and, if it is identified as a false positive, there is no need to page the on-call.
Another use case is that Winston identifies that it can fix the issue. When it does, again, no need to page the on-call.
The last use case, the one we want you to focus on, is Assisted Diagnostics. While the on-call is being paged, Winston runs a series of pre-defined diagnostics and prepares a report, so that when the on-call logs in to the system, they have comprehensive information: the Discovery status, a list of recent exceptions or errors, and any other relevant context to help them make a decision faster.
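The three outcomes just described (false positive, auto-remediation, assisted diagnostics) can be sketched as a single dispatch function. This is a hypothetical illustration; in the real system a StackStorm rule maps the alert to an action, and the helper names here are assumptions.

```python
# Hypothetical sketch of Winston's three alert-handling outcomes.
def handle_alert(alert, diagnose, remediate):
    """diagnose(alert) -> dict of findings; remediate(alert) -> True if fixed."""
    findings = diagnose(alert)
    if findings.get("false_positive"):
        # Expected condition: resolve quietly, no page.
        return {"paged": False, "resolution": "false positive, resolved"}
    if remediate(alert):
        # Known failure fixed automatically: still no page.
        return {"paged": False, "resolution": "auto-remediated"}
    # Could not fix automatically: page the on-call, but attach the
    # diagnostics report so they start with full context.
    return {"paged": True, "resolution": "assisted diagnostics", "report": findings}
```

Note that paging is the fallback, not the default: the on-call is only woken up when the first two outcomes don't apply.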
Let’s look at some of the real-life scenarios
Anybody who doesn’t know what a runbook is?
A 'runbook' is a routine compilation of procedures and steps that a sysadmin or a person on-call goes through to diagnose and remediate a failure. Generally runbooks have three broadly classified steps:
Real-life scenarios
Remove False Positive - expected scenario, can safely be ignored
Diagnostics - collect troubleshooting information
Remediation - fix the problem
Now let’s see some examples of how winston can assist in these steps ...
First example is for False Positive: Data Pipeline team, broker offline. But the instance was terminated by AWS, so it is expected that the broker is offline. Issue resolved. No need to page the on-call for that.
Another Assisted Diagnostics example for Cassandra: Disk Space issue
Gives context around the size of the actual C* data
Checks if there is any Repair or Compaction running which temporarily increases disk usage
Try some auto-remediation: Clean-up old snapshots
Still above disk usage threshold, Paging On-call
In this case, the on-call doesn’t have to try to clean up snapshots, since that was already done by Winston, and can now focus on other unknown root causes. Faster TTR.
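The disk-space runbook above can be sketched with the probes and the cleanup step injected as callables, which keeps the decision logic testable. All names and the threshold value are hypothetical, not the real Winston workflow.

```python
# Hypothetical sketch of the C* disk-space diagnostics/remediation flow.
DISK_THRESHOLD = 0.85  # fraction of disk used; the value is an assumption

def handle_disk_alert(node, disk_usage, repair_or_compaction_running,
                      cleanup_old_snapshots):
    """Return whether to page, plus the diagnostics report for the on-call."""
    report = {
        # Context around actual data size vs. transient usage.
        "usage": f"{disk_usage(node):.0%}",
        # Repairs/compactions temporarily inflate disk usage.
        "repair_or_compaction": repair_or_compaction_running(node),
    }
    if disk_usage(node) > DISK_THRESHOLD:
        cleanup_old_snapshots(node)          # auto-remediation attempt
        report["snapshots_cleaned"] = True
    if disk_usage(node) > DISK_THRESHOLD:
        # Still above threshold after cleanup: page on-call with the report.
        return {"paged": True, "report": report}
    return {"paged": False, "report": report}
```

Even when the page still fires, the attached report tells the on-call that snapshot cleanup was already tried.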
Last example: Auto-Remediation. For Data Pipeline team: Broker is offline again, like in the first example, but this time, the EC2 instance is still running. So it’s not a false positive.
Check if there is any disk failure
If not, tries to restart the Kafka broker.
Succeeded. Broker is back online. Resolved. Not paging on-call.
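The broker-offline flow above, covering both the false-positive case from the first example and the restart path, can be sketched the same way. The helper callables are hypothetical stand-ins; the real Winston workflow ran equivalent checks as StackStorm actions.

```python
# Hypothetical sketch of the Kafka broker-offline runbook.
def handle_broker_offline(instance_running, disk_failed, restart_broker,
                          broker_online):
    """Each argument is a zero-arg callable probing or acting on the node."""
    if not instance_running():
        # Instance terminated by AWS: the broker being offline is expected.
        return "resolved: false positive (instance terminated)"
    if disk_failed():
        # Hardware problem; automation stops and hands over to a human.
        return "page on-call: disk failure"
    restart_broker()                         # auto-remediation attempt
    if broker_online():
        return "resolved: broker restarted"
    return "page on-call: restart did not help"
```

As in the slides: terminated instance resolves as a false positive, a healthy-disk node gets a restart attempt, and only the remaining cases page the on-call.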
Add resources for StackStorm, the Slack channel, and the happy hour.
The StackStorm folks are here.