Because if you are doing it right, just building and testing will require a dozen or more computers.
As I got used to controlling a handful of computers, I started thinking about what more we could do.
If you don’t think more computers would help, you are doing it wrong / the same can’t be said about people.
Don’t build up capacity that’s enough for a few days a year but goes idle the rest of the time.
One of the reasons I needed so many computers was that I needed all the different environments / some combinations were rare and old, and keeping them pristine was hard.
Needing diversity in environments adds to the capacity-planning problem.
I’ve seen hypervisors used to run many virtual machines / but you don’t want to make everything too slow by over-subscribing.
Hey Kohsuke, my builds are failing. Can you take a look?
Hey Kohsuke, my builds are failing. Can you take a look?
So the lesson and best practice: isolate builds and tests / treat them like untrusted code
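To make the idea concrete, here is a minimal sketch in Java: each build runs as a separate process with a scrubbed environment and a throwaway workspace, so one build can’t poison the next. This illustrates the principle, not Jenkins’s actual implementation; the class name and hard-coded PATH are made up, and a fuller solution would also drop privileges or use OS-level sandboxing.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of "treat builds like untrusted code": every build gets its own
// process, a scrubbed environment, and a fresh workspace deleted afterwards.
public class IsolatedBuild {
    public static int run(String... buildCommand) throws IOException, InterruptedException {
        Path workspace = Files.createTempDirectory("build-");   // fresh, empty workspace
        try {
            ProcessBuilder pb = new ProcessBuilder(buildCommand);
            pb.directory(workspace.toFile());
            pb.environment().clear();                           // no inherited state
            pb.environment().put("PATH", "/usr/bin:/bin");      // only what the build needs
            pb.inheritIO();
            return pb.start().waitFor();
        } finally {
            deleteRecursively(workspace);                       // throw the workspace away
        }
    }

    private static void deleteRecursively(Path root) throws IOException {
        Files.walk(root)
             .sorted((a, b) -> b.compareTo(a))                  // children before parents
             .forEach(p -> p.toFile().delete());
    }
}
```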
Various techniques have been deployed successfully today
but as I found out the hard way, this isn’t enough to solve this problem
Hey Kohsuke, my builds are failing. Can you take a look?
Hey Kohsuke, my builds are failing. Can you take a look?
Turns out isolation in the time dimension is just as important / somewhat like a human body: if you live long enough, things tend to break down / beyond a certain point it becomes unsalvageable, as Windows users know all too well!
Turns out elasticity solves this problem too, by letting you simply throw instances away and create new ones in the same predictable state.
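As a sketch of what that elasticity looks like in code: provision a fresh slave from a pristine image for every build, and destroy it afterwards instead of repairing it. VmPool, Vm, and their methods here are hypothetical stand-ins for whatever cloud or hypervisor API you actually use.

```java
// Hypothetical stand-ins for a cloud/hypervisor API; any real provider
// (EC2, vSphere, libvirt, ...) would supply equivalents.
interface Vm { int execute(String... command) throws Exception; }
interface VmPool {
    Vm provisionFromImage(String imageName) throws Exception;   // boot from a pristine image
    void destroy(Vm vm);                                        // discard, never repair
}

class ElasticSlave {
    // Each build gets a brand-new slave in a predictable state,
    // and the slave is thrown away afterwards; no accumulated rot.
    static int runOnFreshSlave(VmPool pool, String... buildCommand) throws Exception {
        Vm slave = pool.provisionFromImage("pristine-build-image");
        try {
            return slave.execute(buildCommand);
        } finally {
            pool.destroy(slave);
        }
    }
}
```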
An episode from the scalability summit / everyone explained their monitoring system
Either this slide or more details on Jenkins.
If you are willing to invest in creating a great slave virtualization environment, you can.
HS: if somebody misses the CoW (copy-on-write) concept, they’ll be lost for the next two slides
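Since the next slides lean on it, here is one concrete form of copy-on-write: a qcow2 overlay that shares all blocks with a read-only base image and stores only the differences, so "cloning" a pristine slave image is nearly free and deleting the overlay resets the slave. The qemu-img command and its flags are real; the image names and wrapper class are invented for the example.

```java
import java.io.IOException;

// CoW cloning with a qcow2 overlay: the overlay shares every block with the
// read-only backing image; only writes made by this slave land in the overlay.
public class CowClone {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "qemu-img", "create",
                "-f", "qcow2",                // format of the new overlay
                "-b", "pristine-base.qcow2",  // read-only backing image, shared by all clones
                "-F", "qcow2",                // format of the backing image
                "clone-for-build-42.qcow2")   // writes go here; delete it to reset the slave
            .inheritIO()
            .start();
        System.exit(p.waitFor());
    }
}
```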
A brand-new job type / based on feedback from many at the scalability summit