Beyond Continuous Delivery - Jenkins User Conference - 23 Oct 2014
1. Jenkins User Conference San Francisco #jenkinsconf
Beyond Continuous
Delivery
Chris Hilton
Gap, Inc.
www.gapinc.com
October 23, 2014
#jenkinsconf
2.
Continuous Delivery
• Frequent, automated releases
• Every check-in is a potential
release
• Every change triggers feedback
• Feedback must be received as
soon as possible
• Automate almost everything
• Build quality in
3.
Assumptions
• Trunk-based development
• Continuous integration/delivery/deployment
• Cloud infrastructure
– Cheap
– Unlimited
4.
Modular Development and
Dependency Management
App WAR
A JAR
B JAR
Common JAR
5.
6.
Dependency Management
and Modular Development
App WAR
A JAR
B JAR
Common JAR
7.
Env Scripts App WAR
Base VM
Isolation Tests
IT Scripts
A JAR
B JAR
Common JAR
Infrastructure as Code
Application Integration Infrastructure
11.
Integration Tests
Isolation Tests Isolation Tests
Other App Env Scripts
Env Scripts App WAR
IT Scripts
Base VM
A JAR
B JAR
Common JAR
Integrated Pipelines
Application Integration Infrastructure
13.
Production
Staging
Integration Tests
Isolation Tests Isolation Tests
Env Scripts App WAR
IT Scripts
B JAR
Application Integration Infrastructure
Other App Env Scripts
Base VM
A JAR
Common JAR
Pipeline Segments
14.
Cloneable Pipelines
Production
Staging
Integration Tests
Isolation Tests Isolation Tests
Env Scripts App WAR
B JAR
Application Integration Infrastructure
Other App Env Scripts
IT Scripts
Base VM
3.6
A JAR
2.3
Common JAR
4.3
1.4
2.3 : 2.0+ 1.4 : 1.0+
4.3 : 4.0+ 4.3 : 4.0+
Staging
Integration Tests
Isolation Tests
App WAR
A JAR B JAR
Common JAR
15.
Personal Pipelines
Production
Staging
Integration Tests
Isolation Tests Isolation Tests
Other App Env Scripts
Env Scripts App WAR
IT Scripts
Base VM
3.6
A JAR
2.3
B JAR
Common JAR
4.3
1.4
2.3 : 2.0+ 1.4 : 1.0+
4.3 : 4.0+ 4.3 : 4.0+
Staging
Integration Tests
Isolation Tests
App WAR
A JAR B JAR
Common JAR
Application Integration Infrastructure
16.
Pre-Flight Pipelines
Production
Staging
Integration Tests
Isolation Tests Isolation Tests
Other App Env Scripts
Env Scripts App WAR
IT Scripts
Base VM
3.6
A JAR
2.3
B JAR
Common JAR
4.3
1.4
2.3 : 2.0+ 1.4 : 1.0+
4.3 : 4.0+ 4.3 : 4.0+
Staging
Integration Tests
Isolation Tests
App WAR
A JAR B JAR
Common JAR
Application Integration Infrastructure
21.
Quantum Pipelines
n
n + 1
n + 2 n + 1 + 2
n + 2
22.
Quantum Pipelines
n
n + 1
n + 2 n + 1 + 2
n + 2 -
23.
Quantum Pipelines
n
n + 1
n + 2 n + 1 + 2
n + 2
n + 3 n + 1 + 2 + 3
n + 2 + 3
n + 1 + 3
n + 3
-
-
-
24.
Evergreen Trunks
n
n + 1
n + 2 n + 1 + 2
n + 2
n + 3 n + 1 + 2 + 3
n + 2 + 3
n + 1 + 3
n + 3
-
-
-
-
25.
Extreme Integration
trunk
extreme
workspace
26.
Extreme Integration
trunk
extreme
workspace
27.
Extreme Integration
trunk
extreme
workspace
28.
Thank You To Our Sponsors
Platinum Gold
Silver Corporate
29.
Beyond Continuous
Delivery
Chris Hilton
Gap, Inc.
www.gapinc.com
October 23, 2014
#jenkinsconf
Editor's Notes
Hello, everyone. My name is Chris Hilton, I’m a Continuous Delivery architect at Gap, and I am obsessed with Continuous Delivery, possibly to the point of madness. I’ll let you be the judge. I’ve been a build and release engineer since 2000 at various enterprise software companies and consulting gigs and I have spent lots and lots of time thinking about how to deliver quality software as fast as possible. All of that experience has led me to explore what I think are some pretty nifty, others might say extreme, ideas that build off of the current state of the art in Continuous Delivery and I want to share some of that pathology, I mean those ideas with you today. I’ll be starting with ideas that I’ve actually implemented and worked on over the last few years and then I will segue into some of my more aspirational/crazier ideas for the future.
To start off, I’m guessing you all have some knowledge of the continuous delivery concept and its ideas as it exists today since you’ve chosen to attend a presentation entitled Beyond Continuous Delivery. As a longtime build engineer myself, I’m more focused on the build and release aspects of continuous delivery than, say, the development or business process aspects. So my job generally focuses on building pipelines similar to that shown for different pieces of software, in my case mostly rather large and/or complicated enterprise software projects.
<describe pipeline>
And when implementing a continuous delivery system, these are some of the important points of continuous delivery I try to address with a pipeline.
<read points>
The basic idea being to make sure that your software is always production-ready and getting feedback about changes to the system, whether it’s positive or negative, as fast as possible. This process is pretty simple for small, independently releasable applications, but it can get much more complicated when dealing with large, complex, inter-related systems, as we will soon see.
These are concepts I assume everyone here is pretty familiar with and serve as a foundation for later discussion.
Trunk-based development is a generally accepted practice for continuous delivery and simplifies our discussion of pipelines. You can do all of these ideas with some form of branch-based development, but it adds another layer of complexity that I’m going to avoid for this discussion and I don’t recommend it anyway. Trunk-based development FTW.
I also assume you know the basics of continuous integration, delivery, and deployment. For this discussion, that means every change starts a feedback process that encompasses the entire pipeline.
Also for the purposes of this discussion, I’ll be assuming that the pipeline uses cloud resources that are cheap and unlimited. Obviously, that isn’t entirely true and I’m going to be pushing this particular assumption to some ridiculous limits, you’ll soon see, but I’m going to go with it for the sake of this discussion.
So this first idea shouldn’t be particularly controversial…
It’s simply using modular development to break up larger builds into smaller build units and then using dependency management to define a pipeline based on the dependency relationships between those modules. Here is an example based on a Java web application where the build has been broken up into four different modules. The four modules have dependencies between them, so if a change is made to one module, a new version is built and then the modules that depend on it are rebuilt and tested with that new version of the module. So in this example, if a check-in is made to the A JAR module, it gets rebuilt and then the App WAR module is rebuilt and tested with the new version of A JAR. If the Common JAR gets rebuilt, all of the modules get rebuilt and tested to make sure the application is still working.
This example is pretty simple. You might be thinking, why bother? A more realistic example can look like this…
This is one of the moderately complex applications that I’ve worked with. There are more complicated ones. To keep it simple, though, I’ll stick to talking about...
…this simple example instead.
So what can you do with dependency management and modular applications that you can’t do with a monolithic build?
Shared modules: another application could depend on B JAR
Chained builds: each module kicks off builds of downstream modules
Minimal builds (impact zone): build only what’s been updated
Parallel building: instead of a monolithic sequential build, build modules in parallel
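The "impact zone" idea (rebuild only a changed module and its transitive dependents) can be sketched in Python. The module names come from the talk's simple example; the exact shape of the dependency graph is my assumption for illustration:

```python
from collections import defaultdict

# Module dependency graph from the simple example in the talk; which
# modules depend directly on which is an assumption for illustration.
DEPENDS_ON = {
    "App WAR": ["A JAR", "B JAR"],
    "A JAR": ["Common JAR"],
    "B JAR": ["Common JAR"],
    "Common JAR": [],
}

def impact_zone(changed):
    """Modules to rebuild when `changed` is updated: the module itself
    plus every transitive dependent."""
    dependents = defaultdict(list)  # invert graph: module -> its dependents
    for mod, deps in DEPENDS_ON.items():
        for dep in deps:
            dependents[dep].append(mod)
    to_build, stack = set(), [changed]
    while stack:
        mod = stack.pop()
        if mod not in to_build:
            to_build.add(mod)
            stack.extend(dependents[mod])
    return to_build
```

A change to Common JAR pulls in every module, while a change to App WAR rebuilds only itself, matching the minimal-build behavior described above.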
Great for optimizing software builds, but software doesn’t run by itself. It runs on hardware. So what about when the hardware changes?
Now with infrastructure as code, we can add infrastructure modules and include those in our dependency tree, ideally with tests just like the software modules. These infrastructure modules can also build and test independently when changes are made to them.
So here we have a base VM module that defines the basic virtual machine our application will be running on; on top of that we have a dependent module that adds some basic IT setup (monitoring, security, etc.), and then another module adds some environment scripts. I’m keeping the specific infrastructure-as-code tool vague here in order to avoid any holy wars. I don’t want to start any riots. So if a change is made to one of these modules, the change is tested and then rippled up through the pipeline to make sure the system is still production-ready. In this case, the isolation tests module here deploys the latest code onto the specified infrastructure and runs tests on the isolated application like functional tests, acceptance tests, etc.
Before the dependencies get too complicated, I want to talk about semi-fluid dependencies and how they can keep every module up-to-date with respect to its dependencies but also make sure they are always in a working state.
Semi-fluid dependencies are a combination of static and fluid dependencies.
Static dependencies are where you depend on a specific version of a module. This is signified on the diagram by the explicit numbering scheme to the left of the colon, such as App WAR depending on version 2.1 of A JAR. This method is very stable because you are always dealing with the same version of your dependency, but it’s hard to keep up-to-date manually, especially if the dependencies are always changing.
In contrast, fluid dependencies are where you depend on a version range of a module, such as always depending on the latest version which is what you generally do for internal dependencies in a pipeline. This is signified on the diagram by the non-specific numbering scheme to the right of the colon, such as App WAR depending on the 2.0 or later version of A JAR. This method keeps the dependencies up-to-date easily, but it can also break things easily when external dependency changes often break your code.
At one large company I worked at, fluid dependencies were causing development outages for the web team several times a week. There were about 100 libraries that were being depended on from various other projects in the organization that needed to be kept up-to-date, but this also caused a couple of hours of downtime every week. With a team size of around 150 developers, that was $25-50K in lost productivity every week.
So with semi-fluid dependencies, I try to get the best of both worlds. Every dependency has both a static and (possibly) fluid dependency. Developers use the static dependencies when working locally and an automated system uses the fluid dependencies, runs tests, and updates the static dependency to the new last known good version.
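The automated half of the semi-fluid scheme can be sketched as follows. Versions are modeled as (major, minor) tuples and `tests_pass` stands in for actually building and testing the module with the candidate version; this is an illustration of the idea, not the speaker's actual tooling:

```python
def parse_min(spec):
    """'2.0+' -> the minimum (major, minor) the fluid range accepts."""
    major, minor = spec.rstrip("+").split(".")
    return (int(major), int(minor))

def semi_fluid_update(pinned, fluid_spec, published, tests_pass):
    """Take the newest published version the fluid range allows, build
    and test with it, and only advance the static pin if tests pass.
    Developers keep building against the pin (the last known good
    version) either way."""
    candidates = [v for v in published if v >= parse_min(fluid_spec)]
    if not candidates:
        return pinned
    newest = max(candidates)
    if newest == pinned or not tests_pass(newest):
        return pinned
    return newest
```

This mirrors the walkthrough that follows: the module whose build passes with the new Common JAR advances its pin, while the module whose build fails stays on the last known good version.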
Looking at our diagram, when the new version 4.2 of Common JAR is published…
The automated system builds A JAR with the new version of Common JAR and it passes, so the static dependency is updated to 4.2. If you’re a developer on A JAR and synchronize from the repository, you will now build with the 4.2 version of Common JAR and you are good to continue developing.
At the same time, the build for B JAR also runs, but it doesn’t pass, so the static dependency remains at 4.1 for Common JAR. If you’re a developer on B JAR, you can continue to work with the old version of Common JAR without a problem.
Depending on your build system, App WAR may also build to attempt to incorporate the new version of A JAR, but it will fail with a dependency version conflict for Common JAR, so the old dependency versions remain. Developers on App WAR can also continue to work without problem using the old, last known good versions of the dependencies.
At this point, all of the developers can still build and work on every module because they each have static dependencies on known good versions, so the damage to the pipeline has been contained and developers can stay productive. Most times, like 95% of the time at this job, new dependencies were updated automatically, but the rest have to be resolved manually. In this case, any further work to Common JAR or A JAR won’t be incorporated into the App WAR until this problem is resolved. This could be resolved two ways. Either update B JAR to work with the new version of Common JAR or publish a new version of Common JAR that works with B JAR.
In this case, we published a new 4.3 version of Common JAR with a fix. The A JAR build passes and updates its static dependency, B JAR does the same, and App WAR updates the static version for both of its JAR dependencies.
So I implemented something like this at that previous job I mentioned and it vastly cut down on their build problems from external dependencies, saving them a couple of million dollars in lost productivity per year.
Now there’s still some potential confusion on what’s happening in the pipeline and some back and forth that has to go on to fix errors in this situation, so I’ll talk more about eliminating the introduction of errors later. For now, let’s build on the pipeline example I’ve been laying out…
And talk about integrating our application with another application. Here I’ve added another application and have an integration tests module that deploys the two applications and tests them together. Here you can see that our other application is a bit more complex and can take advantage of more of those modular development optimizations we talked about. And when any of these modules change, we can do an impact-based build that rebuilds or reruns just the dependent modules above it. This will be important later. But for now I want to focus in on integration testing.
In a lot of pipelines, you might see applications that go through an integration testing phase where the application is tested with some version of another application, but it’s not necessarily the same version of that application that this application will run against in production. This can lead to errors where something works in the pipeline, but then doesn’t work in production because of different application versions. To combat that, I use an idea called…
Fusion testing. The basic idea is that once a specific version of one application is integration tested with a specific version of another application, those particular versions are now tied together or fused and will be promoted together through the rest of the pipeline. Here, the 3.4 version of the application on the left is tested with the 4.7 version of the application on the right in the integration tests module. Then the two versions of those applications are promoted together to the staging segment, where they are further tested together, and finally to the production segment to be deployed together to production. Since both of these applications are always production-ready, it’s no problem to release both of them into production together when either application is ready to be released.
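The fusion idea reduces to a small amount of bookkeeping: record which versions passed integration together, then promote them only as a pair. A minimal sketch (the dict shape and stage names are my assumptions):

```python
def fuse(left_version, right_version, integration_passed):
    """Tie two application versions together once they pass integration
    testing with each other; from then on they promote as a unit."""
    if not integration_passed:
        return None
    return {"left": left_version, "right": right_version,
            "stage": "integration"}

def promote(fused_pair, next_stage):
    """Move the fused pair forward; neither version advances alone."""
    return dict(fused_pair, stage=next_stage)
```

With the versions from the talk, 3.4 and 4.7 are fused at integration and then move through staging to production together.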
Now we have a dependency tree that extends all the way to production and can be triggered by any change anywhere in the system. If any module that influences our production system changes, that module is rebuilt and tested, along with all of its dependent modules, until that change finally ends up in production. Ideally we are releasing frequently, perhaps with as few as a single change at a time, so even though the dependency graph may look big and complicated, only small amounts of change should be finding their way into production at any particular time.
I’ll just mention at this point that I tend to call these modules “segments”, as in pipeline segments. Modules tend to be a more software-specific term, so I refer to segments as any step on the path to production, be it software, infrastructure, or integration-related.
So now I’m going to add cloud infrastructure into the mix and propose that now that we have all of this infrastructure as code defined in the pipeline, we can do something radical like…
Clone parts, or even all, of the pipeline onto an identical set of virtual hardware, but isolated from the main pipeline. With all of the pipeline infrastructure well-defined in source code, we can create a copy of part or all of the release pipeline from the point of view of any particular segment. Dependencies on segments outside of this cloned pipeline will just use the last known good version. Here, there’s a copy of the pipeline from the Common JAR segment all the way to the Staging segment. Using this idea, we could create something like…
Personal developer pipelines. With this cloned pipeline, we could run a separate but identical pipeline that tests a developer’s changes on identical hardware as the main pipeline. Has anyone here ever heard a developer say “it works on my machine”? Yeah, well, now talk to me when it works on your pipeline with identical hardware and setup as the release pipeline.
Another benefit is that the code will be better tested, almost certainly better than the developer could reasonably do locally. Integration tests are notoriously unreliable or unlikely to be run by a developer locally. Unlike before, when Common JAR breaks the build for B JAR, this time it happens in a separate pipeline instance and the developer knows there is a problem before he even commits his code.
You can even use a cloned pipeline as a requirement for committing code using…
A pre-flight pipeline. With most pre-flight build concepts, you run just the individual module build and tests before the change is accepted for commit. Using a cloneable pipeline, we can actually reject changes that would cause problems much deeper into the pipeline. This should greatly cut down on the number of errors introduced into the pipeline.
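The commit gate described here can be sketched as a simple ordered walk through the cloned pipeline. Each `check` stands in for building and testing one segment; names and shapes are illustrative assumptions:

```python
def preflight(change, segments):
    """Run a proposed change through a cloned pipeline before commit.
    `segments` is an ordered list of (name, check) pairs, where each
    check stands in for building/testing one pipeline segment. The
    change may be committed only if every segment is green."""
    for name, check in segments:
        if not check(change):
            return (False, name)  # reject; report the failing segment
    return (True, None)
```

The point of the cloneable-pipeline version is that `segments` can extend well past unit tests, so a change that would only fail deep in integration is rejected before it ever lands on trunk.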
This concept is something that has been in demand at 4 of the last 5 places I’ve worked, but I’m just now finally getting close to being able to implement it. The rest of this presentation gets more and more speculative about what I would like to do to make the release process faster and better. And speaking of speculative…
One of the problems we have with pipelines for complex systems is that it can take quite some time to do many phases of testing before determining that a set of artifacts is production-ready. For the portion of the pipeline shown, no new artifacts are actually being created after the App WAR. Each of the following segments is just doing successive stages of testing. Instead of running the following three test segments sequentially, we can kick each of them off in parallel on their own set of cloud resources with the caveat that each segment only passes if its tests pass AND all of the previous logical segments pass. So each of the dependent segments gets a jump start on its testing without having to wait for the previous segments to finish first. If a previous segment does fail, the dependent logical segments can be aborted and thrown away since the artifacts they are testing are not actually eligible to be promoted that far. In the hopefully common case of tests passing, you’ve saved a lot of time by running the segments in parallel. In the uncommon case where tests fail, all you’ve done is waste some cloud resources.
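The pass-only-if-all-previous-segments-pass rule can be sketched with a thread pool standing in for the parallel cloud resources (the callables standing in for each segment's test run are an assumption):

```python
from concurrent.futures import ThreadPoolExecutor

def run_segments_in_parallel(segment_tests):
    """Start every test segment at once instead of waiting for the
    previous one to finish. Each segment's final verdict is its own
    result AND-ed with all earlier segments' verdicts, so artifacts
    never promote past a failure even though tests ran concurrently."""
    with ThreadPoolExecutor() as pool:
        own_results = list(pool.map(lambda test: test(), segment_tests))
    verdicts, green_so_far = [], True
    for own in own_results:
        green_so_far = green_so_far and own
        verdicts.append(green_so_far)
    return verdicts
```

Note that a failure in an early segment vetoes every later segment's verdict even if that later segment's own tests passed, which is exactly the "abort and throw away" behavior described above.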
So we’re able to give feedback sooner with this technique, but I’m still a little obsessed with preventing errors from getting in the pipeline in the first place.
Going back to our pre-flight pipelines, we’re getting closer to a pipeline that can never be “red”.
Not a great diagram, but the black line is trunk and the brown and blue lines are pre-flight pipelines for different developers. The first brown pipeline runs and successfully commits. The blue pipeline runs with a later change and also successfully commits. While that pipeline was running, another brown pipeline runs and fails, so that change is not committed. Another blue pipeline also runs and commits. And the brown pipeline is corrected and eventually commits.
There’s still some opportunity for redness here as testing is running while new commits are being made (a common problem for developers). We need a way for ordered entry of changes.
We’re getting closer to a pipeline that can never be “red”. I don’t really like the name “pristine trunks”, but I haven’t come up with better wording yet.
If we go back to our pre-flight pipelines, we’re getting better at catching errors before they can turn our pipeline “red”, but there are certain situations it can’t catch.
In this diagram, Devops A and B represent different engineers committing code to the system via their pre-flight pipelines. The two pre-flight pipelines each ran successfully on their own, but when their changes are merged in the trunk, some conflict of their operation causes the main pipeline to fail.
And there are even more complicated situations that can arise. Multi-commit interactions can be particularly vexing problems to sort out.
So there’s still some opportunity for redness here as testing is running while new commits are being made. This is a common problem for developers even now as they do the check-in dance of trying to test their changes against the latest code from trunk, and that can be further exacerbated by the possibly longer testing cycle of a pre-flight pipeline. We need a way for changes to be entered and tested in an orderly fashion.
Enter a concept I call quantum pipelines.
Here a trunk pipeline is already at change n and we know it’s green. Then change n+1 comes in and a pre-flight pipeline kicks off with that change.
What happens when the next change comes in? We could wait for this pipeline to complete and accept or reject change n+1 before we start testing change n+2, but this won’t scale with many changes coming in and/or a long-running pipeline.
With our cloneable pipelines concept, we can do better. Instead of waiting for the first pipeline to complete, we kick off two more sub-pipelines: one with and one without change n+1, so that we can start testing change n+2 immediately. I call this a quantum pipeline because we can’t be sure which sub-pipeline is going to represent the state of the n+2 change until further observation is made of the n+1 pipeline.
As it happens, the n+1 pipeline completes successfully. When the n+1 pipeline completes, we can “collapse the wave” of the quantum pipeline and abort the unneeded sub-pipeline that assumed the n+1 pipeline would fail. Hopefully I’m not the only physics nerd in attendance. The wave could also be said to collapse if both of the sub-pipelines complete with the same state, either success or failure.
Looking at a third change, we can see how the quantum pipelines could potentially get more and more complex, but most of the pipelines will be aborted early depending on results from earlier pipelines.
So when n+1 completes successfully, half of the running sub-pipelines are aborted, the ones that were assuming an n+1 failure. When the n+2 pipeline fails, the associated n+3 sub-pipeline is also aborted, the one that assumed an n+2 success. Finally, the n+3 pipeline, which includes the n+1 change but not the n+2 change, completes and successfully commits.
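The spawn-and-collapse bookkeeping can be sketched by modeling each sub-pipeline as the set of undecided changes it includes; this is a toy model of the branching, not a real scheduler:

```python
from itertools import combinations

def spawn_subpipelines(pending_changes):
    """One cloned sub-pipeline per subset of undecided changes, so a
    new change is tested immediately both with and without each
    earlier, still-unverified change."""
    subs = []
    for r in range(len(pending_changes) + 1):
        for combo in combinations(pending_changes, r):
            subs.append(frozenset(combo))
    return subs

def collapse(subpipelines, change, passed):
    """'Collapse the wave' when a change's verdict is observed: abort
    every sub-pipeline built on the opposite assumption."""
    if passed:
        return [s for s in subpipelines if change in s]
    return [s for s in subpipelines if change not in s]
```

Replaying the walkthrough above (n+1 passes, n+2 fails, n+3 passes) collapses eight in-flight sub-pipelines down to the single one containing n+1 and n+3, which is what commits.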
We now have only changes that are successful within the entire pipeline being committed to trunk. It should be impossible for the trunk pipeline to ever be “red”, at least within the length of the cloned pipeline. I call this an evergreen trunk, because I can’t resist the pun. What’s more, these successful pipelines are exact copies of the trunk pipeline, so the artifacts from these builds can be used directly without being rebuilt in a trunk pipeline.
With cloneable pipelines, I can recreate the pipelines of dependent projects and prevent breaks to downstream projects.
Say I’m log4j and want to make sure my changes don’t break tomcat.
Log4j could recreate the tomcat pipeline as part of its own pipeline and reject changes based on results that include the tomcat pipeline.
Not a lot to say here, but I think there’s opportunity to communicate better across these types of boundaries with some of these concepts.
With so many pipelines running, won’t this take a lot of time? One way to cut down on the build times would be to have a central build service and use the “power of the swarm”.
The bottom build of common.jar refers to the build service and builds normally, returning the built jar. The middle build runs the same build of common.jar, but this time the build service returns the previously built jar as the code and dependencies are exactly the same. The top build runs with a change of one new file and the build service compiles the one file, adds it to the previously built jar to make a new jar artifact, then returns that jar to the running build.
Basically, all those pipelines are doing very similar work and could benefit from some level of sharing their work. A build service could do detailed analysis of individual compilable units and their dependencies and optimize the required work. And the build service infrastructure should be well-defined and reproducible for developers to support ‘airplane mode’.
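The artifact-sharing behavior described above can be sketched as a build service keyed on a fingerprint of the sources and dependency versions; the fingerprinting scheme and "artifact" placeholder are my assumptions:

```python
import hashlib

class BuildService:
    """Central build service sketch: fingerprint the sources and
    dependency versions of each requested build; if an identical build
    was already done anywhere in the swarm, hand back the stored
    artifact instead of rebuilding."""
    def __init__(self):
        self._cache = {}
        self.builds_performed = 0

    def build(self, sources, dependencies):
        # Identical code + identical dependencies -> identical fingerprint.
        fingerprint = hashlib.sha256(
            repr((sorted(sources.items()), sorted(dependencies)))
            .encode()).hexdigest()
        if fingerprint not in self._cache:
            self.builds_performed += 1  # only here is real work done
            self._cache[fingerprint] = "artifact-" + fingerprint[:8]
        return self._cache[fingerprint]
```

A repeated request with unchanged inputs returns the cached artifact without rebuilding, while changing even one file produces a new fingerprint and a fresh build, like the one-file incremental case in the example.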
Now this is where I get to the really far-out stuff. Hopefully I’ll blow some minds by the end here. If I haven’t already.
What I call extreme integration is an extension of the idea of extreme continuous integration to pipelines. If you aren’t familiar with extreme continuous integration, it’s a concept where a background process continuously runs your unit tests as you type, so you get instant feedback on the state of your code without even needing to explicitly run the tests.
Similarly, extreme integration applies the same automatic feedback on a pipeline scale. So the first step of this idea would be to continuously run tests with your changes in a personal pipeline, represented here by the blue workspace pipeline. When the pipeline is green, its changes are migrated to the extreme integration pipeline as potential candidates for merging to trunk. So now you’ve got automatic testing of changes in your own pipeline.
The next step is to have the extreme integration pipeline also automatically merge changes from trunk. This way you can see what the state of your code is in relation to trunk. If your code is in conflict with changes to trunk, you can be immediately notified when the extreme integration pipeline fails. You can choose to continue working with your changes in your personal pipeline, but now you know that potentially the work you are doing is going to have more and more merge problems. Eventually you’ll want to integrate the changes from trunk to your personal workspace…
And resolve the problems which would then automatically be integrated back into the extreme integration pipeline. You can then choose to merge your changes to trunk.
One final thing I will throw out. If you want to have really extreme integration, you could have the extreme integration pipeline auto-merge your changes to trunk any time it turns green. Basically, as long as any changes you have made locally pass all testing in the pipeline with the latest from trunk, it automatically becomes a release candidate. It will be almost like you’re making changes to the live system, but with the confidence of an entire pipeline of testing having been performed backing up your changes. Obviously, you’ll want to make sure you have a really good system of testing and monitoring to try something like that.
Why develop locally at all? Why not have a cloud IDE?
Everyone works on modules/code/infrastructure remotely through a web-based front end.
Here’s a really simple mock-up of what I mean.
Create project, personal pipeline red
Create test-driven infrastructure test, personal pipeline red
Create platform, personal pipeline red
Add tomcat package, personal pipeline green, auto-merged to project pipeline
Create jar project
Create class
Enter test, personal pipeline red
Enter code
Add dependency with dynamic range, personal pipeline green, auto-merge to project pipeline
Create war project
Create cucumber test, personal pipeline red
Create JSP file
Add dependency on jar (automatically adds dynamic range), personal pipeline green, auto-merge to project
Add test-driven infrastructure test for war, personal pipeline red
Create deployment script, personal pipeline green, auto-merge to project pipeline
Use it to wire dependencies together.
It has automatic setup and provisioning for all the environments and pipelines needed.
It has continuous delivery built-in; not something users even need to think about.
Some object to working remotely, but I think these are solvable problems. Such as having push button access to hotspots and saving VMs or even whole environments for investigating problems. Also, when I say cloud IDE, this doesn’t preclude the system actually being based locally.
Kodingen.com is doing a little bit of this around code, but I don’t know too much about it and think there’s a lot more to be done as far as controlling infrastructure and pipelines with something like this.
These topics are a bit tacked on, but they are somewhat related things I have been thinking about.
Somewhat like public bug tracking, customers could write public BDD tests. Developers could flesh out the steps and a personal pipeline would let the reporter and product team know when the test is passing.
Compare functionality across products by having public BDD tests run against multiple products.
Public development- give public access to code, but not immune system
That’s all I have for today. I want to thank our sponsors for making this event possible today and I especially want to thank CloudBees for allowing me to come speak to you all and share my insanity. I can spend the rest of this time taking questions now.