Reproducible CI
From CitconWiki
- Can I do this with Windows servers? Yes.
- Versioning CI tools by having a job that commits the CI configuration to source control every hour.
- The config history plugin for Jenkins gives you a history of the configuration, but only if you've installed the plugin.
- Has someone upgraded the compiler or patched the OS? Has that impacted the build?
- Some clients spin up a VM from an image and set the clock to a known time, leaving nothing to chance.
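The "commit the CI configuration to source control every hour" idea can be sketched as a small script. This is a minimal sketch, not any particular server's mechanism: the directory name, file names, and commit message are all assumptions; point `CONFIG_DIR` at your server's real config directory (e.g. `$JENKINS_HOME`).

```shell
#!/bin/sh
# Sketch: snapshot the CI server's config directory into git on a schedule.
# All paths and names here are illustrative assumptions.
set -eu

CONFIG_DIR=ci-config-history

mkdir -p "$CONFIG_DIR"

# Initialise the history repo on first run.
if [ ! -d "$CONFIG_DIR/.git" ]; then
  git -C "$CONFIG_DIR" init -q
  git -C "$CONFIG_DIR" config user.email ci@example.com
  git -C "$CONFIG_DIR" config user.name "CI snapshot job"
fi

# Stand-in for the real config files (config.xml, job definitions, ...).
echo "buildTool=make-4.3" > "$CONFIG_DIR/tool-versions.properties"

# Commit only when something actually changed, so hourly runs stay quiet.
git -C "$CONFIG_DIR" add -A
git -C "$CONFIG_DIR" diff --cached --quiet || \
  git -C "$CONFIG_DIR" commit -q -m "CI config snapshot $(date -u +%FT%TZ)"
```

Scheduled hourly from cron (`0 * * * * /path/to/snapshot-ci-config.sh`), this gives you a diffable answer to "what changed on the build server last Tuesday?".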
What is the end goal of reproducible CI?
- Being able to bring back the CI environment if it goes away.
- Some businesses need to be able to reproduce the application years in the past.
- If you're seeing weird behavior with your application, it may be caused by the CI server, and this way you can see what was happening with the build server for a specific artifact.
- Some clients will want to stay on the same version for years and years, so you may need to recreate a build that hasn't been run in years.
- With large and/or remote teams, being able to see who made what change and why is critical, especially if something goes wrong.
- With CI/versioning, you can promote changes to your pipeline.
- If you can have a reproducible build system, developers can prove to themselves that what they are going to add won't break the build. No "it works on my machine" excuses.
Tools:
- Vagrant
- Docker
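With Docker, the build environment itself becomes a versioned file. A minimal sketch, assuming a Debian base and apt-pinned tools (the image tag and package versions below are illustrative, not prescriptive):

```shell
#!/bin/sh
# Sketch: describe the build environment as code so it can be rebuilt later.
# The base image tag and package versions are illustrative assumptions.
set -eu

mkdir -p build-env
cat > build-env/Dockerfile <<'EOF'
# Pin the base image to an exact tag (better still: a digest) so that
# "latest" drift can never silently change the toolchain.
FROM debian:12.5

# Pin tool versions explicitly; an unpinned "apt-get install gcc" gives
# you whatever the mirror has today, which answers "has someone upgraded
# the compiler?" with a shrug instead of a diff.
RUN apt-get update && apt-get install -y --no-install-recommends \
        gcc=4:12.2.0-3 make=4.3-4.1 \
    && rm -rf /var/lib/apt/lists/*
EOF

echo "Wrote build-env/Dockerfile"
```

Committing `build-env/Dockerfile` next to the source means `docker build -t build-agent build-env` can recreate the agent years later from the same revision.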
- Is there a need for a "meta-pipeline" (pipeline for the pipeline) that builds and promotes the build system?
- Build systems are products, consumed by developers, and testing needs to happen before the change is pushed to prod server.
- We don't hold the pipeline to the same processes we hold for the source code we're building.
- Is the issue that the team is too big?
- Do we want our team of ten managing a build server, or do we want them delivering value to customers?
- Do I want 100 teams with their own build server, or do I want a couple of people managing a build server for 1,000 developers?
- Having the small team manage its own build system gives it ownership of the build and deployment process.
- Having a single team manage it for a huge group of developers leaves those developers out of touch with the deployment process.
- Developers care more about getting stuff done than about reproducibility -- they'll install something to "get it to work" and then forget how it got there.
- Developers manage their own build server, but the slaves are rebuilt nightly.
- All teams have their own build server, but agree on a specific deliverable type and publish it to an artifact repository. There's a master build server that watches the repo and does the integration of all the components.
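The "publish an agreed deliverable type to an artifact repository" pattern can be sketched with local directories standing in for a real repository such as Nexus or Artifactory (the group path, component name, and version are assumptions):

```shell
#!/bin/sh
# Sketch: a team build publishes its versioned deliverable to a shared
# repository layout; the integration build watches that layout.
# Local directories stand in for a real artifact repository here.
set -eu

GROUP=com/example/teamA
VERSION=1.4.2
REPO=artifact-repo

# The team build produces its agreed deliverable type...
echo "binary payload" > component-a.jar

# ...and publishes it under a group/name/version path, so the master
# build can pick up exactly component-a 1.4.2, today or in five years.
DEST="$REPO/$GROUP/component-a/$VERSION"
mkdir -p "$DEST"
cp component-a.jar "$DEST/"

find "$REPO" -name '*.jar'
```

The version in the path is what makes old integrations reproducible: the master build names exact versions instead of "whatever each team built most recently".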
- Does anyone care about traceability?
- Getting the code is easy.
- What about the tests that run?
- How was the build server set up that day?
- We can reproduce what we built, can we reproduce how we deployed it? If we change all the build scripts? If we change what tool we used?
- Don’t do * includes if you can avoid it.
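For traceability, one low-cost habit is writing a build-info file alongside every artifact, recording the commit and the toolchain actually present on the agent. A sketch, with the field names and file name as assumptions:

```shell
#!/bin/sh
# Sketch: record enough build-server state with each artifact to answer
# "how was the build server set up that day?". Field names are assumptions.
set -eu

BUILD_INFO=build-info.txt
{
  echo "date=$(date -u +%FT%TZ)"
  echo "host=$(uname -srm)"
  # Commit being built; degrades gracefully outside a git checkout.
  echo "commit=$(git rev-parse HEAD 2>/dev/null || echo unknown)"
  # Compiler version actually installed on the agent, if any.
  echo "cc=$(cc --version 2>/dev/null | head -n 1)"
} > "$BUILD_INFO"

cat "$BUILD_INFO"
```

Shipping `build-info.txt` inside the artifact (or publishing it next to it) means a weird production behavior years later can be traced back to the exact commit and compiler that produced the binary.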
- Tool chain from checkin -> deployment to production
- SCM
- git
- svn
- hg
- ClearCase
- Perforce
- Build Server
- “Runs the build”
- Transform source + dependencies into an artifact. (May also run tests.)
- Compile, link, package, test
- Static Analysis
- Jenkins
- BuildForge
- AHP
- CruiseControl
- Artifact repo
- Nexus
- Artifactory
- S3
- SCM
- Leave it in the build tool
- Push it to deployment tool
- Infrastructure
- Would follow the same chain to go from “source” to “image.”
- Long lived physical environments that get set up once
- Virtual machines that are treated like long-lived physical machines.
- Stand up virtual instances per build
- A person might build the machine, build the golden image, build a custom script.
- Deploy
- Chef/Puppet
- Capistrano
- uDeploy
- Orc Deploy
- Shell/Perl/Python/Ruby/etc scripts
- A person does everything manually following a document
- Database
- Liquibase
- Rails Migrations
- Dacpac
- Play Evolution
- A person updates the database manually.
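The common idea behind Liquibase, Rails Migrations, and the rest is ordered, versioned migration files applied exactly once each. A minimal sketch of that idea, with a plain text file standing in for the changelog table a real tool keeps in the database (file names and SQL are illustrative):

```shell
#!/bin/sh
# Sketch: database changes as ordered, versioned migration files.
# A tracking file stands in for a real tool's changelog table.
set -eu

mkdir -p migrations
cat > migrations/001_create_users.sql <<'EOF'
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
EOF
cat > migrations/002_add_email.sql <<'EOF'
ALTER TABLE users ADD COLUMN email TEXT;
EOF

APPLIED=applied-migrations.txt
touch "$APPLIED"

# Apply each migration exactly once, in filename order; re-running the
# script is a no-op, which is what makes the schema reproducible.
for m in migrations/*.sql; do
  if ! grep -qx "$m" "$APPLIED"; then
    echo "applying $m"      # a real runner would feed this file to the DB
    echo "$m" >> "$APPLIED"
  fi
done

cat "$APPLIED"
```

Because the migrations live in source control with the application, "reproduce the application as it was years ago" includes reproducing its schema at that revision.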
- Test
- Performance
- Integration
- Unit
- Deployment/Smoke
- Functional
- Load
- Security
- Installation/Manifest Check (make sure all expected files exist)
- A person does everything manually following a document
- Bees with Machine Guns
- HP LoadRunner/WinRunner/etc
- Cucumber
- Selenium
- Fitnesse
- Spock
- UAT / Business Signoff
- Blue / Green deployments
- Canary deployment
- Person clicks a button
- Manual signing of document
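A blue/green switch is often implemented as an atomic symlink flip between two deployed release directories. A sketch, assuming a `releases/` layout and a `current` link (both names are assumptions about the deploy layout, not a standard):

```shell
#!/bin/sh
# Sketch: blue/green deployment as an atomic symlink flip.
# Directory names and the "current" link are layout assumptions.
set -eu

mkdir -p releases/blue releases/green
echo "app v1" > releases/blue/app.txt
echo "app v2" > releases/green/app.txt

# Blue is live; the new version is deployed to the idle colour and can be
# tested there before any traffic sees it.
ln -sfn releases/blue current

# Flip. `ln -sfn` replaces the link in one step, so there is no moment
# where "current" points nowhere, and flipping back is the same command.
ln -sfn releases/green current

cat current/app.txt
```

Rollback is the same one-step flip in the other direction, which is what makes the signoff/deploy step itself reproducible rather than a hand-run sequence.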
- Deploy to Production?
- Same as deploy to non-production
- A person does a partial deployment
- Should a build server deploy?
- Yes
- Get it out there so people can use it
- No
- Slow tests aren’t run
- Deploy to one environment or many?
- Yes
- Conclusion: script all the things. Version all the things. You can do it on Windows.
- Treat the systems the same way you are (should be?) treating the software going to the users.
- It’s worth it for any size team: one person, ten people, a thousand people.