Reproducible CI
- Can I do this with Windows servers? Yes.
- Versioning CI tools by having a job that commits the CI configuration to source control every hour (see the sketch after this list).
- The config history plugin for Jenkins gives you a history of the configuration, but only for changes made while the plugin is installed.
- Has someone upgraded the compiler or patched the OS? Has that impacted the build?
- Some clients spin up a VM from an image and set the clock to a known time, leaving nothing to chance.
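
The hourly config-commit job above is easy to script. Below is a minimal sketch, assuming a Jenkins server whose configuration lives as config.xml files under JENKINS_HOME, with a git repository already initialized there and an 'origin' remote to push to; the path and commit message are illustrative, not prescriptive.

#!/usr/bin/env python
"""Hourly job: snapshot the CI configuration into source control.

Minimal sketch. Assumes Jenkins keeps its configuration as config.xml
files under JENKINS_HOME and that a git repo (with an 'origin' remote)
has already been initialized there. Paths are illustrative.
"""
import subprocess
from datetime import datetime, timezone

JENKINS_HOME = "/var/lib/jenkins"  # assumption: default Jenkins home

def git(*args):
    # Run a git command inside the Jenkins home directory.
    return subprocess.run(["git", "-C", JENKINS_HOME, *args],
                          capture_output=True, text=True, check=False)

def snapshot_config():
    # Stage only configuration files, not workspaces or build records.
    git("add", "config.xml", "jobs/*/config.xml")
    # Commit and push only if something actually changed.
    if git("status", "--porcelain").stdout.strip():
        stamp = datetime.now(timezone.utc).isoformat()
        git("commit", "-m", "CI config snapshot " + stamp)
        git("push", "origin", "master")

if __name__ == "__main__":
    snapshot_config()

Run it from cron (or from a Jenkins job) every hour; since a commit only happens when the status is dirty, the history stays readable.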
What is the end goal of reproducible CI?
- Being able to bring the CI environment back if it goes away.
- Some businesses need to be able to reproduce a build of the application from years in the past.
- If you're seeing weird behavior with your application, it may be caused by the CI server, and this way you can see what was happening with the build server for a specific artifact.
- Some clients will want to stay on the same version for years and years, so you may need to recreate a build that hasn't been run in years (see the sketch after this list).
- With large and/or remote teams, being able to see who made what change and why is critical, especially if something goes wrong.
- With the CI configuration under version control, you can promote changes to your pipeline just as you promote changes to your application.
- If you can have a reproducible build system, developers can prove to themselves that what they are going to add won't break the build. No "it works on my machine" excuses.
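
To make the "recreate a years-old build" goal concrete: if the pipeline configuration is versioned and tagged alongside the source, reproducing a historical build is two checkouts and a build. A minimal sketch follows, assuming each release tags both repositories with the same tag; the repository URLs, the tag, and the build entry point are illustrative assumptions.

#!/usr/bin/env python
"""Rebuild a historical release from pinned source + pipeline config.

Minimal sketch. Assumes every release tags the application repo and
the CI-configuration repo with the same tag. Repository URLs, the
tag, and the build entry point are illustrative assumptions.
"""
import subprocess

APP_REPO = "git@example.com:acme/app.git"           # assumption
CONFIG_REPO = "git@example.com:acme/ci-config.git"  # assumption

def checkout(repo, tag, dest):
    # Clone, then pin the working copy to the release tag.
    subprocess.run(["git", "clone", repo, dest], check=True)
    subprocess.run(["git", "-C", dest, "checkout", tag], check=True)

def rebuild(tag):
    checkout(APP_REPO, tag, "app")
    checkout(CONFIG_REPO, tag, "ci-config")
    # The versioned config records exactly how this release was built.
    subprocess.run(["ci-config/build.sh", "app"], check=True)

if __name__ == "__main__":
    rebuild("release-2011-04")  # illustrative historical tag

The point is that the build instructions travel with the code, so the rebuild doesn't depend on whatever the build server happens to look like today.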
Tools:
- Vagrant
- Docker
- Is there a need for a "meta-pipeline" (pipeline for the pipeline) that builds and promotes the build system?
  - Build systems are products, consumed by developers, and testing needs to happen before a change is pushed to the production server.
- We don't hold the pipeline to the same processes we apply to the source code we're building.
- Is the issue that the team is too big?
  - Do we want our team of ten managing a build server, or do we want them delivering value to customers?
  - Do I want 100 teams with their own build server, or do I want a couple of people managing a build server for 1,000 developers?
- Having the small team manage their build system gives them ownership of the build and deployment process.
- Having a single team manage it for a huge group of developers leaves the developers out of touch with the deployment process.
- Developers care more about getting stuff done than about reproducibility: they'll install something to "get it to work" and then forget how it got there.
  - Developers manage their server, but the slaves are rebuilt nightly (see the sketch after this list).
  - All teams have their own build server, but agree on a specific deliverable type and publish it to an artifact repository. There's a master build server that watches the repo and integrates all the components.
- Does anyone care about traceability?
  - Getting the code is easy.
  - What about the tests that run?
  - How was the build server set up that day?
- We can reproduce what we built, but can we reproduce how we deployed it? What if we change all the build scripts? What if we change which tool we used?
- Avoid wildcard (*) includes if you can; they pull in whatever happens to be present, which undermines reproducibility.
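
Here is a minimal sketch of the "slaves are rebuilt nightly" compromise above, assuming the build agents run as Docker containers created from a pinned, versioned image; the image and container names are illustrative assumptions.

#!/usr/bin/env python
"""Nightly rebuild of build agents from a known image.

Minimal sketch of the 'slaves are rebuilt nightly' compromise.
Assumes agents run as Docker containers created from a versioned
image; image and container names are illustrative assumptions.
"""
import subprocess

AGENT_IMAGE = "registry.example.com/build-agent:2013-08"  # assumption
AGENTS = ["agent-1", "agent-2", "agent-3"]                # assumption

def docker(*args):
    subprocess.run(["docker", *args], check=False)

def rebuild_agents():
    for name in AGENTS:
        # Throw away whatever was hand-installed during the day...
        docker("rm", "-f", name)
        # ...and recreate the agent from the pinned image.
        docker("run", "-d", "--name", name, AGENT_IMAGE)

if __name__ == "__main__":
    rebuild_agents()

Anything a developer installs by hand to "get it to work" disappears overnight, which pushes the change back into the versioned image where it belongs.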
- Tool chain from checkin -> deployment to production:
  - SCM
    - git
    - svn
    - hg
    - ClearCase
    - Perforce
  - Build Server
    - "Runs the build": transforms source + dependencies into an artifact. (May also run tests.)
    - Compile, link, package, test
    - Static Analysis
    - Jenkins
    - BuildForge
    - AHP
    - CruiseControl
  - Artifact repo
    - Nexus
    - Artifactory
    - S3
    - SCM
    - Leave it in the build tool
    - Push it to the deployment tool
  - Infrastructure
    - Would follow the same chain to go from "source" to "image."
    - Long-lived physical environments that get set up once
    - Virtual machines that are treated like long-lived physical machines
    - Stand up virtual instances per build
    - A person might build the machine, build the golden image, or write a custom script.
  - Deploy
    - Chef/Puppet
    - Capistrano
    - uDeploy
    - Orc Deploy
    - Shell/Perl/Python/Ruby/etc. scripts
    - A person does everything manually, following a document
  - Database
    - Liquibase
    - Rails Migrations
    - Dacpac
    - Play Evolutions
    - A person updates the database by hand.
  - Test
    - Performance
    - Integration
    - Unit
    - Deployment/Smoke
    - Functional
    - Load
    - Security
    - Installation/Manifest Check (make sure all expected files exist; see the first sketch after this outline)
    - A person does everything manually, following a document
    - Bees with Machine Guns
    - HP LoadRunner/WinRunner/etc.
    - Cucumber
    - Selenium
    - Fitnesse
    - Spock
  - UAT / Business Signoff
    - Blue/Green deployments (see the second sketch after this outline)
    - Canary deployment
    - A person clicks a button
    - Manual signing of a document
  - Deploy to Production?
    - Same as deploy to non-production
    - A person does a partial deployment
- Should a build server deploy?
  - Yes
    - Get it out there so people can use it
  - No
    - Slow tests aren't run
    - Deploy to one environment or many?
      - Yes
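
Two items in the outline above lend themselves to short sketches. First, the installation/manifest check: a minimal sketch, assuming the expected files are listed one relative path per line in a manifest that is versioned with the build scripts; the manifest name and install root are illustrative.

#!/usr/bin/env python
"""Installation/manifest check: make sure all expected files exist.

Minimal sketch. Assumes the expected files are listed one relative
path per line in a versioned manifest file. The manifest name and
install root below are illustrative assumptions.
"""
import os
import sys

def check_manifest(manifest_path, install_root):
    # Return the manifest entries that are missing from the install.
    missing = []
    with open(manifest_path) as f:
        for line in f:
            rel = line.strip()
            if not rel or rel.startswith("#"):
                continue  # skip blank lines and comments
            if not os.path.exists(os.path.join(install_root, rel)):
                missing.append(rel)
    return missing

if __name__ == "__main__":
    missing = check_manifest("manifest.txt", "/opt/acme/app")  # illustrative
    if missing:
        print("Missing files:", *missing, sep="\n  ")
        sys.exit(1)
    print("All expected files present.")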
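
Second, the blue/green switch: a minimal sketch, assuming two installation directories and a "current" symlink that the web server follows; the paths, deploy step, and smoke test are illustrative assumptions, not a definitive implementation.

#!/usr/bin/env python
"""Blue/green switch: deploy to the idle color, then flip traffic.

Minimal sketch. Assumes two install directories ('blue', 'green')
and a 'current' symlink that the web server follows. The paths,
deploy step, and smoke test are illustrative assumptions.
"""
import os
import subprocess

ROOT = "/opt/acme"  # assumption
CURRENT = os.path.join(ROOT, "current")

def idle_color():
    # Whichever color 'current' does not point at is idle.
    active = os.path.basename(os.path.realpath(CURRENT))
    return "green" if active == "blue" else "blue"

def deploy_and_switch(artifact):
    target = os.path.join(ROOT, idle_color())
    # Deploy the new artifact to the idle environment.
    subprocess.run(["./deploy.sh", artifact, target], check=True)  # illustrative
    # Smoke test the idle environment before it takes traffic.
    subprocess.run(["./smoke_test.sh", target], check=True)  # illustrative
    # Flip traffic atomically by replacing the symlink.
    tmp = CURRENT + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(target, tmp)
    os.replace(tmp, CURRENT)  # atomic rename over the old link

if __name__ == "__main__":
    deploy_and_switch("app-1.2.3.tar.gz")  # illustrative artifact

If the smoke test fails, the switch never happens and the active environment keeps serving traffic; rolling back after a switch is just pointing the symlink at the other color.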
- Conclusion: script all the things. Version all the things. You can do it on Windows.
  - Treat the systems the same way you are (should be?) treating the software going to the users.
  - It's worth it for any size team: one person, ten people, a thousand people.