Sean's i/o Stream

Musings-as-a-Service

Why do I need UrbanCode Deploy if I am already doing continuous integration with xxx

  • February 28, 2016
  •   category: ibm
  • tags: deploy, anti-patterns

I was speaking to a number of folks this past week about what place a solution like UrbanCode Deploy, or a deployment tool in general, has in a Jenkins, Maven, TFS world. The not so subtle implication here is that Ant, Gradle, Rake, Bash, PowerShell, and MSBuild can already do all this stuff, so why pay for another tool?

In short, the answer is that these tools are part of a solution, but they do not do deployment well; the question is akin to asking why you would need a purpose-built build tool in the first place. And the not so subtle implication I am making is that yes, it is possible to wire together a deployment solution from more basic tools; the problem is that it tends to become a hack pretty quickly. This is by far the most common situation I walk into when I am out with clients.

Client: "We had a really good guy/gal that put together our <insert acronym here> tool, but it was not until they left that we realized we did not understand how it worked or how fragile it was."
Me: "Ok, so how does it work?"
Client: "Well first, we copy a build from this directory, to a directory over here. Then we open up file X and modify property/line Y. Then we can call all the automation scripts to deploy wherever we have just hard-coded it to."
Me: "Ok, so you are not versioning any of your deployed configuration changes. Also, how do you know who did it, when, and where?"
Client: "Oh, that's easy, we just go to that server and read the logs when something goes wrong, and only a few of us can even use that id, so we can figure it out."
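The fragile process in this exchange can be sketched in a few lines. This is a hypothetical illustration, not any client's actual scripts, and the names and fields are mine:

```python
import getpass
import json
import time

# Hypothetical sketch of the anti-pattern above: the target is hard-coded by
# editing a properties file by hand, and nothing records who deployed what.
def hardcoded_deploy(properties):
    properties["target.host"] = "prod-app-01"  # hand-edited, never versioned
    # ... copy the build directory, then run the scripts against target.host ...
    return properties  # no record of who ran this, when, or with which build

# The same action with a minimal audit record: the kind of who/what/when/where
# tracking a purpose-built deployment tool keeps for every deployment.
def audited_deploy(build_id, target, audit_log):
    entry = {
        "build": build_id,          # exactly which artifact was deployed
        "target": target,           # which environment it went to
        "who": getpass.getuser(),   # the individual user, not a shared id
        "when": time.time(),        # a timestamp, not a log file on one server
    }
    audit_log.append(json.dumps(entry))  # central, queryable history
    return entry
```

The second function is the entire difference: when the audit record is created as part of the deployment itself, "who did it, when, and where" is a query, not a forensic exercise against a server log behind a shared id.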

In this quick little dialogue the astute reader will see that I don’t usually need to dig too deep to find audit and compliance issues. This is not everyone, of course, but in 2014-2016 it is a pretty fair representation of the level of maturity I have been seeing across most clients here in North America. Clearly there are better and worse examples of this, but the point remains: many of you have “deployment” solutions that consist of a thousand lines of Ant/shell/PowerShell commands bolted onto the end of your build process, and you have to jump through proverbial hoops to make them “work” as desired.

While this is not limited to Microsoft solutions, they provide a good example of a vendor that long avoided the deployment space entirely. I am not bashing Microsoft here: the Visual Studio suite has been one of the most productive IDEs for years, and the latest two iterations of TFS have turned a 1990s-era version control server into a modern-ish platform, finally moving into the 21st century; the integrated change management added along the way is a huge value-add. The inclusion of a build system (TFS 2010) is also a bit late, but welcome for any clients that had not already solved this problem with CruiseControl or Jenkins years earlier, when they were desperately asking for it. Acquisitions have since added testing and release management capabilities, but unfortunately these were built on the back of a build process that leaves something to be desired, and they still have not adopted ITIL best practices for artifact management in a good way.

Builds that do Deployments are not a solution

The next logical progression is for the build process to become more complex and begin to encompass the “deploy” process as well. I think this is a perfectly reasonable first step, but understand that it is unlikely to be a complete solution unless you have truly reached the end state of full-stack deployments, in which case you are on the extremely advanced end of the spectrum and already understand these concepts far better than any insights I am adding here :) For the rest of us in the wild, we make the progression by adding scripts, new tasks, or parameters to the build to make it “deploy” to the environments. The most common example of this pattern I see is Jenkins with the Pipeline plugin, which makes a visual representation possible and improves the clarity of what is happening ten-fold over a standard Jenkins build process. The undesirable side effect, for me, is the misunderstanding that this is a successful and sustainable deployment solution for anything more than a single application at a time.
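To make the objection concrete, here is a simplified, hypothetical sketch of the coupling problem. When deployment is just the last stage of a build job, the only thing you can deploy is whatever you just built; a deployment tool instead keeps an inventory of artifacts and of what is running in each environment, so any prior version can be promoted without a rebuild. All names here are illustrative:

```python
# When "deploy" is the tail end of the build job, redeploying last month's
# version to QA means re-running the whole build from that source revision.
def build_then_deploy(source_rev, env):
    artifact = f"app-{source_rev}"          # build stage produces the artifact
    return f"deployed {artifact} to {env}"  # deploy stage: only this artifact

# A deployment tool's view: artifacts are first-class, environments track
# exactly which version they are running, and promotion needs no rebuild.
class DeploymentInventory:
    def __init__(self):
        self.artifacts = []     # every build ever registered
        self.environments = {}  # env name -> currently deployed artifact

    def register(self, artifact):
        self.artifacts.append(artifact)

    def deploy(self, artifact, env):
        if artifact not in self.artifacts:
            raise ValueError("unknown artifact")
        self.environments[env] = artifact   # promote any version, old or new
        return f"deployed {artifact} to {env}"
```

The inventory is what lets you answer "what is running in prod right now?" and roll an environment back to a known version, which is exactly the part a build-centric pipeline never models.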

What is your goal state?

To understand my objections a bit more, let’s work backwards from what makes a successful solution. We can start at the nirvana of a continuous delivery pipeline: all the machinery needed to take a change of source or config into a production environment. The mechanical process is normally only a small part of the solution; the far more interesting pieces are the business process, audit compliance, visibility, and governance controls that allow the automation to do the heavy lifting for you when the business is ready. If these seem like lofty goals, for most organizations they are, but we need to set the target at the “best practice”: there are lots of companies that are wildly successful at driving innovation and can deliver multiple changes to production daily. Executing at this level does not require working 100 hours a week non-stop; it requires removing waste and improving processes.

Now with this more holistic view in mind, if we look at a tool built on a “build” process, we tend to fail miserably, because “build” tools are good at, well, building. They tend to be crap at understanding the basic operational concepts required to meet anything but the most trivial governance policies. Hence we end up with one build process and a second “build/deploy” process, run by development versus operations; this is a classic anti-pattern and I only see the divide getting wider. Development keeps getting better at continuous integration while ops struggles to keep up, and more and more organizations force the development process down the ops team’s throat because ops is too slow! The end result is that development drives its agenda and tools left to right, versus the actual intent of shifting operational processes and compliance left, earlier into the development process. The former actually hurts operational readiness and requires the addition of more manual intervention, whereas the latter can help remove organizational hurdles and manual checkpoints as you start to validate earlier and earlier in the process. Granted, shifting operational processes left is harder, and due to many organizational decisions, development teams are inherently staffed and funded based on the size of the project, versus the BAU type of support budget the operations team is normally constrained to. The team with more resources (time, money, effort, people, organizational impact) tends to dictate what is going to happen, for better or for worse.

Conclusion

Less a conclusion and more the end of a rant here, but understand that deployment and release are different from automated build. Yes, you can mechanically do deployment and release in simplistic environments with a build server, but you will not be producing sustainable results. Investing in an automation tool is a good start, but you really need to think about a release management practice, not just how to move files around and run scripts.

Reference:
  • TFS - https://en.wikipedia.org/wiki/Team_Foundation_Server
  • Deploying a specific build with TFS - http://www.asp.net/web-forms/overview/deployment/configuring-team-foundation-server-for-web-deployment/deploying-a-specific-build