Why you should deploy on Friday afternoon

28 May 2018

I originally posted this at Dev.to where lots of people agreed and disagreed.

"Don't deploy on Friday afternoons!"

This expression is taken as programmer wisdom but I hate it. I'm going to try and kill it, with words and experience.

The motivation behind it is sound. I don't want to spend my Friday nights debugging a production problem either.

To me the expression smacks of unprofessionalism. Software development as an industry has a poor reputation, and phrases like this do not help.

If you went to an ER on a Friday afternoon and were turned away because the doctors didn't trust their own tools and know-how, how would you feel?

If we want people to take our craft seriously we need to own it, and not give the impression that we don't understand the systems we make well enough to change them at the end of the week.

Why a lot of people don't want to deploy on Fridays

Continuous Delivery (CD)

I have worked on teams that have deployed new versions of various services in a distributed system multiple times at 4:30pm without breaking a sweat.

Why? Because deployment is fully automated and is a non-event. Here is the groundbreaking process: you merge your code into the main branch, the build runs the tests, and if everything is green the new version is deployed. That's it.
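To make that concrete, here is a minimal sketch of the kind of script a pipeline might run on every merge. The go test invocation and deploy.sh are hypothetical stand-ins for whatever builds, verifies and ships your particular system.

    // pipeline.go: a minimal sketch of an automated "test then ship" step.
    // deploy.sh is a hypothetical stand-in for whatever pushes your
    // artefact to production.
    package main

    import (
        "log"
        "os/exec"
    )

    // run executes a command and fails the pipeline loudly if it errors.
    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            log.Fatalf("%s failed: %v\n%s", name, err, out)
        }
        log.Printf("%s ok", name)
    }

    func main() {
        run("go", "test", "./...") // the build must be green...
        run("./deploy.sh")         // ...and then deploying is a non-event
    }

When the whole release is this dull, doing one at 4:30pm on a Friday carries no more drama than doing one on a Tuesday morning.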

Not so long ago it was considered normal for there to be releases every 6 months, or even just one at the end of a project.

The forward thinkers in that age saw problems with this: releases were huge and risky, and feedback from real users arrived months after the work was done.

So the industry as a whole worked on lots of tooling, techniques and best practices to allow us to release software far quicker.

Recognising that releasing often reduces risk is generally accepted nowadays but teams still often settle on weekly or fortnightly releases; often matching the cadence of their sprints.

What are the problems with weekly/fortnightly releases? Changes still pile up into batches, the release itself is still an event that needs coordinating, and feedback from real users still lags the work by a week or two.

With CD we recognise that we can go further, deploying new software to live every time the build is green. This has some amazing benefits: every release is small and easy to reason about, feedback from real users arrives within minutes of finishing the work, and the risk carried by any individual deploy becomes tiny.

But what if you break things?

Often people say of CD:

"Yeah, it's nice, but what if it breaks? We should have a QA check things over."

Here's the thing: no process in the world prevents bugs. You will ship broken code. What's really important is how quickly you can detect and recover from it. Hoping manual testing will catch everything is wishful thinking.
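Detection can be as simple as a smoke check your pipeline runs straight after a deploy. A minimal sketch, assuming your service exposes some kind of /health endpoint (the endpoint and URL are placeholders, not a prescription):

    // smoke.go: poll the newly deployed service until it reports healthy,
    // or fail the pipeline so the team finds out in minutes, not days.
    package main

    import (
        "fmt"
        "net/http"
        "os"
        "time"
    )

    // waitHealthy polls url until it returns 200 or the deadline passes.
    func waitHealthy(url string, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("%s was not healthy within %v", url, deadline)
    }

    func main() {
        // the URL here is a hypothetical placeholder
        if err := waitHealthy("https://example.com/health", time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }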

How to CD on a new project

It is much easier to do CD on a new project since you can start small and evolve.

Generally your work should be centred on delivering the most valuable user journeys first, so this is an excellent chance to practise ensuring a feature works without any human checking anything.
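For instance, if signing up is your most valuable journey, a sketch of an automated check might look like this, using Go's standard testing package. The URL and journey are hypothetical, and in practice you would exercise the journey properly rather than just checking a status code.

    // signup_test.go: a crude automated check for a key user journey.
    package acceptance

    import (
        "net/http"
        "testing"
    )

    func TestUserCanReachSignup(t *testing.T) {
        resp, err := http.Get("https://staging.example.com/signup")
        if err != nil {
            t.Fatalf("could not reach signup page: %v", err)
        }
        defer resp.Body.Close()

        if resp.StatusCode != http.StatusOK {
            t.Errorf("expected 200 from signup page, got %d", resp.StatusCode)
        }
    }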

For each subsequent story ask yourself: how will we know this works in production without a human checking it? What automated tests, monitoring or alerting do we need before we can ship it?

How to CD on an existing project

Peel away at the manual process

CD up to staging

Some companies have many environments in their delivery pipeline. A good first step is to automatically ship all the way up to the environment before live. A better step is to remove as many of those environments as you can. It's OK to have some kind of "dev" environment to experiment with, but ask yourself why you can't just test these things locally in the first place.

Identify a sub-system you could work with as a starting point

If you're working with a distributed system you might be able to identify a system which is easier to CD than the rest. Start with that because it'll give your team some insights into the new way of working and can help you begin to break the cultural barriers.

CD is a cultural issue as much as a technical one

Roles and responsibility

Often a product owner or project manager wants to be the one who is in charge of releasing.

There are circumstances where exposing features to users should be controlled by a non-technical member of your team, but this can be managed with feature toggles.
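A feature toggle can be as boring as a configuration lookup. A minimal sketch, assuming a hypothetical FEATURE_NEW_CHECKOUT environment variable that a product owner can have flipped without anyone deploying anything:

    // toggle.go: ship the code dark, expose it when the business says so.
    package main

    import (
        "fmt"
        "os"
    )

    // newCheckoutEnabled reads a hypothetical toggle; real systems often
    // use a config service so toggles can change without a restart.
    func newCheckoutEnabled() bool {
        return os.Getenv("FEATURE_NEW_CHECKOUT") == "true"
    }

    func main() {
        if newCheckoutEnabled() {
            fmt.Println("rendering the new checkout")
        } else {
            fmt.Println("rendering the old checkout")
        }
    }

This separates deploying code, a technical act, from releasing a feature, a business decision.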

But the copying of code from one computer to another is the responsibility of the developers on the team. After all, we are the ones responsible for making sure the system works. It is a technical concern, not a business one.

What do QAs do now?

CD is actually liberating for QAs. Rather than being a gate that every release must squeeze through, they are freed to do exploratory testing, improve the team's automated tests and hunt for the risks no checklist would catch.

Re-evaluate your tolerance for defects

Lots of companies think they cannot have any defects and will spend a lot of time and effort on complicated, time-consuming (and therefore expensive) processes to try and stop them.

But think about the cost of all this. If you push a change to production that isn't covered by tests, perhaps a CSS change, is it really catastrophic if there's a small visual fault in some browsers?

Maybe it is, in which case there are techniques, such as visual regression testing, to test specifically for this too.
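For a flavour of what such a technique looks like, here is a deliberately crude sketch of a visual regression check: compare a fresh screenshot against a known-good baseline and report how many pixels changed. The file names are placeholders, and dedicated visual regression tools do this far more intelligently.

    // visualdiff.go: compare two screenshots pixel by pixel.
    package main

    import (
        "fmt"
        "image"
        _ "image/png" // register the PNG decoder
        "log"
        "os"
    )

    func load(path string) image.Image {
        f, err := os.Open(path)
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        img, _, err := image.Decode(f)
        if err != nil {
            log.Fatal(err)
        }
        return img
    }

    func main() {
        baseline, current := load("baseline.png"), load("current.png")
        b := baseline.Bounds()
        if b != current.Bounds() {
            log.Fatal("screenshots are different sizes")
        }

        differing := 0
        for y := b.Min.Y; y < b.Max.Y; y++ {
            for x := b.Min.X; x < b.Max.X; x++ {
                if baseline.At(x, y) != current.At(x, y) {
                    differing++
                }
            }
        }
        fmt.Printf("%d pixels differ from the baseline\n", differing)
    }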

Recovery

Each release you do with CD will have the following qualities: it is small, the changes are fresh in the team's mind, and the difference between what was live before and what is live now is a handful of commits rather than a fortnight of work.

So in my experience, fixing anything that falls through the cracks is easy. It's much less complicated than trying to look through two weeks' worth of git history.

I would recommend in most cases not rolling back (unless it's really bad), but just fixing the issue and releasing it. Rollback is sometimes not an option anyway (e.g. database migrations), so the fact that your system is geared towards releasing quickly is actually a real strength of CD.

Other quick tips

Invest in monitoring and alerting so you detect problems before your users report them. Use feature toggles to separate deploying code from releasing features. Keep your user stories small, testable and independently releasable.

Wrapping up

This has been a small introduction to CD; it's a huge topic with plenty of resources to investigate.

Continuous delivery is a big effort both technically and culturally but it pays off massively.

I have worked on distributed systems with over 30 separately deployable components, all doing CD, and it was less stressful than another project I've worked on that had just a handful of systems but a ton of process and ceremony.

Being able to release software when it's written puts a higher emphasis on quality and reduces risk for the team. It also forces you into using agile best practices like testable, small, independently releasable user stories.

Maybe most importantly, it demands that you understand your system and that you're not deploying to production and crossing your fingers. That feels more professional to me.