Chapter 11: Additional Services - Jack and the Beanstalk! (DevOps, CI/CD)




Chapter 11 has a mixture of services in it. Today I was asked about CI/CD architecture for the cloud, and I struggled to relate what I know from past experience, AWS, and the "architecture" question, so I decided it was time to see what the "book" knows about DevOps. Not much, as it turns out: just a few pages devoted to DevOps, covering AWS OpsWorks, AWS CloudFormation and AWS Elastic Beanstalk.

It's not so much that I haven't come across CI/CD, but that it's just "everywhere", so the question of architecture is interesting. Then there's DevOps: the same thing or different?

A few years ago we started doing some work for a government department to see if we could integrate automatic performance modelling from APM data into their DevOps pipeline. The guys are pretty sophisticated in terms of their tools: they have an in-house APM solution in use from development, test and load testing through to production, and they use testing tools etc. The catch is that they wanted to increase the frequency of code updates and deploy more often, but the testing still takes the same length of time. We were trying to speed things up by supplementing formal testing with APM+modelling as early as possible in the DevOps cycle, ideally as close to the developers as possible, e.g. just after unit performance testing. We can then combine the APM data from unit tests with the baseline performance model built from current production data to get early warnings about SLA violations under production load with the new+old code, capacity, etc.

However, how does a DevOps pipeline really work in practice?   A useful paper I found last year is:

Continuous Delivery: Overcoming adoption challenges, by Lianping Chen 

http://www.sciencedirect.com/science/article/pii/S0164121217300353

We used this paper as a basis to motivate and explain our attempt to integrate modelling with DevOps. My simple explanation is that whenever you get a code "commit" from a developer you start the automatic pipeline running: compilation, unit tests, integration builds, integration tests, functional tests, performance tests, and then, if everything passes, deploy to production (or something more nuanced like Blue/Green/Canary etc). If something breaks you have to go back until you find and fix the problem, then retry. If something breaks in production you have to be able to roll back. If you are using Canary testing then you need automated APM compared with baselines, and an automatic roll back if there's an issue before rolling out further.
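The stage-gated flow above can be sketched in a few lines. This is a toy simulation, not any real CI tool's API; the stage names and the fake commit id are invented for illustration:

```python
# Hypothetical sketch of a stage-gated CI/CD pipeline: each stage must
# pass before the next runs, and any failure stops the pipeline short of
# production so you can go back, fix, and retry.

STAGES = ["compile", "unit tests", "integration build",
          "integration tests", "functional tests", "performance tests"]

def run_pipeline(commit, stage_results):
    """Run stages in order; return (deployed_to_production, failed_stage)."""
    for stage in STAGES:
        if not stage_results.get(stage, False):
            return (False, stage)   # broke here: find and fix, then retry
    return (True, None)             # all green: deploy to production

# A commit that fails performance testing never reaches production.
results = {s: True for s in STAGES}
results["performance tests"] = False
print(run_pipeline("commit-abc123", results))
```

The interesting architectural question is what each gate actually checks; the APM+modelling idea above amounts to adding an extra predicate at the "unit tests" gate.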

There are therefore questions of what counts as a commit, how often to start the pipeline (really every commit, or every few commits, or every 30 minutes max, etc), how many pipelines to have running concurrently, and what to do with them all at the end.

The historical motivation behind CI is that if you wait too long for too many branched code changes from too many developers you have a great big mess. It's easier to handle a few changes over a short time than lots over a long time (in theory some code may not even be compatible eventually).

So it comes down to automation (tests, integration, compilation, deployment, etc), consistency (knowing what you have, in terms of h/w and s/w) and concurrent resource availability. 

What components are typically used for code version control and integration, building, testing, and deployment? Cloud infrastructure deployment?

AWS OpsWorks is for configuration management of applications. It's built on Chef (3 Michelin stars I hope). It's based on orchestration to achieve goal states (TODO check this).

AWS CloudFormation is for deploying AWS infrastructure resources and allows version control for infrastructure - cool. It's declarative, based on JSON templates.
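To make "declarative JSON templates" concrete, here's a minimal CloudFormation-shaped template built as a Python dict and serialized. The bucket and its name are made-up examples, and a real template would also support Parameters, Outputs, etc.:

```python
import json

# A tiny CloudFormation-style template: you declare WHAT resources you
# want (here, one hypothetical S3 bucket) and the service works out how
# to create them. The bucket name is illustrative only.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Declarative example: one S3 bucket.",
    "Resources": {
        "MyBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-artifacts-bucket"},
        }
    },
}
print(json.dumps(template, indent=2))
```

Because the template is just text, it can be checked into version control like any other code, which is the "version control for infrastructure" point above.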

So simply, CloudFormation is for setting up your AWS infrastructure environment, and OpsWorks is for deploying and managing the systems/applications running inside the infrastructure.

A silly question: How are the applications (from OpsWorks) mapped to the infrastructure (from CloudFormation)? Do you need to use both or is one sufficient?

This all sounds complicated; can you visualise any of it?
Looks like there are visual editors, including Visual Studio and Eclipse plugins (yeah).
And this article suggests a few options, including AWS CloudFormation Designer.

Oh, I forgot one: AWS Elastic Beanstalk (plant a bean but watch for the giant, I guess).
This is even more automatic: it's a PaaS for deploying (some) applications, automatically managing the infrastructure resources for them. I read somewhere that it uses CloudFormation templates (TODO ?)

This presentation introduces even more related services and possibly explains the relationships better:

A slide illustrating all the CI/CD services. This implies that Beanstalk and OpsWorks cover the same areas (deployment, provisioning and monitoring), CloudFormation is for infrastructure provisioning only, CodeDeploy (what's that?) is for code deployment, and CodeCommit and CodePipeline sit earlier in the pipeline.


Looks to me that I've only scratched the surface of DevOps and CI/CD on AWS.
I'd better get that AXE ready in case of giants!!!

PS
I'd forgotten about a particularly relevant book by some ex-colleagues from NICTA (now Data61).
What Software Architects Need to Know About DevOps
Read it or grow moss.

There are two excerpts from the book available at:
http://www.informit.com/articles/article.aspx?p=2350702
http://www.informit.com/articles/article.aspx?p=2424801

There are also possibly two aspects to the "DevOps architecture" question: how does DevOps interact with "architecture" (i.e. what architectural approaches could enable DevOps), and what does the "architecture of DevOps" look like? W.r.t. the 1st meaning, the use of a microservices architecture can be an enabler for faster DevOps, as a commit becomes an architectural unit of work (?), perhaps a better fit between development changes and DevOps requirements. I've actually realised I don't understand what a "commit" is in the DevOps sense. Maybe this needs better definition?

P2S
Well squeeze a lemon on my head! (Hitchhiker's Guide: Zaphod's lemon-zest thinking cap, or "Zaphod Thinks!")

Zaphod Thinks! (briefly, lemon powered brain)



Also couldn't resist this (how to stop a Dalek in its tracks)



Immutability has been a topic in the software engineering world for a while, but I must have been under a rock. Sure, I remember immutable objects (not mutable), soldered electronic circuits, and Kinesis, which has an append-only immutable data stream. Actually, I DON'T remember immutable objects from Java: it turns out Java doesn't have an immutable-object keyword, only a pattern, for example. Some other languages do have immutability. But now I (think I) do remember immutability from functional programming languages (these appear to have entered the mainstream from academia more recently).

What I hadn't realised was that immutability is potentially a core, fundamental, knock-your-socks-off architectural, DevOps and cloud principle. As usual, immutability has been around since the 1st computer: anything that is impossible to change (and only grows by accretion) is immutable. Actually, the dictionary definition of immutable is unchanging or unable to change. I think what we mean in computer science now is that change is happening ALL THE TIME, but you don't even try to change anything that you have; you just add new stuff (see accretion). I.e. the current stuff becomes old, and the new stuff becomes, well, the old stuff plus the new stuff.
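The "change by accretion" idea can be sketched as an append-only store: updates never modify existing versions, they only add new ones. This is purely illustrative and not any particular product's API:

```python
# Sketch of change-by-accretion: an append-only version store. Old
# versions are never modified or deleted; "updating" just appends a new
# version, and "current" simply means the latest one.

class AppendOnlyStore:
    def __init__(self):
        self._versions = []

    def put(self, value):
        """Append a new version; return its version id."""
        self._versions.append(value)
        return len(self._versions) - 1

    def get(self, version=-1):
        """Read any version; default is the latest."""
        return self._versions[version]

store = AppendOnlyStore()
v_blue = store.put({"config": "blue"})
store.put({"config": "green"})
assert store.get(v_blue) == {"config": "blue"}   # old state still intact
assert store.get() == {"config": "green"}        # "current" = latest accretion
```

Notice there is no `update()` or `delete()` at all: the history only grows, which is exactly the accretion point above.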
Some thoughts from other people on the subject:

https://blog.chef.io/2014/06/23/immutable-infrastructure-practical-or-not/ 
and
https://medium.com/react-weekly/embracing-immutable-architecture-dc04e3f08543
and
https://www.nginx.com/blog/devops-and-immutable-delivery/
and
https://cloudonaut.io/a-pattern-for-continuously-deployed-immutable-and-stateful-applications-on-aws/
(cloudcraft looks like an interesting tool to try)


What would a completely consistent, immutably principled s/w engineering stack, tools and process look like? Is it possible? Would it work? Would it be useful? I guess the idea is that you only ever add code changes (or architectural, infrastructure, configuration etc changes), and never change anything that was working before. In theory you never break anything that was working. However, you may end up with lots of "versions" (for lack of a better name) of things, so the complexity would have to increase over time (and size etc). Could you (and is there already?) have continuous and automatic detection and failover of all production code/applications to previous working versions? Then you could almost throw "junk" into production, and even if it breaks (briefly) the system will still keep running for most users, resiliently...
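A hedged sketch of that automatic-failover idea, assuming every deploy is a new immutable version and a health check decides which version serves traffic. All names and the health check are invented:

```python
# Sketch: deploys only ever append immutable versions; if the new version
# fails its health check, traffic automatically fails over to the newest
# earlier version that still passes. Nothing is ever modified or deleted.

def deploy(versions, new_version, is_healthy):
    """Append new_version; return the version traffic should serve."""
    versions.append(new_version)
    if is_healthy(new_version):
        return new_version
    # Fail over: newest earlier version that passes the health check.
    for v in reversed(versions[:-1]):
        if is_healthy(v):
            return v
    raise RuntimeError("no healthy version available")

def is_healthy(version):
    return not version.endswith("-broken")   # stand-in for real APM checks

history = ["v1", "v2"]
assert deploy(history, "v3", is_healthy) == "v3"          # good release goes live
assert deploy(history, "v4-broken", is_healthy) == "v3"   # junk fails over
assert history == ["v1", "v2", "v3", "v4-broken"]         # history only grows
```

So "throwing junk into production" only costs the brief window before the health check fires; most users keep getting the last working version.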

I see Elastic Beanstalk has Immutable updates.
And an Amazon presentation.

Oh, and S3 objects can be immutable. Actually I'm not sure exactly what this means in the S3 context, as the documentation does NOT MENTION that they are immutable. But this does:
https://forums.aws.amazon.com/thread.jspa?threadID=46710

The tiny little problem I see with some of this (and have noted in the past with our attempt to integrate performance modelling in DevOps) is that in practice you can't isolate all code changes to small self-contained portions of code. Sometimes you have to change frameworks, workflows, etc. Does this matter? Maybe you can treat everything as immutable in practice? For larger cross-cutting code changes, maybe they can just be rolled out incrementally, as infrastructure and code are subject to continuous incremental immutable updates?


P3S
Back in the dark ages I was involved in a Grid infrastructure evaluation at UCL in the UK in 2004. One of the things we discovered was that the Grid "service oriented architecture" was really designed to manage resources via services (e.g. infrastructure, etc). We wanted to get it to enable deployment of end-user code across available resources, to be consumed on demand, load balanced, etc. It needed to be able to take code (packaged for deployment as services), find resources, deploy code and publish services to each resource and in a global registry, and enable discovery, binding and invocation of the services by end-user applications. We developed a prototype system to do this on top of the OGSA services. However, we also tried a few other technologies available at the time, including something called SmartFrog (from HP, from memory), which appears to have been an early precursor of Chef!

https://xebialabs.com/technology/smartfrog/
It's got a cool tool to compare tools.

E.g.
SmartFrog
vs.
Chef
Oh, they cover exactly the same areas.

There's a periodic table, SmartFrog is 32.

A tool like this for cloud services, even just all the AWS services, would be interesting, to see how they all fit together.

I've embedded it, clickable:





