Cloud Migration (do clouds actually migrate?)

The late New Zealand prime minister Robert Muldoon once quipped that the annual emigration of Kiwis (people, not birds) to Australia raised the average IQ of both countries.

(Just think about it.... Whoops, I'm a Kiwi who migrated to Australia 30+ years ago, oh well).

Cloud migration - do clouds really migrate? It seems like an odd concept; I guess clouds get blown around a lot, maybe even from one country to another?!

Over the last 10 years I've been working in the "cloud space", mainly with government clients (but also a few enterprise clients), looking at questions of performance, scalability, architecture, re-architecting for migrations, migrating to/from various cloud or in-house platforms, etc.

Here are some very preliminary random thoughts on possible strategies and tactics for encouraging migration that is cost-effective and works (hopefully):

Why move to the cloud and what sort? (Public/"private")

Some of the substantial benefits of public cloud platforms such as AWS, Google, and Azure are that they are highly scalable, elastic and agile. I think this is the key reason for moving an enterprise application to the cloud, and also the obvious reason why startups have been one of the main drivers of innovation for public cloud. These characteristics are useful for some workloads and applications in production (e.g. changes in load over time, peak loads over part of a day/week, unpredictable load spikes, and being able to spin up a large set of resources for a short period to solve some problem faster than you otherwise could with resources you just don't have, i.e. on-demand). However, this isn't the whole story.

DevOps/CI/CD

I think that improvement to DevOps/CI/CD is a key factor in using public cloud. You can easily spin up multiple identical environments for different development, testing, etc. purposes and then pull them down just as fast. You only pay for what you use. For example, Blue/Green deployments and A/B testing are obvious candidates (and these ideas can be extended and combined). This makes it easier to rapidly deploy more frequent releases of higher-quality changes.
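To make this concrete, here's a minimal sketch of a Blue/Green traffic shift using Route 53 weighted DNS records. It's a sketch only: the hosted zone ID, record names, and endpoints are hypothetical, and error handling and health checks are omitted:

```typescript
import {
  Route53Client,
  ChangeResourceRecordSetsCommand,
} from "@aws-sdk/client-route-53";

const route53 = new Route53Client({ region: "ap-southeast-2" });

// Send greenPercent of traffic to the "green" environment; the rest
// stays on "blue". Weighted records split DNS-resolved traffic.
async function shiftTraffic(greenPercent: number): Promise<void> {
  const targets = [
    { id: "blue", host: "blue.app.example.internal", weight: 100 - greenPercent },
    { id: "green", host: "green.app.example.internal", weight: greenPercent },
  ];
  await route53.send(
    new ChangeResourceRecordSetsCommand({
      HostedZoneId: "Z123EXAMPLE", // hypothetical hosted zone
      ChangeBatch: {
        Changes: targets.map((t) => ({
          Action: "UPSERT" as const,
          ResourceRecordSet: {
            Name: "app.example.com",
            Type: "CNAME",
            SetIdentifier: t.id,
            Weight: t.weight,
            TTL: 60,
            ResourceRecords: [{ Value: t.host }],
          },
        })),
      },
    })
  );
}

// Canary 10% onto green first; call again with 100 to cut over (or 0 to roll back).
shiftTraffic(10).catch(console.error);
```

Shifting a small weight onto green first gives you a canary; the same call completes the cutover or rolls it back, which is what makes the Blue/Green idea so cheap to operate in the cloud.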


Automation

Automation of DevOps in the cloud is a cost. It will take time, money, and new skill sets to automate deployment and IaaS management, even in a cloud environment. But once done, everything is repeatable and less error prone. This is one of the key advantages over traditional in-house hosting. The other "downside", however, is that in the AWS (e.g.) world just about everything is "self-service", and there is a potential mixing of traditional roles. To pass the AWS solution architect certification (e.g.) you need a mixture of skills (experience-based, theoretical, and AWS-specific) around s/w architecture (tick), technologies (tick), network administration (oh, nope), systems administration (nope, long time ago), and security administration (nope). In large enterprises, and probably government, there has traditionally been a more specialised approach to these skills. A s/w architect is unlikely to have been involved in hands-on provisioning of server instances. How best to develop multi-skilled DevOps cloud teams in the future is an interesting question.
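For example, once an environment is captured as a template, spinning up (and tearing down) an identical copy becomes a one-liner. A minimal sketch using the AWS SDK and CloudFormation; the template file, parameter name, and stack names are hypothetical:

```typescript
import { readFileSync } from "node:fs";
import {
  CloudFormationClient,
  CreateStackCommand,
  DeleteStackCommand,
  waitUntilStackCreateComplete,
} from "@aws-sdk/client-cloudformation";

const cfn = new CloudFormationClient({ region: "ap-southeast-2" });
// environment.yaml is a hypothetical template describing the whole environment.
const template = readFileSync("environment.yaml", "utf8");

// Spin up a complete, identical environment for any purpose (dev, test, perf...).
async function createEnvironment(name: string): Promise<void> {
  await cfn.send(
    new CreateStackCommand({
      StackName: name,
      TemplateBody: template,
      Parameters: [{ ParameterKey: "EnvName", ParameterValue: name }],
    })
  );
  await waitUntilStackCreateComplete(
    { client: cfn, maxWaitTime: 1800 },
    { StackName: name }
  );
}

// ...and pull it down just as fast, so you stop paying for it.
async function deleteEnvironment(name: string): Promise<void> {
  await cfn.send(new DeleteStackCommand({ StackName: name }));
}

createEnvironment("perf-test-42").catch(console.error);
```

The point is the repeatability: the same template produces the same environment every time, which is exactly what's hard to guarantee with hand-provisioned in-house servers.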

Serverless architectures

I think this is where the future of cloud is. Serverless architectures allow for massive scale, and you don't even need to think about IaaS and sysadmin questions. However, porting existing enterprise applications to serverless is non-trivial. It's best to start out writing some new applications (or enhancements to existing apps) on serverless to get some real experience. Tiger teams?
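To give a feel for how little scaffolding is involved, here is a complete (if trivial) AWS Lambda function in TypeScript; the event shape is a hypothetical example:

```typescript
// handler.ts - a complete serverless "application": no instances,
// no OS, no capacity planning.
interface GreetEvent {
  name?: string;
}

export const handler = async (event: GreetEvent) => {
  // One invocation or ten thousand concurrent ones run this same code;
  // scaling is entirely the platform's problem.
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${event.name ?? "world"}` }),
  };
};
```

Everything else (provisioning, scaling, patching) belongs to the platform, which is also why porting a stateful n-tier enterprise application into this model is the hard part.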

Cost

Everyone knows that it's dirt cheap to run stuff on AWS, right? Well no, it depends. Numerous startups have had the experience of starting on AWS, moving off to other options (e.g. private cloud), then eventually ending up with in-house hosting as they get bigger. There are also other restrictions as size and complexity on AWS grow, including the cost of APM monitoring (many companies have had to change APM products as vendor licensing models start costing too much in the cloud). Typically cloud is cheaper for startups with small loads, for spiky unpredictable loads, and for on-demand large problems (e.g. spinning up a cluster for scientific computing for a few hours). However, for large constant workloads cloud may actually be more expensive. The other issue is "bill shock". Some companies have reported large (e.g. 4x) changes in cost from one month to another as a result of changes in their application code or workload. This is hard to predict and harder to prevent.

How do you know, in advance and ongoing, what your costs will be? The costing structure for AWS (e.g.) is complex and changes without notice. It is essentially a critical architectural feature. Before migrating an application I suggest monitoring with a good quality APM (e.g. Dynatrace), and modelling future workloads and changes in resource usage and cost on the target cloud platforms. Don't forget to take network charges, storage charges, monitoring, etc. into account. Look at the different options the cloud provider has for charging and paying, as these can all impact price. Then try out a PoC for part of the system, see whether the predicted price corresponds to reality for a few months, and iterate.
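A back-of-envelope cost model is a good place to start before the PoC. The sketch below compares a spiky workload on on-demand cloud against peak-provisioned in-house hosting; all prices and workload numbers are made-up placeholders, not real AWS rates:

```typescript
// Hypothetical unit prices - substitute real rates from your provider's calculator.
const PRICE = {
  instanceHour: 0.1, // $/instance-hour
  storageGbMonth: 0.025, // $/GB-month
  egressGb: 0.09, // $/GB transferred out
};
const HOURS_PER_MONTH = 730;

interface Workload {
  baselineInstances: number; // always-on capacity
  peakInstances: number; // capacity needed at peak
  peakHoursPerMonth: number; // total time spent at peak
  storageGb: number;
  egressGbPerMonth: number;
}

// On-demand cloud: pay for baseline all month, peak capacity only while needed.
function cloudMonthlyCost(w: Workload): number {
  const compute =
    w.baselineInstances * HOURS_PER_MONTH * PRICE.instanceHour +
    (w.peakInstances - w.baselineInstances) * w.peakHoursPerMonth * PRICE.instanceHour;
  return (
    compute + w.storageGb * PRICE.storageGbMonth + w.egressGbPerMonth * PRICE.egressGb
  );
}

// In-house: you must provision for peak 24x7 (amortised $/instance-hour assumed).
function inHouseMonthlyCost(w: Workload, amortisedHour = 0.06): number {
  return w.peakInstances * HOURS_PER_MONTH * amortisedHour;
}

const spiky: Workload = {
  baselineInstances: 2, peakInstances: 40, peakHoursPerMonth: 30,
  storageGb: 500, egressGbPerMonth: 200,
};
console.log("cloud:", cloudMonthlyCost(spiky).toFixed(0),
            "in-house:", inHouseMonthlyCost(spiky).toFixed(0));
```

With these (hypothetical) numbers the spiky workload is several times cheaper on-demand; set baseline equal to peak and the same arithmetic often tips the other way, which matches the observation above about large constant workloads.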

Public vs Private

This is another conundrum. Initially I thought "private" cloud was a contradiction in terms, as public cloud has all the possible advantages, including economy of scale, elasticity, price (downwards), unlimited size of everything ("infinite"), automation, robust distributed platforms, security, etc. Actually, I still think this. Private clouds have some advantages, including "higher" security levels (e.g. for classified data) - although watch out that your applications are themselves secured correctly to run on the public internet, even on private cloud - local data storage (so you don't have your data going offshore, interstate, etc.), and local support. However, I think this will cost more in the long run and you will lose agility and scale. How elastic are private clouds? How robust and available? Can you run things across multiple "regions"? Can you spin up a few hundred instances for disaster recovery (if the whole local data centre has gone down), or for large occasional jobs on demand? Have you continually tested that this works in practice? There are also advantages (depending on where your customers are) in being able to run applications across multiple geographical locations (e.g. different continents). How does this differ from just running a "cloud" in your own data centre anyway?

Private cloud for IaaS only could make sense, particularly if your workloads are constant or change only slowly. However, I just don't think private clouds will be able to keep up with the rate of innovation of PaaS, serverless, and SaaS offerings. There is also a risk that the bigger public cloud vendors could just decide to package up a "classified-level secured cloud in a shipping container", drop it off locally, and start competing with private cloud vendors (if they thought the economics stacked up; I suspect they don't).

What could change all this? Maybe a higher level of security protocol will be invented that provides classified and higher-level security over the public internet, allowing public clouds to offer more secure services? Possibly combined with changes in laws around data location, and/or the ability to guarantee where data is located and where it's been for public clouds? Possibly innovations in hardware (e.g. compute speeds and data storage densities - quantum, optical, or biological computers?) making it suddenly cheaper to run an almost infinitely powerful datacentre on your desktop (or on your phone, or in your head!?). Maybe some type of peer-to-peer cloud computing may take off (e.g. similar to P2P Bitcoin blockchains; there is now a blockchain-style distributed database - what if computing, etc., becomes a similar commodity?):
https://www.bigchaindb.com/whitepaper/bigchaindb-whitepaper.pdf
http://www.zdnet.com/article/blockchains-in-the-database-world-what-for-and-how/

A few years ago there was even a cloud on a single chip.

Cognitive dissonance

What's this? This is when you get a headache due to rapid changes in architectural principles and technologies, and find that what you thought was "obvious" 10 years ago has all changed. I'm particularly thinking about the architectural and design differences between traditional enterprise technologies (e.g. stateful n-tiered thin-client applications), which are still very common in the real world and in government systems, and the new world of extreme internet-scale stateless RESTful architectures. In practice this requires quite a different mindset to architect and implement, and it is difficult to painlessly bridge the gap. Where do you get this experience from? How do you migrate and re-architect from traditional enterprise architectures to "internet" cloud platforms (other than lift and shift)?

Some thoughts:

Why you still need state: https://petabridge.com/blog/stateful-web-applications/
An older white paper on migration which addresses state problems.
A more recent white paper on migration (but it doesn't mention the state problem).
Microservices, Docker and state.

Our SaaS performance modelling tool is probably a good example of a sophisticated HTML5-based application with complex state management. It has evolved from a standalone Java application (10 years ago) through various architectural and technology changes. The early SaaS version (now legacy!) was Dojo and custom D3 charts. However, over the last few years we found that this had slow and unreliable synchronisation between the browser and server state. The browser JavaScript is complex and may need to load thousands of model components from the server, zoom, move around, and edit them, all the while keeping changes in sync with the server. We have now re-architected it to use React/Flux/MobX state management, with React and D3 charts.

This appears to work better (faster and more reliable), but it was a significant amount of work to upgrade (although both frameworks seem happy to coexist at the same time). This looks like "fat clients" to me - again :-) (they seem to come and go). You need to load lots of stuff from the server (constantly), lots of the processing is done on the browser computer, the state is managed in the browser, etc. This puts demands on the client resources (e.g. interactive, high-frame-rate (60 FPS) applications are sometimes tricky to achieve; there are a lot of moving parts, content, and JavaScript to be loaded from the server - loading, scripting, rendering, painting, other - to achieve a smooth interactive experience). And this may not work as well on tablets or mobile phones.
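For a flavour of what "state managed in the browser" looks like with MobX, here's a minimal sketch of an observable client-side store that batches dirty edits back to the server. The REST endpoints and component shape are hypothetical illustrations, not our actual tool's API:

```typescript
import { makeAutoObservable, runInAction } from "mobx";

// Hypothetical shape of a model component.
interface ModelComponent {
  id: string;
  name: string;
  dirty?: boolean; // edited locally, not yet pushed to the server
}

class ModelStore {
  components = new Map<string, ModelComponent>();

  constructor(private baseUrl: string) {
    makeAutoObservable(this); // every field becomes observable state
  }

  // Bulk-load potentially thousands of components once, instead of
  // round-tripping to the server on every interaction.
  async load() {
    const res = await fetch(`${this.baseUrl}/components`); // hypothetical endpoint
    const items: ModelComponent[] = await res.json();
    runInAction(() => items.forEach((c) => this.components.set(c.id, c)));
  }

  // Edits apply locally and are immediately visible to observing React
  // components; the server is updated later, in batches.
  rename(id: string, name: string) {
    const c = this.components.get(id);
    if (c) {
      c.name = name;
      c.dirty = true;
    }
  }

  // Push only the dirty components back to the server.
  async sync() {
    const dirty = [...this.components.values()].filter((c) => c.dirty);
    if (dirty.length === 0) return;
    await fetch(`${this.baseUrl}/components`, {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(dirty),
    });
    runInAction(() => dirty.forEach((c) => (c.dirty = false)));
  }
}
```

The store is the single source of truth in the browser: React components observing it re-render automatically on changes, and only dirty components go back over the wire, rather than a round trip per edit.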

Overall approach

From past experience with the CSIRO cross-divisional software engineering initiative, one approach that could work for assisting government projects to migrate to cloud is to find, customise, package, and transfer skills from existing internal "pools of excellence":

First, find some projects/teams/applications that are actively writing applications for the cloud or have already migrated an application to the cloud - i.e. find the pools of excellence. Have a look at what they did, document it, find out what worked and why, and what they'd do differently next time, etc. Get them to act as advocates (e.g. run seminars, training, etc. with them and other projects). Find projects/applications that are similar, tailor the approach to the next projects, and repeat. Keep documenting and communicating experiences, and develop best practices which can be shared more widely.

Government organisations that are more "scientific", such as CSIRO, BoM, and Geoscience Australia, are already using public cloud, so they would be a good starting point for finding examples.

I also complemented this by bringing in appropriate (and sometimes tailored) best practices from outside the organisation - particularly around process, tools, organisational and training ideas, and occasional outside expertise. I.e. I didn't just rely on myself, either in terms of time or of what I knew/didn't know.

Provide appropriate training, hands-on experience, and maybe something like hackathons/meetups? Run seminars with a mixture of famous cloud people/topics and more experience-based, practical case studies and best practices. Get cross-fertilisation going between government, consulting, academic, professional, and cloud vendor experts. Are there unique problems with migrating or writing government applications on public cloud? If so, some of the academic, research, and cloud vendor people are probably willing to help. One odd thing I noticed at the recent AWS Summit in Sydney was the predominance of developers, sysadmins, and "hackers" present (I guess this is because no one except me was wearing anything approaching a suit and tie on the 1st day - an error I quickly corrected by the 2nd day). Where were all the executives, software architects, project managers, etc.?

Hands-on technology evaluation could be part of this or a supplemental activity. E.g. cloud bake-offs: hands-on evaluation of specific cloud technologies for ease of learning and suitability for government applications (including price and limitations), i.e. rapid prototyping and benchmarking of the main features with realistic (but possibly simplified) synthetic requirements and data, possibly designed to evaluate ease of development, portability, and cloud lock-in issues (e.g. are there many cross-cloud-platform evaluation frameworks or benchmarking applications?). Plus more general evaluation of how to do cloud architecting (how to select and evaluate multiple services to solve specific business and non-functional problems, etc.). A minimal bake-off harness is sketched below.
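As a starting point for a bake-off, even a tiny harness that measures request latency percentiles against each candidate platform's deployment of the same prototype can be informative. The endpoints below are hypothetical placeholders, and note that a sequential loop like this measures latency, not throughput or scalability:

```typescript
// Time n sequential requests and report p50/p95 latency in milliseconds.
async function bench(url: string, n = 50): Promise<{ p50: number; p95: number }> {
  const samples: number[] = [];
  for (let i = 0; i < n; i++) {
    const start = performance.now();
    await fetch(url); // the same prototype deployed on each candidate cloud
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  const pct = (p: number) =>
    samples[Math.min(samples.length - 1, Math.floor(p * samples.length))];
  return { p50: pct(0.5), p95: pct(0.95) };
}

// Hypothetical endpoints for the same prototype on two candidate platforms.
const candidates = [
  "https://prototype.cloud-a.example/api/ping",
  "https://prototype.cloud-b.example/api/ping",
];

async function main() {
  for (const url of candidates) {
    console.log(url, await bench(url));
  }
}
main().catch(console.error);
```

Running the same harness against each candidate's deployment of the same prototype gives a crude but comparable number; add cost per run and you have the beginnings of a bake-off scorecard.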


PS
One obvious problem with the notion of "migration to cloud" is that migration is typically bi-directional and seasonal. It's not a one-off activity (unless you are Eaten by a Bear, see cartoon). So always think about the next migration (or what happens when you want to migrate back, or on to somewhere even more pleasant - better climate, more resources, etc.).


PPS
Coincidentally, a report on data center vs cloud was mentioned on ZDNet today. They claim the "default" position has switched from data center to cloud (i.e. you now have to have a good reason, or even a few, to stay with a data center). The main arguments are similar to my observations: cost isn't the main factor anymore, agility is (e.g. cloud-first applications, DevOps, microservices, etc.).

http://www.zdnet.com/topic/the-cloud-v-data-center-decision/
Detailed report.
