AWS Sydney Summit 2017 slides

Just found out that the AWS Sydney Summit 2017 slides are now available online at: https://aws.amazon.com/summits/sydney/on-demand-17

AWS CTO Werner Vogels gave a keynote which was, as usual, very interesting.

He reported AWS's current rate of innovation, measured in new features and services (not just services, note) per year, as a "good thing": you will increasingly be able to find exactly what you want in the AWS stack. They really do listen to customers; they are often aware of the need for many new services or features in advance, but only decide if, how, and when to introduce them based on real customer need and demand. Cool. (Whoops, I chopped the dates off the bottom of the graph; they run from 2008 to 2016.)



Now, if you look at these numbers, something is obvious: they are getting larger, faster. What sort of growth rate is this? Something scary. If you do the maths and assume only linear growth (which this isn't), then AWS will have around 5,000 features and services in 10 years (2026), while a polynomial function with a good fit predicts 40,000 features and services by 2026. (Note that the values on the graph above are new features and services per year, not totals; on my graph below I use totals.)
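The extrapolation above can be sketched numerically. Here is a minimal pure-Python version that fits linear and quadratic trends to the cumulative totals and extrapolates to 2026. The per-year release counts are illustrative placeholders of the kind shown on the keynote slide (the exact figures are not in this post), so treat the outputs as a sketch of the reasoning, not real projections:

```python
def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via the normal equations,
    solved with naive Gaussian elimination (fine for tiny systems).
    Returns coefficients where coeffs[k] multiplies x**k."""
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))  # partial pivot
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, n))) / A[i][i]
    return coeffs

def evaluate(coeffs, x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

# Years as offsets from 2008 (so 2026 is x = 18).
xs = list(range(9))
# Illustrative new-features-and-services counts per year, 2008-2016:
per_year = [24, 48, 61, 82, 159, 280, 516, 722, 1017]

# Convert per-year counts into cumulative totals.
totals = []
running = 0
for v in per_year:
    running += v
    totals.append(running)

linear = polyfit(xs, totals, 1)
quadratic = polyfit(xs, totals, 2)
print("2026 total, linear fit:   ", round(evaluate(linear, 18)))
print("2026 total, quadratic fit:", round(evaluate(quadratic, 18)))
```

With these placeholder numbers the linear fit happens to land in the same ballpark as the ~5,000 figure above, while higher-degree fits extrapolate to much larger totals; the exact numbers depend entirely on the data and the degree chosen, which is the point about how sensitive long-range extrapolation is.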


Now, maybe it's just me, but I find this cause for some concern, even though it's not exponential growth (luckily, as unbounded exponential growth is not physically sustainable, e.g. the grains-of-rice-on-a-chessboard fable). The problem is one of complexity in general. Given that there are now approximately 3,000 AWS features and services, this is already a reasonably large number of things to learn, use, and combine. In 10 years' time it will be impossible for any individual to understand, which implies a fairly serious need for AWS to manage complexity somehow. I suggest something along the lines of abstraction, tool support, automation, etc. For example, architectural-level tool support which can dynamically discover and combine features and services correctly, is aware of prices and limits, can automatically check the architecture for problems (e.g. only allow services to be combined if they are interoperable), and can build models and run simulations, including workloads, to predict price and whether limits will be an issue (with sensitivity analysis to find bottlenecks, areas for improvement, and alternatives to compare). To be usable, this would need to integrate automatically with DevOps so it stays up to date all the time (architecture models built once and not maintained get out of sync with reality almost immediately), and be connected to architectural service-dependency discovery and monitoring, as well as end-to-end/top-to-bottom run-time monitoring.
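As a toy illustration of the kind of architectural checking I mean, here is a sketch that only allows services to be combined when an interoperability table says they can be. The service pairs and rules here are entirely made up for illustration; a real tool would populate such a table automatically via service discovery and keep it current through the DevOps pipeline:

```python
# Hypothetical interoperability table: which service pairs a tool
# has verified can be connected directly (illustrative only).
INTEROPERABLE = {
    ("Lambda", "S3"),
    ("Lambda", "DynamoDB"),
    ("EC2", "S3"),
}

def compatible(a: str, b: str) -> bool:
    """Interoperability is symmetric, so check both orderings."""
    return (a, b) in INTEROPERABLE or (b, a) in INTEROPERABLE

def check_architecture(connections):
    """Return the service pairs in a proposed architecture that the
    table does not allow, so they can be flagged before deployment."""
    return [(a, b) for a, b in connections if not compatible(a, b)]

# A proposed architecture with one allowed and one disallowed connection:
problems = check_architecture([("Lambda", "S3"), ("Lambda", "EC2")])
print(problems)  # -> [('Lambda', 'EC2')]
```

This is of course the trivial core of the idea; the interesting (and hard) parts are keeping the table accurate, and layering on the pricing, limits, and simulation checks described above.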

Note that this is consistent with my recent analysis of AWS service complexity, although I probably underestimated things there as I didn't include features.

Of course, it's possible AWS will retire some services as they age, and there just may not be this many services and features that anyone can think up (so the rate of growth may slow and eventually plateau). But there will still be a lot of services by anyone's counting.
