The Accenture Cloud Platform Serverless Journey

Last week at AWS re:Invent, my team presented the architecture of the next generation of the Accenture Cloud Platform (ACP), which uses a serverless approach.

We started this journey in November of last year to solve a set of business challenges common to any product group in a large enterprise facing a fast-growing market disruption like cloud.

  1. Speed. Given how quickly cloud services are evolving, the challenge we faced was how to add support and management for these services at speed. Our traditional approach meant falling further behind every day; AWS claims to add three major features a day.
  2. Budget. While cloud services are growing at very high rates, our engineering budget will not. We needed a way to disconnect cloud growth from cost growth. The only way to do that is to create a true platform that allows other members of our ecosystem to serve themselves and others. Think of it as Tom Sawyer getting the other kids to paint the fence, but as a win-win.
  3. Commercial alignment to market needs. It’s no secret that customers want everything as a service and to pay on a unit-consumption basis. The challenge is that the number of “units” of measure is growing rapidly. Serverless allows us to dial the unit of consumption all the way down to a specific customer function, like “update CMDB” or “put a server to sleep” (see the sketch below). And it’s done at prices that are fractions of a penny.
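To make that concrete, here’s a minimal sketch (illustrative only, not our production code) of what a function like “put a server to sleep” can look like as a Python Lambda handler; the event payload is an assumption on my part:

```python
# A minimal, illustrative sketch (not ACP's actual code) of a
# function-sized unit of consumption: "put a server to sleep"
# as an AWS Lambda handler stopping an EC2 instance.
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # Each invocation is billed on its own, at fractions of a penny.
    instance_id = event["instance_id"]  # hypothetical payload field
    ec2.stop_instances(InstanceIds=[instance_id])
    return {"instance_id": instance_id, "action": "sleep", "status": "stopping"}
```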

As I said, to address the speed and budget challenges we created an architecture that leaves room for third parties to add features and interfaces without needing our team to be involved. We defined three speeds of delivery and created rings. In ring zero we deploy code multiple times per day; it’s where the core multi-sided, multi-tenant platform lives.

In ring one, we bring features out on a monthly basis. A different engineering team creates the standard features and functions users of ACP “see”.

Ring two is where client projects live. ACP is a platform that can be extended, integrated and tailored for specific customer uses. Beyond generic cloud management, it’s always necessary to stitch together both new and existing operational models. Roles, access, systems, services and so on all have to be integrated into one control plane to use cloud effectively at scale. Most one-app startups stitch their own; enterprises have literally thousands of apps, so we offer a ready-to-go platform, delivered as a service.

These projects have their own time frames, and it’s here that our ACP serverless architecture delivers the greatest value. If a client needs a new service X managed, a simple guide and a set of APIs enable our consultants to extend the platform in a secure, stable and supportable manner.

To be clear, ACP is a large platform. Some components are traditional three-tier apps and others are commercial products. Our journey is about the components we build to make ACP a platform.

I suggest you watch the video. My colleague Tom does a brilliant job.

Accenture Cloud Platform Serverless Journey

Where Did Your Machine Learning Algorithm Go to School?

The Russian Machine

One of the hottest areas today is Machine Learning (ML). It will do magic: cure cancer (literally) and free humanity from drudgery. Which is so naive. Humans have an infinite capacity to make a mess of things. One day the machines will learn that too.

Recently I’ve been shown some product demos claiming to use ML to solve aspects of rather intractable problems, problems usually worked on by skilled architects or specialized experts in their fields. In these two cases, the (different) presenters claimed that through ML one product would define and architect a system, and the other would improve a business process.

I was intrigued and incredulous. I could see some automation, but solving the whole thing?

I did not believe it. But when something is hot and new, we often lack the language, confidence and assessment framework to refute or judge the claims being made. I politely asked a few questions but I felt a bit uncomfortable.  I kind of stewed for a few days.

Finally, I arrived at some questions and a way of thinking that will help me have a more detailed dialogue with people making claims based on ML. Here it is:

Schooling. Where did your ML algorithm go to school? Who were the teachers?

Curriculum. If the machine is to learn, it needs data. Lots of data. How much data was used to teach the machine? What was the quality and provenance of the data? Is it sufficient data for learning to happen? What biases are built into the data sets?

Graduation. What exams did the ML algorithms pass? What grades did they get?

Work Experience. Where and when did they do their internships? What projects and results have been produced?

This framework may seem humorous (I hope) but it’s also useful.

The ML algorithms are only as good as the teachers they have. And for now, all the teachers are human.

ML requires a sufficiently large amount of data on which to learn, and this data is actually hard to get! In the two cases above, there are no data sets that can be bought or used, so it struck me that the ML algorithm would deliver trivial advice, heavily biased toward the one or two experiences encoded. Not enough data to learn anything. And the definition of “sufficiently large” will vary by problem.

On graduation, exams are the equivalent of quality assurance in software development. How do we know someone knows something? We test them.
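To make the analogy concrete, here’s a toy “exam” in the scikit-learn style: hold out data the model never saw in school and grade it on that. The dataset and model are stand-ins of my choosing, not anyone’s actual product:

```python
# A toy "graduation exam" for an ML model: grade it on held-out data
# it never saw during training. Dataset and model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
# The curriculum: 80% of the data goes to schooling, 20% to the exam.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)                            # schooling
grade = accuracy_score(y_test, model.predict(X_test))  # the exam
print(f"Exam grade: {grade:.1%}")
```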

On work experience, same thing. It’s nice that you studied and passed your exam, but work experience is how we know someone actually ‘knows’ how to do something.

In summary, these are the questions I will be asking about ML going forward.

Remember, my opinions only represent my opinions.

Cloud Security Concerns: Welcome to AdventureLand

Now Leaving Adventureland

On a recent visit with senior executive customers in Europe, I heard concerns about cloud security similar to those American CIOs have about cloud.

But they mean something different. For European CIOs, in addition to the standard concerns American CIOs share, there’s the “other” security concern: spying by American intelligence agencies.

This fear is real and grounded. The Snowden revelations of spying with the seeming cooperation of certain internet services, the spying on allied politicians, plus the recent revelations about Yahoo’s indiscriminate handling of email do not help. In fact, the FBI, NSA and CIA have done more to support the datacenter business of national telcos than anything those companies could do to compete with the hyperscale clouds.

How do I convince them that in fact cloud is safer than a private data center? As an American?

I don’t.

I tell them that they should assume the NSA, CIA, FBI and the rest are trying to hack them, as are the Russians, Chinese, North Koreans, and every major intelligence agency in the world. I tell them we have gone from a world of individual hackers, to criminal networks, to now nation states.

I ask them how many security people they have and what their level of experience is. The answer is usually not enough and not enough: not enough skills, budget or focus.

How do their capabilities compare to those of the major cloud providers in numbers and expertise? Expertise honed over 20 years of fighting hackers, for whom Google, Amazon and Microsoft would be… the apotheosis of hacking.

The fact is that security in the cloud is a shared responsibility, but one where the underlying service provider brings skills, experience and scale that most enterprises don’t have in house.

This is why cloud is more secure than a private data center: very few enterprises have the skills or budget to protect against a superpower trying to hack their datacenter.


Startup Choices Rapidly Becoming Enterprise Choices


As I wrote in “The Three Freedoms of Cloud”, cloud is all about agility and speed. A couple of stories both illustrate and enlarge this point through the adoption of serverless architectures.

A Startup Volunteers to Be the Canary in the Code Mine

The first story comes from a startup called Siftr (You know they are a startup because they could only afford one vowel).

The post is on the value of serverless as experienced by a small startup. They chose Google App Engine for their application.  Here’s the relevant quote.

An enterprise, with practically infinite resources at disposal, can afford to be selective about every single component that goes into their tech stack and can recreate the entire wheel. However, for startups, time is gold! Their choice of technology should help them maximize their number of ‘build-&-measure’ iterations and yet should never hinder the product design.

Source: Why choose Google AppEngine for starting-up — Technology @ Siftr — Medium

As a past startup guy, I completely lived this reality.

When I started newScale, we faced a choice of what to use to develop our product. We had only three engineers, a short runway of cash, and a lack of clarity about our market. Job one had to be product/market fit.

At the time, Java was not really an economic or performant choice ($300K for an app server was the quote from WebLogic; I didn’t see the logic of buying). C++ (we came from that world) was too complicated, slow and lacked portability, and Perl and languages of that type were unknown to us.

So we chose ColdFusion.

Over the years I got so much grief for that choice, and five years later we re-wrote the app (at great cost) in Java. So was it the wrong choice? NO! That choice allowed us to build a rich, enterprise-level product and acquire major enterprise clients, which let us raise $26M over two rounds.

It was thanks to that initial success that we could afford to re-write it.

(New) Reality Arrives to the Enterprise: Fast Eats Big

Market advantages have become transient. The time in which an enterprise could build an effective wall against competitors is fast fading. Enterprises have responded by beginning to adopt the tooling, practices, and culture of startups to remain competitive.

My friend James Staten, Chief Strategist for Cloud at Microsoft Azure, has some useful research to share regarding the value of moving to PaaS.

Sure, moving your applications to the cloud and adopting more agile DevOps methods will save you money and make your complex application deployments more efficient but if you want the really big gains, focus the majority of your developers back on coding. According to Forrester Consulting, the gains from doing so are massive.

PaaS value chart

Yes, re-hosting or even building cloud-native apps in VMs and containers will give you big gains in agility and cost savings but asking your developers to manage configurations, update the OS and middleware and pick the right virtual resources to support their apps (let alone manage the security of their deployments) is a distraction that cuts into their productivity.

And he shares some pretty dramatic numbers about the economic impact.

How much, you ask? In its latest Total Economic Impact Study, Forrester Consulting interviewed a number of current customers of Azure’s PaaS services and concluded that migrating to PaaS from IaaS resulted in a 466% return on investment. For customers migrating from on-premises environments to PaaS, the return on investment can be even greater. Time to market also improved by as much as fifty percent, because of the efficiency and speed of deploying applications with PaaS services.

James works for Azure, but I believe these types of results apply to serverless architectures as a whole. And to be clear, these savings do require the adoption of new operating models such as DevOps. Sticking to waterfall and ITIL won’t get you there.

Also Amazon Web Services

And not to leave Amazon Web Services out: in our Accenture Cloud Platform, we are using Lambda for certain components. Our entire system is composed of COTS (we don’t own the code), homegrown components written in Scala, web services from cloud vendors, and now new components written in Python using Lambda. In other words, not atypical of what you’ll see in a large enterprise that has history and heritage in its infrastructure. Heritage sounds better than legacy.
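For flavor, here’s the bare shape such a component takes; a hedged sketch with a made-up payload, not one of our actual components:

```python
# The bare shape of a Python Lambda component: a small, stateless
# handler triggered by an event. The payload fields are made up.
import json

def handler(event, context):
    # The event carries the request; the platform handles servers,
    # scaling and patching for us.
    request_id = event.get("request_id", "unknown")  # hypothetical field
    # ... do one narrow job: call a cloud API, update a record ...
    return {
        "statusCode": 200,
        "body": json.dumps({"request_id": request_id, "status": "processed"}),
    }
```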

The value we get is similar to what others are finding: savings in time to market, flexibility and agility. And improved costs and security in our case. Yes, you have to live within the constraints of the underlying PaaS, but that discipline keeps us focused on building value.

But What if It’s the Wrong Choice?

There’s a fear that using these tools could be the wrong choice. Well, it could be, and eventually the product may have to be re-platformed, but you’ll either know quickly enough to make a change, or it will be a long way down the road, after the choice has already paid for itself.

But if it helped you achieve product/market fit, improve time to market, validate a market thesis and/or deliver competitive advantage, the fair answer should be: WHO CARES? The business objective has been achieved.

Also, the cost of re-writes and re-platforming is not what it used to be. The cost of constructing new apps has fallen to an all-time low thanks to the availability of tools, open source, and web services. In many cases, it will be faster to re-write it than to fix it.

Time to be Rigorous. How to Ruin a Cloud Implementation: Leave Self-Service for Last

I’m republishing this from my old blog. I wrote this in 2011, yet I was having this conversation today.  I realized the move to as-a-Service still requires a methodology. 

Self-service should not be an afterthought in cloud projects; it’s the beginning of the journey. Self-service drives the standardization of offerings and reduces the labor costs that arise from designing, specifying, ordering, procuring, and configuring computing, storage and network resources on a custom basis.

This standardization and automation also applies to application components and to security and network services such as LDAP, DNS, load balancers, etc. I say “drives” because the moment an organization decides to provide infrastructure on demand, three questions arise that are at the heart of the beginning of the cloud journey:

  1. Who are my customers?
  2. What are my offerings?
  3. What are my existing delivery processes?

Like any other business, the questions of whom we are serving, what they want to buy, and how we deliver lead to many other questions. These questions are at the beginning of the journey to the cloud operating model.

But, haven’t we answered these questions before when we built our ITSM catalog?

Answer: NOT with the rigor required by a self-service and automation program.

Once we decide to offer a cloud service, these questions need to be answered with BRUTAL and RIGOROUS specificity so the answers can be codified into self-service choices. Until the decision to deliver cloud services, these “customer” catalogs are usually a vague, warmed-over recapitulation of some well-intentioned static “service catalog.”

In my experience, very little of that prior work is usable when confronted with the needs of the cloud project. I’m loath to say it’s useless; sometimes the team that built the prior catalog is actually in the best position to understand the shortcomings of that work and can be a good contributor to the cloud project.

Getting back on point: once the decision is made to offer infrastructure as a service to the user community, the cloud team faces three tasks.

First, define the customer as a consumer of a set of offerings that address some activity the customer needs to carry out, for example, “test a patch to the scheduling system.” The team needs to figure out the proper metaphors, abstractions, rules for configuration, rules for consumption, and the choices allowed to the customer. And all of this needs to be in the language and domain of competence of the customer.
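As an illustration, here’s a hedged sketch of what codifying such an offering might look like; every name and choice in it is hypothetical:

```python
# A hedged sketch of codifying an offering in the customer's language:
# a named activity with a small, fixed set of allowed choices.
from dataclasses import dataclass, field

@dataclass
class Offering:
    name: str                  # phrased in the customer's language
    description: str
    choices: dict = field(default_factory=dict)  # the ONLY knobs exposed

patch_test_env = Offering(
    name="Test a patch to the scheduling system",
    description="Short-lived environment cloned from the scheduling stack",
    choices={
        "size": ["small", "medium"],  # no raw CPU/RAM/VLAN knobs
        "lifetime_days": [7, 14],     # reclamation is designed in up front
    },
)
print(patch_test_env.name, patch_test_env.choices)
```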

This is hard for us technical people. We like lots of choices, and we understand a lot of things our users won’t. The exercise of understanding the customer requires a lot of trial and error.

Sometimes I see “cloud interfaces” that are really admin consoles for configuring compute, network and storage resources. These UIs are targeted at unicorn users: developers, the real target of cloud, are usually not network experts, and, other than disk space, they don’t know much about storage. This customer is a unicorn; she doesn’t exist!

Second, the cloud team needs to break down the end-to-end service delivery process into its component processes, specifying all the hand-offs and activities, the tools and systems used, and the data required both to execute activities and to make decisions.

This is where standardized offerings become the difference between success and abject failure as they simplify decisions and data gathering.

If every car is a Model T, then manufacturing, supply chain, capacity management, procurement and planning are all much easier. If you have a large variety of options, it’s much harder. Start small. Amazon Web Services’ first compute offering was a small Linux instance. That’s it. “m1.small” is the Model T of the cloud era.

Third, a good gap analysis and coverage plan is needed. What we tend to find at this stage of cloud design is a gospel of gaps: rules in people’s heads, governance rules that get in the way (hello, CAB!), and existing systems built for responding in days and weeks rather than minutes and seconds.

There are also missing systems. Sources of record are unstructured (a Word document or a wiki) rather than a database or a structured data model. The few tools that do exist lack APIs, were built for a single tenant, do not enforce role-based access control, or were not designed for consumer use.

Good process design needs to inventory these system and data gaps.

For example, take the task “assign IP address.” Today, Jason the network admin gets an e-mail, opens his spreadsheet, and sends an address to someone who then assigns it to the virtual machine. Now we need to enable the user to assign an IP address on a self-service basis. So: no Jason, no spreadsheet, no manual steps. But we do need to say yes to an IP address manager, a portlet, a lifecycle manager, and a consumption and reclaim process.
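Here’s a toy sketch of what replaces Jason and his spreadsheet; everything in it is hypothetical and in-memory, but it shows the allocate/reclaim lifecycle a self-service portal would call through an API:

```python
# A toy, in-memory IP address manager with allocate and reclaim
# operations. A real one sits behind an API and a data store.
import ipaddress

class IPAddressManager:
    def __init__(self, cidr: str):
        self.pool = list(ipaddress.ip_network(cidr).hosts())
        self.assigned = {}  # ip (str) -> owner, e.g. a VM id

    def allocate(self, owner: str) -> str:
        if not self.pool:
            raise RuntimeError("IP pool exhausted")
        ip = str(self.pool.pop(0))
        self.assigned[ip] = owner
        return ip

    def reclaim(self, ip: str) -> None:
        # Reclamation is part of the lifecycle from day one.
        if self.assigned.pop(ip, None) is not None:
            self.pool.append(ipaddress.ip_address(ip))

ipam = IPAddressManager("10.0.0.0/28")
print(ipam.allocate("vm-1234"))  # e.g. 10.0.0.1
```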

This is one example. If you don’t have simple standards upfront, the number of rules and interactions reaches a complexity where it just doesn’t work, and it becomes a maintenance nightmare.

Sounds daunting, doesn’t it? Well it is, if the cloud team doesn’t do the proper gap analysis upfront. When they do, it’s just a project like any other.

Don’t Spam Your Timeline

I Hate Getting SPAM in My Mailbox!

For some reason, people spam their own timelines with updates from other apps or services. One of the worst on Twitter is the “How I did on Twitter” bot, followed by Swarm, Klout, etc.

Why is this annoying?

For the same reason robo-calls are annoying: you are optimizing your time by consuming my real-life time.

It is a fundamentally disrespectful action. I have to spend real-life time, a very finite commodity, to enable you to optimize yours.

I don’t expect every thing on Twitter to be smart or relevant. I expect it to be human — with all the smarts, foibles, insights, stupidities, etc — like my own.

I DO expect people to post pictures of food and drinks on weekends. I don’t expect links to white papers. I don’t necessarily care for a picture of a beer on Monday morning, although I respect that your life is awesome. But I do appreciate a link to a white paper on Monday morning.

But not the bots.

Thoughts on Cloud Brokerage

I wrote this post in the Summer of ’13 on my old blog. I’m bringing it here as it is still relevant in the Fall of ’15.

My thinking has matured, evolved and expanded. But before I can write that, I need to bring this back.

Service Catalog as a Service Broker. Putting Your aaS to Work

Recently I’ve been involved in a number of conversations about the relationship between service brokers, service definitions, the service catalog and the design of service interfaces. I’ve encountered a lot of confusion and wrong assumptions, which have the potential of costing a lot of money.

So as a way to clear up my thinking, I’m going to note a few thoughts on this today. It’s not a finished piece by any means. Wear a hard hat while reading it; pieces will fall.

Let me start by saying I’m vexed by the phrase “Service Broker”. I’m often vexed by people spray painting new words on existing concepts.

One notion is that a service broker is the control plane and panel for obtaining online services from external vendors. Which is fine, but this is also what a good, modern, non-service-desk-restricted service catalog provides. I have covered this topic in my last 837 blog posts.

The second meaning for a service broker emphasizes the word “broker”. The broker enables the customer to aggregate and arbitrage services, to enable the selection of the service that is cheapest, best, compliant, or name-your-favorite-characteristic.

A common example used by proponents of “brokering” is the contracting of compute resources, where we may want to decide where to place a workload based on price. After all, a gigahertz is a gigahertz everywhere, right? Well, no. The network, storage, location, distance, latency, IOPS, liability protection, certifications, applicable laws and many other factors (Will someone pick up the phone? Is there a number to call?) matter.

I don’t believe any commodification of infrastructure services is coming any time soon (as in the next ten years). There are just too many facets to internet services, such as quality, latency, richness of offerings, laws, SLAs and data sovereignty, that prevent making effective like-for-like comparisons on the fly. We will need procurement specialists to do these types of comparative evaluations.

Also, if you drink coffee, you’ll know that coffee, which used to be a commodity, is anything but that today.  Coffee is a lot simpler to acquire than datacenter services. I’ll have my server with half-caf, soy-latte with a twist of lemon.

But even if we could flatten service definitions so they were more comparable (they can be flattened — sourcing specialists do it all the time), the acquisition process still doesn’t match the way the enterprise procures today.

Enterprises engage in two classes of spend: contract and spot. The majority of spend is contract. Why? Risk, pricing, security, control, quality, availability and convenience.

By the way, that’s why enterprises use e-procurement and constantly try to reduce the number of vendors under management. It’s easier to control a small number of providers that represent a large part of the enterprise’s spend than thousands of small vendors to whom that spend is not meaningful.

For example, take the issue of liability: AWS assumes very limited liability for anything.  Most enterprise contracts have extensive sections around what happens when things fall apart and the network operations center cannot hold.

In my experience reviewing these types of contracts, a fair amount of their text goes into defining the “service”, and it’s not an API call: it’s the request process, approvals, support, training, incident management, remediation and service levels that define the “service”.

By the way, I don’t mean to imply these contracts are actually workable or actionable (most are not), just that a lot of effort goes into creating them to try to define the “service.”

I once spent a week with a team of 15 people trying to convert a massive outsourcing contract into a service catalog. It turns out to be surprisingly easy to do with 2 people, but impossible with 15.

Two recent examples help make the case for contract buying. One: Amazon, which does offer compute pricing on an hourly basis, now offers one- and three-year terms. Why? By-the-hour pricing is like by-the-hour hotels: very expensive if you plan to live there a year. You are better off with a lease.

Second, the CIA announced a $600M contract with Amazon to build a cloud.

Well, they are the CIA, someone might say. Of course they need something more secure.  To which I’d say, baby, today we are all the CIA.

Also, if you read the drivers for establishing strategic sourcing and procurement processes, you’ll find use cases analogous to brokering: stop maverick buying (we call it rogue in cloud), reduce costs, implement governance, reduce or eliminate manual processes, apply consistent controls, and rationalize the supplier base.

So it does seem like the service broker concept staples a service catalog to a procurement system; but will it cross the cloud?

As for the idea that somehow an intermediary can establish a market place of services and “broker” between enterprise and service provider, I don’t buy it – pun fully intended.

This approach did not work in the dot-bomb era with e-marketplaces. It turns out the buying power of large companies is significant, the internet flattens information brokerage, and margins are way too thin for these intermediaries to earn much money.

As for brokerages serving small and medium-size businesses, I’d say: yeah, sure. They are called resellers, like PC Connection, or TigerDirect, or, yes, Amazon retail. This is not a new development.

In summary, there are considerable transaction costs that need to be accounted for in the notion of service brokers. In the service catalog world we talk about policy, governance and automation as the way to get those contract terms implemented. In fact, most enterprise buying is contract buying and not spot buying.

I’ve argued that a service catalog already provides the functionality of a service broker and that there’s unlikely to be a marketplace of services that is profitable or that enterprises will change buying behaviors in the next ten years.

So is there anything new about this service broker concept that is worth considering? The answer is YES: the advent of service interfaces, a.k.a. APIs, opens a new service channel.

So for the first time we have the opportunity to design services that are born-automated. How do we think about them? What are their characteristics?
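As a first, hedged stab: in a born-automated service, the contract terms live in the interface itself instead of in a paper document. The endpoint and fields below are hypothetical:

```python
# A sketch of a "born-automated" service request: the contracted
# catalog (here, a tiny allowed set of sizes) is enforced in code.
import requests

ALLOWED_SIZES = {"small", "medium"}  # the contract terms, codified

def request_server(size: str, owner: str) -> dict:
    if size not in ALLOWED_SIZES:
        raise ValueError(f"{size!r} is outside the contracted catalog")
    resp = requests.post(
        "https://catalog.example.com/api/v1/servers",  # hypothetical endpoint
        json={"size": size, "owner": owner},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```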

That is worth exploring in its own blog post.

As I said at the beginning, these are my preliminary thoughts. Comments? Questions? Observations? Organic, heirloom tomatoes?