Cloud blogs that caught my eye April 26, 2022

There are a lot of interesting cloud articles out there. These caught my eye, and I shared them with colleagues.

AWS Migration Hub Orchestrator – New Migration Orchestration Capability with Customizable Workflow Templates – Migration is a big focus for all hyperscalers, and they continue to invest in tools that simplify it and reduce costs. This latest release from AWS brings standardized workflow patterns that remove manual steps in large-scale application migrations.

I like this case study about Expedia’s platform; it very much mirrors what other clients will face. “Expedia Group has over 200+ travel sites in 70+ countries that offer 5+ million properties, as well as 500+ Airlines. To support this, Expedia Group runs over 9,000 applications managed by 650 engineering teams across 400+ AWS accounts that utilize tens of database technologies. Each of these accounts has different automation levels to provision and manage data infrastructure.” How Expedia Group built Database as a Service (DBaaS) offering using AWS Service Catalog

You might want to subscribe to the Azure Cost Management page. It’s a great way to keep up with the latest optimization techniques and to learn about the changes in Azure that affect costs or create opportunities for savings. Or, even easier, follow it on Twitter as I do.

Finally, if you are network-security curious, and who isn’t, this primer on the fundamentals of AWS network security is good. A complicated area made a little easier by a well-written introduction.

The new cloud primitives: eventing

The new primitives of cloud: “Cloud Run for Anthos brings eventing”.

Reading the examples in the blog post of how to use these types of cloud primitives is at once both exciting and terrifying. Exciting because of the decoupled architectures they enable. Terrifying because of events run amok that are impossible to debug; a new crazed monolith.
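To make this a bit more concrete, here is a minimal sketch of what consuming one of these event primitives can look like, assuming CloudEvents delivered over HTTP in binary mode; the endpoint and event type below are hypothetical, not taken from the Google post:

```python
# A minimal sketch of an event consumer, assuming CloudEvents delivered over
# HTTP in "binary" mode (event attributes in ce-* headers, payload in the body).
# The endpoint and event types are hypothetical.
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def handle_event():
    # CloudEvents binary mode carries metadata in ce-* headers.
    event_type = request.headers.get("ce-type", "unknown")
    event_source = request.headers.get("ce-source", "unknown")
    payload = request.get_json(silent=True) or {}

    # Route on event type so producers stay decoupled from consumers.
    if event_type == "com.example.order.created":   # hypothetical type
        print(f"New order from {event_source}: {payload}")
    else:
        print(f"Ignoring event type {event_type}")

    return ("", 204)

if __name__ == "__main__":
    app.run(port=8080)
```

The decoupling is the exciting part: the producer knows nothing about this consumer. It is also exactly why the event flows need to be made visible, or debugging becomes guesswork.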

What does this mean? New operating models coupled with new tools to make event streams visible. And here culture, skills, organization, cost, security will be the competitive advantage. Again.

Architecture flexpoints

I was reading this fascinating post about how Azure handled the increased demand brought about by the pandemic. 100% growth in demand created quite a challenge, and Microsoft had to respond across a range of teams to deliver capacity to Azure customers.

An example of the kind of action taken:

Microsoft turned off the little “your co-worker is typing” and read receipt notifications for Teams users in peak-demand regions, reducing the CPU capacity required to process those functions by 30% and returning that capacity to Azure customers.

This got me thinking about architecture “flexpoints”. Don’t worry, flexpoints are not a DevOps concept; it’s just a term I’m borrowing from math to describe the current phenomenon, brought about by the Covid-19 pandemic of 2020, of having to drastically scale capacity up and down.

We’ve seen demand drop dramatically for some services, like reservation systems for hotels, planes, concerts and restaurants. We’ve seen others torque up torrentially (video, collaboration, health care).

This reminded me of a session I attended about ten years ago on Facebook scaling. The use case was Facebook introducing a major new service (personal URLs, I believe). The service would launch at a certain hour of the day and people could claim their own URL; a mad scramble that could overwhelm the service. One solution was to show a screenshot of the page when the sign-up dialog came up: the user still saw the page, but it was not the live page behind the dialog. The user experience was the same, yet all the underlying microservices that the page would have called were freed up.

My takeaway is that today we need to architect, build and understand the flexpoints in our applications to deal with these sudden spikes and drops in capacity. By flexpoint, I mean the features or functions that can be turned off gracefully and quietly to free up resources when capacity is constrained. And I’m defining capacity here as both resource and financial, whether it’s Azure demand doubling overnight (resources) or demand dropping 90% overnight (financial).
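To make the idea concrete, here is a minimal sketch of a flexpoint implemented as a feature flag that operations can flip when capacity gets tight; the flag store and feature names are illustrative assumptions, not anything Microsoft published:

```python
# A minimal sketch of a "flexpoint": a non-essential feature that can be turned
# off gracefully when capacity is constrained. Flag store and names are assumed.
import os

def flexpoint_enabled(feature: str) -> bool:
    # In practice the flag would come from a config service; here, an env var
    # such as DISABLED_FLEXPOINTS="typing-indicator,read-receipts".
    disabled = os.environ.get("DISABLED_FLEXPOINTS", "").split(",")
    return feature not in disabled

def compute_typing_indicator(user):
    ...  # placeholder for the CPU-heavy presence calculation

def render_chat_status(user):
    if flexpoint_enabled("typing-indicator"):
        return compute_typing_indicator(user)   # expensive, non-essential
    return None  # degrade quietly: the core chat experience is unaffected
```

The point is that the degradation is designed in advance, so turning the feature off is an operational decision, not an emergency code change.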

Here’s another example, this time a financial one. In my role, we provide an advanced cloud resource discovery service that is built entirely on a serverless architecture. Not a single VM is used; instead, a variety of PaaS, database and security services are consumed. It’s fast, and it discovers thousands of accounts to maintain the cloud inventory for our organization.

As the estate grew and the security requirements increased, I saw costs continue to increase as well. So, discussing with the app architects, we asked how to reduce costs without impacting the service. Because discovery was scheduled to run every 15 minutes, launching thousands of Lambda functions and containers, the obvious flexpoint was the discovery schedule.

The team moved to running discovery every 30 minutes. The result? Costs dropped by 36%. Changing discovery to every 120 minutes would probably halve that cost again. That’s an example of a financial flexpoint.
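For what it’s worth, the change itself can be a one-liner if discovery is triggered by a scheduled rule. Here is a hedged sketch using boto3; the rule name is hypothetical and the IAM permissions are assumed:

```python
# A hedged sketch of the scheduling flexpoint described above: widening the
# interval on the rule that triggers the discovery functions.
# The rule name is hypothetical; boto3 credentials and permissions are assumed.
import boto3

events = boto3.client("events")

def set_discovery_interval(minutes: int) -> None:
    # Changing rate(15 minutes) to rate(30 minutes) roughly halves invocations,
    # which is where a cost drop on the order of the one above can come from.
    events.put_rule(
        Name="cloud-discovery-schedule",          # hypothetical rule name
        ScheduleExpression=f"rate({minutes} minutes)",
        State="ENABLED",
    )

if __name__ == "__main__":
    set_discovery_interval(30)
```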

We will continue to see volatility in our markets and our society, when 100-year floods happen every year. We should understand our architectural flexpoints so we can respond to these black swan events.

Enjoy

PS: The actual Microsoft blog post with the fuller but drier details.

Product Manager? Product Owner? Service Owner?

In recent weeks, I have been in conversations with people inside as well as outside the company about how to effect cloud-enabled transformation.

There are new certitudes in play. “We must be cloud native,” “We are going to be agile” (maybe lean as well?), “We aspire to deliver faster using continuous delivery/ continuous integration”, and of course, “We’ll treat infrastructure as code.”

I think I may need to include DevOps or DevSecOps — but out of sheer perversity, let me add Finance to it and call it DevSecFinOps.

The small fly in the ointment is, well, a bunch of silos, a lack of skills and no clear roadmap for getting from the as-is model to the to-be model. Often the tech stack, the architecture and the development model are the easy bits; there are lots of blogs, articles, training, etc. that can take a small team and enable them in the new technology and practices.

One can hire experts, but also provide training in the new stack to the engineers, and one can get going. Add some Agile training, or SAFe if that’s your environment (people argue for and against SAFe — not the subject of this blog).

All of this can get an organization started on the road to cloud in some shape.

But I have a question: who is the product owner? Or who is the product manager? I wrote about this role back in 2011 and it still astounds me that enterprises don’t have a ready answer. My friend Wayne Greene has a good blog and book on the product management topic.

And yet the role is unclear.  I think the product owner role in Agile needs to be made more explicit.  My question starts with: how much power does the role have? Can they change requirements? Push launch dates? Do they own the success of the product? To use an old adage, are they the CEO of the product?

I have owned the role in small software companies, in my own startup as CEO, CTO,  and in some very, very large companies. And while it varies, the ultimate responsibility of the role is to drive the success of the business through product innovation aligned with the company strategy and the sales channel.

Product managers should not have to be empowered; they must have the power to drive the product. If your digital transformation or org model does not have the role and the functions it manages in place, you should treat that as a major risk to the effort.

Why? Two reasons: one, your ability to respond to market changes and coordinate all the necessary functions will be compromised, and two, you won’t be able to recruit the talent you need to drive the business.  The professionals that can drive the business will realize it’s a losing battle and avoid the company; the job will be seen as a loss of stature.

These are not my final thoughts on the topic but my observations, based on how difficult it is to define the role of product owner / product manager compared to other roles in the agile transformation.

 

You want cloud native apps? You may need to lift (and shift)

Lift!

Last week, at Google Next, there was a lot of talk about Apps built on the wonderful new abstractions and platforms cloud providers and PaaS providers are bringing to market.

These apps are sometimes called cloud native.  And in 2017, it’s the way to engineer new apps.

But what about the existing apps? The other 99% of the portfolio?  The answer for those is to migrate them, as-is, to the new cloud platform in the least disruptive way possible.

This “lift & shift” is sometimes dismissed and/or belittled by some of my cloud friends as not being real cloud. They are correct. It’s not “real” cloud – whatever purity test that entails.

These apps won’t scale out or failover to a new region. In fact, the old apps running in the new cloud bring in all the old management processes such as ITIL into the new cloud environment.  The question is why even bother?

My answer: it’s a necessary transitional step towards cloud adoption. Here’s why:

  • You can’t re-write every app in a large enterprise in any cost-effective or time-relevant way. So dispositions and priorities will have to be made, and much will have to be migrated as-is.
  • Apps travel in packs. If you have ever seen an ERP diagram of all the systems it touches, it becomes pretty clear the “APP” is in fact a composite of many sub-systems. You can’t just move one component; you must move all the ones that have to be together, in one wave. This is something we do at very large scale, day in and day out, so I see it in practice.
  • Many enterprise apps are actually commercial off-the-shelf software (COTS) products. In fact, the core ERP systems are all COTS. The enterprise has neither the code nor the expertise to re-write these apps, and the vendor may have no plans to offer them as SaaS.
  • For every app, a disposition has to be made. Re-write? Remediate? Modernize? Lift-and-shift? Remove? Leave as-is until end of utility? The application portfolio view and the dependencies between components are a key piece of work to assess readiness to move to cloud.

The second bullet point is one of the main reasons to migrate (lift and shift) workloads to the cloud. New, cloud-native apps that extend or replace components of the older app can more easily be considered once the old apps are in their new cloud hosting environment. From security to networking to assurance, all these processes can now be carried out in the new cloud, thus spreading a set of fixed costs over a larger set of apps.

Wither the datacenter?

Once an organization has committed to adopt public cloud, a mental clock starts ticking; what is going to happen to the datacenter?  Keep it, and for what? Renew the lease? Close it?

And if half the apps go to the cloud but half remain, will the application owners be happy when their hosting costs double? After all, the fewer the tenants, the higher the allocated rent.

The reality is no app owner wants to be left holding the last app in the data center. This starts the planning cycle to shut down a datacenter.

You must vacate the premises by a certain date in order to free up the capital or opex and move to a cloud.

That’s the forcing function: avoiding a data center lease renewal or avoiding building one, and freeing up capital.  Hence the need for lift-and-shift.

People who are planning large migrations to cloud are aware that this doesn’t really change the quality of the service or make the app better, initially.

But they know they will be in an environment with a richer set of services. Which sets the stage for app modernization.

A real case study: Accenture Cloud Platform evolution

My own area is a microcosm of what I just wrote. I’m responsible for a software product offering called Accenture Cloud Platform. We architect, engineer and deliver a complete cloud management platform as a service, billed on a consumption model. We offer a range of services: cloud procurement, brokerage, cost analytics, managed billing, workflow automation and many other managed services.

In early 2014 we stirred, shook, and moved our apps from an internal data center to the cloud. Our cloud management offering is made up of many underlying software components: three-tier components (web, app servers, databases), open source components, COTS components (including one that requires a Windows Server), plus a variety of tools like backup servers, monitoring, patching, antivirus, application performance managers, user tracking, and more. An enterprise cioppino of software.

This whole stack is dedicated to delivering great cloud management services for our clients. Step one in going to cloud? We lifted and shifted the entire stack to the cloud.

Once we were stable and comfortable in the new environment, we started modernizing. No more self-managed SQL; now we use SQL as a service. No more load balancers to run ourselves; we adopted native DNS and the native data warehouse.

At the end of 2015, we faced the need to build a whole new discovery and policy component for our offering. From managing thousands of VMs, the next challenge required us to manage billions of items.

We chose to go all in on serverless for this new component; it would sit alongside the older components (see prior post). This May we will deliver brand new functionality for our offering, this time using a modern web-native stack.

This is why lift and shift is strategic. Once one can operate in the new environment, one has  the option to modernize and incrementally re-write components while adding value to existing operations.

In theory, one could execute this strategy without a massive lift and shift. In practice the cost of network modernization in the datacenter, the complexity of security design, and the need to modernize the old apps in the old infrastructure to make them “sufficient” to work with the new apps make this scenario unrealistic. Better to lift and shift.

 

 

The Accenture Cloud Platform Serverless Journey

Last week at AWS re:Invent, my team presented the architecture of the next generation of the Accenture Cloud Platform, which uses a serverless approach.

We started this journey in November of last year to solve a set of business challenges that are common to any product group in a large enterprise faced with a fast-growing market disruption like cloud.

  1. Speed. Given how quickly cloud services are evolving, the challenge we faced was how to add support and management for these services at speed. Our traditional approach meant falling further behind every day; AWS claims they are adding three major features a day.
  2. Budget. While cloud services are growing at very high rates, our engineering budget will not grow. We needed a way to disconnect cloud growth from cost growth. The only way to do that is to create a true platform that allows other members of our ecosystem to serve themselves and others. Think of it as Tom Sawyer getting the other kids to paint the fence, but in a win-win approach.
  3. Commercial alignment to market needs. It’s no secret that customers want everything as a service and to pay on a unit consumption basis. The challenge is that the number of “units” of measure is growing rapidly. Serverless allows us to dial the unit of consumption all the way down to a specific customer function, like “update CMDB” or “put a server to sleep”. And it’s done at prices that are fractions of a penny (see the back-of-the-envelope sketch after this list).
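To show why “fractions of a penny” is not an exaggeration, here is a hedged back-of-the-envelope sketch; the prices are assumptions based on typical published list prices and will vary by region and over time:

```python
# An illustrative per-call cost for one "unit of consumption", e.g. a single
# "put a server to sleep" call implemented as a serverless function.
# Prices below are assumptions and should be checked against current pricing.
PRICE_PER_REQUEST = 0.20 / 1_000_000     # roughly $0.20 per million requests
PRICE_PER_GB_SECOND = 0.0000166667       # roughly $ per GB-second of compute

def cost_per_invocation(memory_mb: int, duration_ms: int) -> float:
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# A 256 MB function running for 500 ms costs a tiny fraction of a penny.
print(f"${cost_per_invocation(256, 500):.7f} per call")
```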

As I said, to address the speed and budget challenges we created an architecture that provides room for third parties to add features and interfaces without needing our team to be involved. We defined three speeds of delivery and created rings. In ring zero we deploy code multiple times per day; it’s where the core multi-sided, multi-tenant platform lives.

In ring one, we bring features out on a monthly basis. A different engineering team creates the standard features and functions users of ACP “see”.

Ring two is where client projects live. ACP is a platform that can be extended, integrated and tailored for specific customer uses. Besides generic cloud management, it’s always necessary to stitch together both new and existing operational models. Roles, access, systems, services, etc. all have to be integrated into one control plane to use cloud effectively at scale. Most one-app startups stitch their own; enterprises have literally thousands of apps, so we offer our own ready-to-go platform, delivered as a service.

These projects have their own time frames. It’s here that our ACP serverless architecture delivers the greatest value. If a client needs a new service X managed, a simple guide and set of APIs enables our consultants to extend the platform in a secure, stable and supportable manner.
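To give a flavor of what a ring-two extension might look like, here is a hedged sketch of a small Lambda-style handler. The event shape and the idea of a “register with inventory” call are illustrative assumptions, not the actual ACP API:

```python
# A hypothetical ring-two extension: a small Lambda-style handler a project
# team could drop in to have the platform manage a new service "X".
# The event fields and the inventory call are illustrative assumptions.
import json

def handler(event, context):
    resource_id = event.get("resourceId")    # assumed event shape
    action = event.get("action")             # e.g. "register", "deregister"

    if action == "register":
        record = {"id": resource_id, "type": "service-x", "managed": True}
        # A real extension would call the platform's inventory API here.
        print(f"Registering with inventory: {json.dumps(record)}")
        return {"status": "registered", "id": resource_id}

    return {"status": "ignored", "action": action}
```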

To be clear, ACP is a large platform. There are components that are traditional three-tier apps and others are commercial products.  Our journey is about the components we build to make ACP be a platform.

I suggest you watch the video. My colleague Tom does a brilliant job.

Accenture Cloud Platform Serverless Journey

Where Did Your Machine Learning Algorithm Go to School?

The Russian Machine

One of the hottest areas today is Machine Learning (ML). It will do magic. Cure cancer (literally) and free humanity from drudgery. Which is so naive. Humans have an infinite capacity to make a mess of things. One day the machines will learn that too.

Recently I’ve been shown some product demos claiming to use ML to solve some aspect of rather intractable problems; problems usually worked on by skilled architects or specialized experts in their field. In both cases, the (different) presenters claimed that, through ML, the product would in one case define and architect a system, and in the other improve a business process.

I was intrigued and incredulous. I could see some automation, but solving the whole thing?

I did not believe it. But when something is hot and new, we often lack the language, confidence and assessment framework to refute or judge the claims being made. I politely asked a few questions but I felt a bit uncomfortable.  I kind of stewed for a few days.

Finally, I arrived at some questions and a way of thinking that will help me have a more detailed dialogue with people making claims based on ML. Here it is:

Schooling. Where did your ML algorithm go to school? Who were the teachers?

Curriculum. If the machine is to learn, it needs data. Lots of data. How much data was used to teach the machine? What was the quality and provenance of the data? Is it sufficient data for learning to happen? What biases are built into the data sets?

Graduation. What exams did the ML algorithms pass? What grades did they get?

Work Experience. Where and when did they do their internships? What projects and results have been produced?

This framework may seem humorous (I hope) but it’s also useful.

ML algorithms are only as good as the teachers they have. And for now, all the teachers are human.

ML requires a sufficiently large amount of data from which it can learn. This data is actually hard to get! In the two cases above, there are no data sets that can be bought or used, so it struck me that the ML algorithm would deliver trivial advice, heavily biased toward the one or two experiences encoded. Not enough data to learn anything. And the definition of “sufficiently large” will vary by problem.

On graduation, the notion of exams to pass your class is the equivalent of Quality Assurance in software development. How do we know someone knows something? We test them.

On work experience, same thing.  Nice that you studied and passed your exam. But work experience is how we know someone actually ‘knows’ how to do something.
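In software terms, the “graduation exam” is simply evaluation on held-out data the model never saw in class. A minimal sketch using scikit-learn, with a stand-in dataset and model chosen purely for illustration:

```python
# A minimal sketch of "graduation": grade the model on an exam (held-out data)
# it never saw during schooling. Dataset and model choice are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)

# Schooling happens on the training split; the exam on the held-out split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# The "grade": accuracy on questions the student has never seen.
print("Exam score:", accuracy_score(y_test, model.predict(X_test)))
```

Work experience is the step after: the same kind of measurement, but on live projects and real-world data rather than a curated test set.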

In summary, these are the questions I will be asking about ML going forward.

Remember, my opinions only represent my opinions.

Cloud Security Concerns: Welcome to AdventureLand

Now Leaving Adventureland

On a recent visit with senior executive customers in Europe, I heard concerns about cloud security. These are similar to the concerns American CIOs have about cloud.

But they mean something different. For European CIOs, in addition to the standard concerns American CIOs share, there’s the “other” security concern: spying by American intelligence agencies.

This fear is real and grounded. The Snowden revelations of spying with the seeming cooperation of certain internet services, spying on allied politicians, plus the recent revelations about Yahoo’s indiscriminate handling of email do not help. In fact, the FBI, NSA and CIA have done more to support the datacenter business of national telcos than anything those companies could do to compete with the hyperscale clouds.

How do I convince them that in fact cloud is safer than a private data center? As an American?

I don’t.

I tell them that they should assume the NSA, CIA, FBI etc are trying to hack them. As are the Russians, Chinese, North Koreans, and every major intelligence agency in the world.  I tell them we have gone from a world of individual hackers to criminal networks to now nation states.

I ask them how many security people they have and what their level of experience is. The answer is usually not enough and not enough. Not enough skills, budget or focus.

How do their capabilities compare to the capabilities of major cloud providers in terms of numbers and expertise? Expertise honed over 20 years of fighting hackers for whom Google, Amazon, Microsoft would be …. the apotheosis of hacking.

The fact is that security in the cloud is a shared responsibility, but one where the underlying service provider brings skills, experience and scale that most enterprises don’t have in house.

This is why cloud is more secure than a private data center: very few enterprises have the skills or budget to protect against a superpower trying to hack their datacenter.

 

Startup Choices Rapidly Becoming Enterprise Choices


As I wrote in “The Three Freedoms of Cloud”, cloud is all about agility and speed. A couple of stories both illustrate and enlarge this point through the adoption of serverless architectures.

A Startup Volunteers to Be the Canary in the Code Mine

The first story comes from a startup called Siftr (You know they are a startup because they could only afford one vowel).

The post is on the value of serverless as experienced by a small startup. They chose Google App Engine for their application.  Here’s the relevant quote.

An enterprise, with practically infinite resources at disposal, can afford to be selective about every single component that goes into their tech stack and can recreate the entire wheel. However, for startups, time is gold! Their choice of technology should help them maximize their number of ‘build-&-measure’ iterations and yet should never hinder the product design.

Source: Why choose Google AppEngine for starting-up — Technology @ Siftr — Medium

As a past startup guy, I completely lived this reality.

When I started newScale, we faced a choice of what to use to develop our product. Only three engineers, a short runway of cash, and lack of clarity about our market.  Job one has to be product / market fit.

At the time, Java was not really an economic or performant choice ($300K for an app server was the quote from WebLogic; I didn’t see the logic of buying). C++ (we came from that world) was too complicated, slow and lacked portability, and Perl and those types of languages were unknown to us.

So we chose ColdFusion.

Over the years, I got so much grief for that choice, and five years later we re-wrote the app (at great cost) in Java. So was it the wrong choice? NO! That choice allowed us to build a rich, enterprise-level product and acquire major enterprise clients, which let us raise $26M over two rounds.

It was thanks to that initial success that we could afford to re-write it.

(New) Reality Arrives to the Enterprise: Fast Eats Big

Market advantages have become transient. The time in which an enterprise could build an effective wall against competitors is fast fading. Enterprises have responded by beginning to adopt the tooling, practices, and culture of startups to remain competitive.

My friend James Staten, Chief Strategist for Cloud at Microsoft Azure, has some useful research to share regarding the value of moving to PaaS.

Sure, moving your applications to the cloud and adopting more agile DevOps methods will save you money and make your complex application deployments more efficient but if you want the really big gains, focus the majority of your developers back on coding. According to Forrester Consulting, the gains from doing so are massive.

PaaS value chart

Yes, re-hosting or even building cloud-native apps in VMs and containers will give you big gains in agility and cost savings but asking your developers to manage configurations, update the OS and middleware and pick the right virtual resources to support their apps (let alone manage the security of their deployments) is a distraction that cuts into their productivity.

And he shares some pretty dramatic numbers about the economic impact.

How much, you ask? In its latest Total Economic Impact Study, Forrester Consulting interviewed a number of current customers of Azure’s PaaS services and concluded that migrating to PaaS from IaaS resulted in a 466% return on investment. For customers migrating from on-premises environments to PaaS, the return on investment can be even greater. Time to market also improved by as much as fifty percent, because of the efficiency and speed of deploying applications with PaaS services.

James works for Azure, but I believe these types of results apply to serverless architectures as a whole. And to be clear, these savings do require the adoption of new operating models such as DevOps. Sticking to waterfall and ITIL won’t get you there.

Also Amazon Web Services

And not to leave Amazon Web Services out: in our Accenture Cloud Platform, we are using Lambda for certain components. Our entire system is composed of COTS (we don’t own the code), homegrown components written in Scala, web services from cloud vendors, and now new components written in Python using Lambda. In other words, not atypical of what you’ll see in a large enterprise that has history and heritage in its infrastructure. Heritage sounds better than legacy.

The value we get is similar to what others are finding:  savings in time to market, flexibility and agility.  And improved costs and security in our case.   Yes, you have to live within the constraints of the underlying PaaS, but that discipline keeps us focused on building value.

But What if It’s the Wrong Choice?

There’s a fear that using these tools could be the wrong choice. Well, it could be, and eventually it may have to be re-platformed, but you’ll either know it quickly enough to make a change, or it will be a long way down the road, after you’ve already gotten the value.

But if it helped achieve product/market fit, improve time to market, validate the market thesis and/or deliver competitive advantage, the fair answer should be: WHO CARES? The business objective has been achieved.

Also, the cost of re-writes and re-platforming is not what it used to be. The cost of constructing new apps has fallen to an all time low thanks to the availability of tools, open source, and web services. In many cases, it will be faster to re-write it than to fix it.

Time to Be Rigorous. How to Ruin a Cloud Implementation: Leave Self-Service for Last

I’m republishing this from my old blog. I wrote this in 2011, yet I was having this conversation today.  I realized the move to as-a-Service still requires a methodology. 

Self-service should not be an afterthought in cloud projects; it’s the beginning of the journey. Self-service drives the standardization of offerings and reduces the labor costs that arise from designing, specifying, ordering, procuring, and configuring computing, storage and network resources on a custom basis.

This standardization and automation also apply to application components, security and network services such as LDAP, DNS, load balancers, etc. I say “drives” because the moment an organization decides to provide infrastructure on demand, three questions arise that are at the heart of the beginning of the cloud journey:

  1. Who are my customers?
  2. What are my offerings?
  3. What are my existing delivery processes?

Like any other business, the question of who we are serving, what they want to buy, and how we deliver leads to many other questions. These questions are at the beginning of the journey to the cloud operating model.

But, haven’t we answered these questions before when we built our ITSM catalog?

Answer: NOT with the rigor required by a self-service and automation program.

Once we decide to offer a cloud service, these questions need to be answered with a BRUTAL and RIGOROUS specificity to be codified into self-service choices. Usually, until the decision to deliver cloud services, these “customer” catalogs are vague, warmed-over recapitulations of some well-intentioned static “service catalog.”

In my experience, very little of that prior work is usable when confronted with the needs of the cloud project. I’m loath to say it’s useless; sometimes the project team that did the prior catalog is actually in the best position to understand the shortcomings of the prior work and can be a good contributor to the cloud project.

Getting back on point: once the decision is made to offer infrastructure as a service to the user community, the cloud team faces three tasks.

First, define the customer as a consumer of a set of offerings that address some activity the customer needs to carry out. For example, “test a patch to the scheduling system.” The team needs to figure out the proper metaphors, abstractions, rules for configuration, rules for consumption, and the choices allowed the customer. And this needs to be in the language and domain of competence of the customer.

This is hard for us technical people. We like lots of choices, we understand a lot of things our users won’t.  The exercise of understanding the customer requires a lot of trial and error.

Sometimes I see “cloud interfaces” that are really admin consoles for configuring compute, network and storage resources. These UIs are targeted at unicorn users, because developers, the real target of cloud, are usually not network experts, and, other than disk space, they don’t know much about storage. This is a unicorn customer: she doesn’t exist!
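Here is a hedged sketch of what that “brutal and rigorous specificity” can look like when codified: the offering becomes a small, constrained menu in the customer’s language rather than an open-ended admin console. The names, sizes and limits below are illustrative assumptions:

```python
# An illustrative offering definition: a constrained, customer-language menu
# rather than open-ended infrastructure knobs. Names and choices are assumed.
from dataclasses import dataclass

ALLOWED_SIZES = {"small", "medium", "large"}        # t-shirt sizes, not IOPS
ALLOWED_PURPOSES = {"dev", "test", "patch-testing"}

@dataclass
class EnvironmentRequest:
    purpose: str        # e.g. "test a patch to the scheduling system"
    size: str           # abstracted size, mapped to infrastructure by the team
    days_needed: int    # drives automatic reclaim of the environment

    def validate(self) -> None:
        if self.purpose not in ALLOWED_PURPOSES:
            raise ValueError(f"unknown purpose {self.purpose!r}")
        if self.size not in ALLOWED_SIZES:
            raise ValueError(f"unknown size {self.size!r}")
        if not 1 <= self.days_needed <= 90:
            raise ValueError("environments must be reclaimed within 90 days")

req = EnvironmentRequest(purpose="patch-testing", size="small", days_needed=14)
req.validate()
```

The constraints are the point: every choice removed here is a decision the automation no longer has to make downstream.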

Second, the cloud team now needs to break down the end-to-end service delivery process into its component processes, specifying all the hand-offs and activities, the tools and systems used, and the data required both to execute activities and to make decisions.

This is where standardized offerings become the difference between success and abject failure as they simplify decisions and data gathering.

If every car is a Model-T, then manufacturing, supply chain, capacity management, procurement and planning are all much easier. If you have a large variety of options, it’s much harder. Start small. Amazon Web Services’ first compute offering was one small Linux instance. That’s it. That original m1.small is the Model T of the cloud era.

Third, a good gap analysis and coverage plan is needed. What we tend to find at this stage of cloud design is a gospel of gaps: rules in people’s heads, governance rules that get in the way (hello CAB!), existing systems built for responding in days and weeks rather than minutes and seconds.

There are also missing systems. Sources of record are unstructured (like a Word document or wikis) rather than a database or a structured data model. The few tools that do exist lack APIs, were built for a single tenant, do not enforce role-based access control, or were not designed for consumer use.

Good process design needs to inventory these system and data gaps.

For example, take the task “Assign IP address.” Today, Jason the network admin gets an e-mail, opens his spreadsheet, and sends an address to someone who then assigns it to the virtual machine. Now, we need to enable the user to assign an IP address on a self-service basis. So no Jason, no spreadsheet, no manual steps. But we do need to say yes to an IP Address Manager, a portlet, a lifecycle manager, and a consumption and reclaim process.

This is one example. If you don’t have simple standards upfront, the number of rules and interactions reaches a complexity where it just doesn’t work and it’s a maintenance nightmare.
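A minimal sketch of what replacing Jason-and-the-spreadsheet might look like: a toy IP address manager with allocate and reclaim operations that a self-service portal could call. The subnet and the in-memory store are illustrative; a real IPAM would persist state, expose an API and enforce access control:

```python
# A toy IP address manager: allocate and reclaim addresses from a pool so that
# self-service provisioning never depends on an e-mail and a spreadsheet.
# The subnet and in-memory store are illustrative assumptions.
import ipaddress

class IPAddressManager:
    def __init__(self, cidr: str):
        self.pool = [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]
        self.allocated: dict[str, str] = {}   # ip -> owning resource id

    def allocate(self, resource_id: str) -> str:
        for ip in self.pool:
            if ip not in self.allocated:
                self.allocated[ip] = resource_id
                return ip
        raise RuntimeError("address pool exhausted")

    def reclaim(self, ip: str) -> None:
        self.allocated.pop(ip, None)          # lifecycle: return IP to the pool

ipam = IPAddressManager("10.0.0.0/28")
vm_ip = ipam.allocate("vm-1234")
print("assigned", vm_ip)
ipam.reclaim(vm_ip)
```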

Sounds daunting, doesn’t it? Well it is, if the cloud team doesn’t do the proper gap analysis upfront. When they do, it’s just a project like any other.

Don’t Spam Your Timeline

I Hate Getting SPAM in My Mailbox!

For some reason, people spam their own timelines with updates from other apps or services.  One of the worst on Twitter is the “How I did on Twitter”-bot, followed by SWARM, Klout, etc.

Why is this annoying?

For the same reason robo-calls are annoying. You are optimizing your time, but consuming my real-life time.

It is a fundamentally disrespectful action. I need to consume real-life time, a very finite commodity, to enable you to optimize your life time.

I don’t expect every thing on Twitter to be smart or relevant. I expect it to be human — with all the smarts, foibles, insights, stupidities, etc — like my own.

I DO expect people to post pictures of food and drinks on weekends. I don’t expect links to white papers. I don’t necessarily care for a picture of a beer on Monday morning, although I respect that your life is awesome. But I appreciate a link to a white paper on Monday morning.

But not the bots.

Thoughts on Cloud Brokerage

I wrote this post in the Summer of ’13 on my old blog. Bringing it here as it is still relevant in the Fall of ’15.

My thinking has matured, evolved and expanded. But before I can write that, I need to bring this back.

Service Catalog as a Service Broker. Putting Your aaS to Work

Recently I’ve been involved in a number of conversations around the relationship between service brokers, service definitions, the service catalog and the design of service interfaces. I’ve encountered a lot of confusion and wrong assumptions, which have the potential to cost a lot of money.

So as a way to clear up my thinking, I’m going to note a few thoughts on this today. It’s not a finished piece by any means. Wear a hard hat while reading it; pieces will fall.

Let me start by saying I’m vexed by the phrase “Service Broker”. I’m often vexed by people spray painting new words on existing concepts.

One notion is that a service broker is the control plane and panel to obtain on-line services from external vendors. Which is fine, but this is also what a good, modern, non-service desk restricted service catalog provides. I have covered this topic in my last 837 blog posts.

The second meaning for a service broker emphasizes the word “broker”. The broker enables the customer to aggregate and arbitrage services to enable the selection of the service that is cheapest, best, compliant,  name-your-favorite-characteristic.

A common example used by proponents of “brokering” is the contracting of compute resources, where we may want to decide where to place a workload based on price. After all, a gigahertz is a gigahertz everywhere, right? Hmm, well, no. The network, storage, location, distance, latency, IOPS, liability protection, certifications, applicable laws and many other factors (Will someone pick up the phone? Is there a number to call?) matter.

I don’t believe any commodification of infrastructure services is coming any time soon (as in, the next ten years). There are just too many facets to internet services, such as quality, latency, richness of offerings, laws, SLAs, data sovereignty and others, that prevent making effective like-for-like comparisons on the fly. We will need procurement specialists to do these types of comparative evaluations.

Also, if you drink coffee, you’ll know that coffee, which used to be a commodity, is anything but that today.  Coffee is a lot simpler to acquire than datacenter services. I’ll have my server with half-caf, soy-latte with a twist of lemon.

But even if we could flatten service definitions so they were more comparable (they can be flattened — sourcing specialists do it all the time), the acquisition process still doesn’t match the way the enterprise procures today.

Enterprises engage in two classes of spend: contract and spot. The majority of spend is contract. Why? Risk, pricing, security, control, quality, availability and convenience.

By the way, that’s why enterprises use e-procurement and constantly try to reduce the number of vendors under management.  It’s easier to control a smaller number of providers that are a large part of the enterprise spend, than thousands of small vendors to whom the enterprise’s spend is not meaningful.

For example, take the issue of liability: AWS assumes very limited liability for anything.  Most enterprise contracts have extensive sections around what happens when things fall apart and the network operations center cannot hold.

In my experience reviewing these types of contracts, these documents spend a fair amount of time defining the “service” – and it’s not an API call; it’s the request process, approvals, support, training, incident management, remediation and service levels that define the “service”.

By the way, I don’t mean to imply these contracts are actually workable or actionable – most are not – just that a lot of effort goes into creating them to try to define the “service.”

I once spent a week with a team of 15 people trying to convert a massive outsourcing contract into a service catalog. Turns out to be surprisingly easy to do it with 2 people, but impossible with 15.

Two recent examples help make the case for contract buying. One: Amazon, which does offer compute pricing on an hourly basis, now offers one- and three-year leases. Why? By-the-hour pricing is like a by-the-hour hotel: very expensive if you plan to live there a year. You are better off with a lease.

Second, the CIA announced a $600M contract with Amazon to build a cloud.

Well, they are the CIA, someone might say. Of course they need something more secure.  To which I’d say, baby, today we are all the CIA.

Also, if you read the drivers for establishing strategic sourcing and procurement processes, you’ll find analogous use cases to brokering: stop maverick buying (we call it rogue in cloud), reduce costs, implement governance, reduce or eliminate manual processes, apply consistent controls, rationalize the supplier base.

So it does seem like the service broker concept staples a service catalog to a procurement system; but will it cross the cloud?

As for the idea that somehow an intermediary can establish a market place of services and “broker” between enterprise and service provider, I don’t buy it – pun fully intended.

This approach did not work in the dot-bomb era with e-marketplaces. It turns out the buying power of large companies is significant, the internet flattens information brokerage, and margins are way too thin for these intermediaries to earn much money.

As for brokerages serving  small and medium size businesses, I’d say yeah, sure. They are called resellers, like PC Connection, or TigerDirect, or yes, Amazon retail.  This is not a new development.

In summary, there are considerable transaction costs that need to be accounted for in the notion of service brokers. In the service catalog world we talk about policy, governance and automation as the way to get those contract terms implemented. In fact, most enterprise buying is contract buying and not spot buying.

I’ve argued that a service catalog already provides the functionality of a service broker and that there’s unlikely to be a marketplace of services that is profitable or that enterprises will change buying behaviors in the next ten years.

So is there anything new about this service broker concept that is worth considering? And the answer is YES.  The advent of service interfaces aka API’s opens a new service channel.

So for the first time we have the opportunity to design services that are born-automated. How do we think about them? What are their characteristics?

That is worth exploring in its own blog post.

As I said at the beginning, these are my preliminary thoughts. Comments? Questions? Observations? Organic, heirloom tomatoes?

Hi! We have a refugee crisis, it’s bad but it’s better than you think. Meet your local, friendly refugee!!

Hi. Nice to meet you. I would like to tell you a story about refugees and persecuted people that is fun and hopefully makes your day.

As we watch the news in horror, tens of thousands of refugees are dying crossing the Mediterranean Sea, walking through Greece into Macedonia and Hungary, trying to get into France, England, the Nordic countries. Anywhere they can feel safe.

They need help, but they are not victims.

They are strong, resilient people who have taken their destiny into their own hands. They have chosen life over death. Peace over war. Freedom over tyranny. They might make excellent neighbors once you get to know them. The block parties and BBQs are going to be tasty.

At this point, they need help. They’ll be fine, once we help.

You may wonder, how  am I so sure?

Well, I’m a refugee.

I came with my family in 1976 under a refugee visa. My dad spent three years in concentration camps. One day, I’m at school and my mother comes in and says, we are leaving. We met our dad at the airport, took a plane to San Francisco and here we were. Never to return to where I grew up.

That visa is pretty legit. You are stamped REFUGEE and you have one year to get the necessary “green card” for residence. (Another story for another time).

I apologize if the suffering is not enough. The Braniff airplanes were nice; it was my first time on a plane. But being exiled from your country and family is a universal quality all refugees share. In taxis, hotels, airports, hospitals, restaurants… we know each other. In a few words, we know each other. Uber drivers rate us five stars. We help each other.

So hello again, you now know an actual refugee.

One that has made a life here, played in a punk band, wrote a book, got married, had “anchor” children (yet another story!), started a company, employed hundreds of people, and whose technological innovation is core to cloud computing (or so I’d like to think). Right now I’m involved in helping the Global 2000 adopt the next generation of cloud technology. And yes, I am a refugee.

Some people know me as the founder of newScale, a software company acquired by Cisco in 2011. Some people would say I’m a “job creator”. I’d say I like inventing futures. But my influence is nothing compared to other refugees / immigrants like Andrew Grove, who escaped Hungary’s communist regime. Or Google’s Sergey Brin, whose parents emigrated from Russia.  All refugees. Steve Jobs’ dad was Syrian.

Hopefully, this gives you hope.

Immigration is a big challenge for any society. Yet it’s a big opportunity for our European brothers. This massive, ambitious, bold group of people, if welcomed, integrated and helped, might create the next European dream.

I write this because, between the news narrative and the nativist pejoratives, the wonder of what could be is being lost.

So let’s start here. Hello. I’m your friendly, local refugee. Ask me anything.

Mind The (Widening, Accelerating Innovation) Gap

Mind the Gap - ink and wash

This is a meditation on the economics of innovation and minding the innovation gap.

It started when I read the article Productivity Is Soaring at Top Firms and Sluggish Everywhere Else on the Harvard Business Review site. The article has a terrific and terrifying graphic that shows the widening gap in productivity between firms that are innovating and their competitors.

Perhaps more importantly, the gap between the globally most productive firms and the rest has been increasing over time, especially in the services sector. Some firms clearly “get it” and others don’t, and the divide between the two groups is growing over time.

The strength of global frontier firms lies in their capacity to innovate, which increasingly requires more than just investing in R&D and implementing technology. It requires the capacity to combine technological, organizational, and human capital improvements, globally.

What the author, Chiara Criscuolo, calls frontier firms are organizations that are experimenting and trying new things at the edge. This got me thinking that it maps well to arguments I have made before regarding agility and the freedom to fail as the real strategic business value of public cloud.

The piece neatly makes the case that “cost savings,” if they come at the cost of agility and freedom to experiment, may produce “false savings” that cause the firm to fall behind. Unfortunately, many senior leaders don’t formally track opportunity costs or productivity sink-holes the way they track procurement costs.

This translates into the internet meme that says: “Spend $500 and get three levels of approvals and controls. Call a standing meeting with 20 people? No one bats an eye.”

These “false savings” are effectively borrowing from the future to make the present seem rosy.

I’ve seen this behavior in sales organizations, where deals are “pulled in” from future quarters to make this quarter look fine. I’ve seen it in engineering groups, where short-term hacks result in “technical debt” that will be expensive to fix later.

You’ve probably experienced it when you get hidden fees from your bank or airline, which has clearly chosen “cash now” at the expense of customer satisfaction and/or brand identity.

Unfortunately, that debt does not show up on the balance sheet, where it could be controlled and balanced, but it is there. It will show up as increased costs of sale and loss of margin. Or the product falling behind competitors and losing market share. Or customers moving to an online digital service with clear consumption pricing.

Whether the firm chooses to compete with product, customer or operational innovation, there needs to be a “speaker for the future”: call them Chief Technology Officer, CIO or Innovation Officer. The name doesn’t matter; what matters is having someone(s) helping the rest of the organization map present decisions to the future, to speak for the future.

CTOs are sometimes also asked to be chief cheerleaders and marketers; that’s part of the job, but it’s not the entire job. The core part of the job entails getting deeply involved in the design of products and services, articulating a vision and an innovation road map, and helping senior leaders understand the trade-offs between now and later.

The Power of Cloud Driving Business Innovation

This is my talk about cloud at Hawaiian telecom university for non-technical people.

There are four parts to this talk.

First, what is cloud?

Second, how the world is changing, which I call the “Age of Miracles.” This is followed by a word from my sponsor about the intelligent business cloud and the work we do at Accenture.

Then I close with a bit of how markets are being disrupted by cloud services — called, naturally, Age of Disruption.

This conference is attended by a large number of tiny, small and medium-size businesses in Hawaii. So I felt a special obligation to try to summarize all this tech into a “What does it mean for me, personally? What should I do?” Hopefully the intention comes through.

I was asked to wear a Hawaiian shirt and a lei. This dude abides.

Cloud Blueprints Give Developers Freedom to Innovate

"1950 Architect Table with House Model***" i like the crumpled blueprints1950 Architect Table with House Model

This post was originally posted on the Chef blog. It’s reposted below for your convenience.

As a community, cloud developers are not prone to following specific directions to deliver an assignment. Like pirates and cowboys, many developers would rather be left to their own devices to create solutions than follow an “Insert Code A into Line B” regimen.

So you’d think developers would take umbrage at a technology called blueprinting, which sounds a little too much like painting-by-numbers. But the fast-developing world of cloud generally, and blueprinting specifically, is evolving into an ideal environment for giving them more freedom to innovate, not less. Here’s how.

A blueprint is simply a template that describes infrastructure requirements, server configuration requirements, and application configuration requirements that can be built up quickly in an environment in a repeatable, reliable, and predictable way—and always in compliance.

This is an ideal solution for development and test teams, who need to deploy complex application stacks on a regular basis. This activity is labor-intensive, error-prone and time-consuming. Blueprints make it easy for architects to define a complete application stack and provide a simple, repeatable one-click deployment, thus saving weeks of effort and enabling faster development and QA cycle times.
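To make the idea concrete, here is a hedged sketch of what such a blueprint might contain, expressed as a simple Python structure rather than any particular product’s schema; the component names, sizes and Chef roles are illustrative:

```python
# An illustrative blueprint: infrastructure, server configuration (Chef roles),
# and application configuration declared together so a full stack can be
# deployed repeatably with one click. Names and values are assumptions.
blueprint = {
    "name": "three-tier-webapp-dev",
    "infrastructure": [
        {"tier": "web", "count": 2, "size": "small",  "chef_roles": ["base", "nginx"]},
        {"tier": "app", "count": 2, "size": "medium", "chef_roles": ["base", "app-server"]},
        {"tier": "db",  "count": 1, "size": "large",  "chef_roles": ["base", "postgres"]},
    ],
    "application": {
        "artifact": "webapp-2.3.1",
        "environment": {"LOG_LEVEL": "debug", "FEATURE_FLAGS": "beta-ui"},
    },
    "compliance": {"encrypt_disks": True, "allowed_regions": ["us-east-1"]},
}

def deploy(bp: dict) -> None:
    # A real implementation would call the cloud and Chef APIs; this simply
    # walks the declaration to show what "one click" expands into.
    for tier in bp["infrastructure"]:
        for i in range(tier["count"]):
            print(f"provision {tier['tier']}-{i} ({tier['size']}), roles={tier['chef_roles']}")
    print(f"deploy {bp['application']['artifact']} with {bp['application']['environment']}")

deploy(blueprint)
```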

For developers, the value is that they can quickly launch preconfigured environments or preconfigured platforms in the cloud of their choice, a tremendous time saver. Instead of manually configuring these environments, they can be launched from a blueprint, and the list of available platform components is growing. For example, some users like to develop and test on a public cloud platform such as Amazon, then move the project to private cloud for deployment. Blueprinting technologies are helpful in making that happen by providing easy, one-click ordering for application development teams.

Chef is another central cloud technology that, when paired with blueprinting, helps developers do their job better. Chef allows developers to deliver code that automatically installs and configures their applications in true DevOps fashion. By defining the relationship between servers and required Chef roles in a blueprint, the developer is assured that their application is properly installed: repeatedly, reliably and consistently.

In short, blueprints allow precise specification of infrastructure and application configuration requirements, while providing flexibility for customization. Blueprints are enabling—not controlling.

Accenture understands that, in a sense, developers are the real target of cloud. So we’ve filled the Accenture Cloud Platform with developer-friendly tools and self-service catalogs. Accenture Tools in the Cloud is a manageable and scalable solution that offers projects the ability to significantly reduce lead times for getting development and test environments up and running, while having continuous integration and delivery tools such as Git, Sonar, Nexus and Jenkins deployed and ready to use. The result? Developers can bring their cloud application ideas and solutions to the customer faster.

Blueprinting saves developers time, encourages innovation, and allows them to focus on what they do best.

How to Use the Dial Phone

An example of basic training for the first self-serve cloud service.

The transition from operator to self-service required teaching new practices like “dialing”, the use of a “phone book” and what a “dial tone” is.

“0” was the help-line to the operator.

What is intuitive to a young generation is not to an older generation.

What is happening with cloud services is that a new generation is building, thinking and finding opportunities that the data center generation cannot even comprehend.

Evolving Private Clouds: Welcome Fishermen, Hunters, Cloud Admins & Other Liars

Liars Welcome

I’ve been meditating on Tom Bittman’s recent blog post, Why Are 95% of Private Clouds Failing?, and what it means for the evolution of hybrid cloud applications.

My colleagues at newScale, then Cisco and now at my current employer have been watching the same phenomenon.  We thought there was a high rate of struggle / flailing and probably a high rate of failure, but the number in my head was more like 40%.

To me, 95% is stunning. Rarely in tech do we see 95% of anything. But a 95% failure rate tells me there’s something deeply wrong about what people think a private cloud is.

Tom’s recent poll shows 31% fail to change the operational model, 13% fail to change the financial model, 11% spend too much time defending the status quo or doing too much, and 10% focus on the wrong benefits.

From my point of view, these really all go into the bag of “failure to change the operational model.”

What does “failure to change the operational model” mean? On the ground, it looks and feels like this:

  • IT doesn’t know what a service offering should be, and the capacity management process is barely sufficient for static infrastructure. It involves long planning processes, OK for static capacity but not adequate for real-time infrastructure.

  • IT policies are executed through manual processes and forms that involve many reviews, not automated and controlled by software in real time. This introduces friction and delays to provisioning.
  • There may be a little automation, but the developers don’t have access to it, nor to its APIs.
  • The financials are difficult. Either there is no chargeback, or IT doesn’t know the unit costs of its offerings; hard to know if IT doesn’t know the offerings.
  • It’s impossible to provide elasticity on a fixed amount of finite infrastructure, so more policies and friction are added to prevent over-consumption.
  • App teams want all the goodies of cloud (elasticity, on-demand, etc.) but are not aware, or not able, to make the changes to their development process and architecture needed to really take advantage of a cloud architecture.

In other words, the IT Operations “private cloud” is an evolution of virtualization, but it’s really not a cloud (standard, metered, on-demand, automated, elastic, etc.). As I wrote a few years ago, we go from cruising at self-service speed to “we gotta have a meeting.”

Meanwhile, over the last few years, developers have adopted cloud services en masse. These services are elastic, on-demand, metered, etc. No work is needed to achieve that level of maturity; there are other challenges, but those have become increasingly easier to deal with, while the transformation challenge remains stubbornly immovable.

The result has been a kind of siloed hybrid cloud. Things work one way in public cloud and another way in private; the tools are different, the people are different. Something that Gartner has started calling bimodal IT. A kind of truce under the clouds.

Unfortunately, apps have gone hybrid and this truce has become war by other means rather than peace.  The private cloud that isn’t real can’t deliver what the developers need.

Hybrid cloud gets real

The biggest opportunities for enterprises today lie in how they will engage and interact with their customers. And these customers want to interact on their smartphones and tablets, while hanging out on Facebook, Pinterest and Twitter. The kids are on Snapchat and WhatsApp, the parents deciding where to live on Zillow. These are new ways of living, shopping, entertaining, and socializing.

Meanwhile, the marketing and sales groups want campaigns to engage their customers where they are, in the way the customer wants to engage.  And the business wants measures and metrics to understand what’s effective and working in their marketing spend.  And which boss wouldn’t like to have a (positive) viral video?

This implies that, to measure things that couldn’t be measured before, the enterprise may need big data to process those Google search streams; it needs a scalable video architecture to deliver those viral videos; it needs its login integrated with Facebook and Twitter to make it easy for the customer to engage — plus it gets better data from Facebook on what customers like than from its traditional transaction systems.

All of these new capabilities also need to integrate with the enterprise’s traditional systems of record: ERP, sales, etc.

This is how we arrive at our hybrid app era: on the back of cat videos and a billion likes. These are today’s equivalent of the old appointment TV or water-cooler shows of the sixties and seventies, which were underpinned by the need for consumer marketing and brand building by enterprises.

Marketing is not the only use case; today this encompasses any enterprise that uses the internet to engage and transact business: real estate, banking, credit, insurance, travel, and so on. All are now internet businesses with a clickstream and a need to engage with their customers.

This year hybrid gets real, because applications have gone hybrid.

Hybrid cloud management now needs to be put in place. Not because hybrid clouds require it, but because there are now enough applications spread across multiple places that a new management problem emerges: how do you manage both?

We’ve noticed that the market is asking for a single control, management, and service plane that encompasses both public clouds and private clouds.
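
As a sketch of what that single plane implies architecturally (a hypothetical Python interface, not any vendor’s actual API), the key is one provisioning and metering contract that both sides of the hybrid estate implement:

```python
# Hypothetical sketch of a provider-agnostic control/service plane.
# Provider classes are stubs; a real implementation would wrap actual
# SDKs (a public cloud SDK, an on-prem virtualization API).
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Instance:
    id: str
    provider: str
    status: str

class CloudProvider(ABC):
    """One contract that both public and private clouds implement."""

    @abstractmethod
    def provision(self, image: str, size: str) -> Instance: ...

    @abstractmethod
    def deprovision(self, instance_id: str) -> None: ...

    @abstractmethod
    def metered_usage(self, instance_id: str) -> float:
        """Instance-hours (or cost) so chargeback works in both places."""

class PublicCloud(CloudProvider):
    def provision(self, image: str, size: str) -> Instance:
        return Instance(id="i-public-001", provider="public", status="running")
    def deprovision(self, instance_id: str) -> None:
        pass
    def metered_usage(self, instance_id: str) -> float:
        return 42.0

class PrivateCloud(CloudProvider):
    def provision(self, image: str, size: str) -> Instance:
        return Instance(id="vm-private-001", provider="private", status="running")
    def deprovision(self, instance_id: str) -> None:
        pass
    def metered_usage(self, instance_id: str) -> float:
        return 17.5

def provision_anywhere(provider: CloudProvider, image: str, size: str) -> Instance:
    """The app team asks for capacity; placement is a policy decision."""
    return provider.provision(image, size)

for target in (PublicCloud(), PrivateCloud()):
    vm = provision_anywhere(target, image="ubuntu-22.04", size="small")
    print(vm.provider, vm.id, target.metered_usage(vm.id))
```

Whether that plane is bought or built matters less than the discipline it forces: the private side has to expose the same on-demand, metered behaviors as the public side.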

And this is where the old private cloud, the one that failed to deliver, has to mature to the next level, or it won’t be manageable alongside public cloud resources.

We are at the end of the era of fake private clouds and at the beginning of the hybrid cloud era.

Watching Clouds. Enabling Crop Rotation

Furrow and Ridge.

Recently I’ve been thinking about innovation and cloud. My basic observation is that cloud allows C-level executives to run more experiments, faster and more safely.

The fact that most ROI/TCO calculations don’t effectively account for this phenomenon is a problem of bad math and lazy thinking. It is simply a matter of time before we have a new accounting model that measures innovation as capital. Any model that can’t account for opportunity size, market growth, or opportunity costs is insufficient in 2015.
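
A back-of-the-envelope illustration of what those spreadsheets miss (all the figures below are invented for the sake of the argument):

```python
# Illustrative arithmetic only; every number here is made up.
budget = 1_000_000            # total innovation budget ($)

# Traditional model: big upfront capex per attempt
capex_per_project = 500_000
p_success_big = 0.30          # assumed success rate per big bet
# Cloud model: small opex per experiment, kill failures quickly
cost_per_experiment = 50_000
p_success_small = 0.10        # assumed success rate per small experiment

n_big = budget // capex_per_project        # 2 attempts
n_small = budget // cost_per_experiment    # 20 attempts

# Probability that at least one attempt succeeds under each model
p_any_big = 1 - (1 - p_success_big) ** n_big        # ~0.51
p_any_small = 1 - (1 - p_success_small) ** n_small  # ~0.88

print(f"Big bets: {n_big} tries, P(at least one hit) = {p_any_big:.2f}")
print(f"Cloud experiments: {n_small} tries, P(at least one hit) = {p_any_small:.2f}")
```

Standard ROI/TCO comparisons look at the unit cost of the two models; they rarely capture the difference in the odds of finding a winner, which is where the innovation value actually lives.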

Which brings me to this article from the NY Times about risk. Reading it helped me think through what I’m seeing about the value of cloud computing: not only the ability to take more risks, but also the diminution of the negative consequences of failure.

In pre-modern times, when starvation was common and there was little social insurance outside your clan, every individual bore the risk of any new idea. As a result, risks simply weren’t worth taking. If a clever idea for a crop rotation failed or an enhanced plow was ineffective, a farmer’s family might not get enough to eat. Children might die. Even if the innovation worked, any peasant who found himself with an abundance of crops would most likely soon find a representative of the local lord coming along to claim it. A similar process, one in which success was stolen and failure could be lethal, also ensured that carpenters, cobblers, bakers and the other skilled artisans would only innovate slowly, if at all. So most people adjusted accordingly by living near arable land, having as many children as possible (a good insurance policy) and playing it safe.

Our relationship with innovation finally began to change, however, during the Industrial Revolution. While individual inventors like James Watt and Eli Whitney tend to receive most of the credit, perhaps the most significant changes were not technological but rather legal and financial. The rise of stocks and bonds, patents and agricultural futures allowed a large number of people to broadly share the risks of possible failure and the rewards of potential success. If it weren’t for these tools, a tinkerer like Perkin would never have been messing around with an attempt at artificial quinine in the first place. And he wouldn’t have had any way to capitalize on his idea. Anyway, he probably would have been too consumed by tilling land and raising children.

via Welcome to the Failure Age! – NYTimes.com.

Power in large organizations is clannish. Who you grew up with, who knows you, who you know, the network of favors you’ve built: these provide both protection and support to get projects done.

But there are other clans with different agendas. It’s hard to experiment in such an environment.

For C-level execs, innovation today is a bit like betting on crop rotation: they can’t be wrong! The C-level executive has to ensure they are right, then report that they are right, and, worst comes to worst, bend the metrics to show they were right.

No room for “I think this might work, but I’m not sure.”

Enterprise corporate culture requires certainty: commitment to metrics before they are reasonable, models before they are proven, and projections that cannot be grounded. This is why failed projects last three years: it’s the minimum depreciation time for the assets. Everyone knows it’s a zombie project, but no one can tell the C-level executive that it failed.

Who would want to sign up for that mission?  Most enterprise managers don’t.

It’s the equivalent of experimenting with crop rotation.

It might work. Or your family might starve. Even if it works, the Warlord will come take your results anyway.

The ability to run more experiments safely is right up there with the legal and financial innovations that increased our culture’s ability to innovate. Making it easier and safer to try new things changes the economics of experimentation.

The fact that most don’t know how to model this on a spreadsheet is not stopping innovators from going forward. In fact, most CxOs know this intuitively.

Not everyone is comfortable with change and ambiguity. There are many people like me: permanent immigrants. We need to try new things because the past is not our friend.

I believe we are all immigrants into a cloudy future and the risk of not taking a risk is now higher than the risk itself.

The Three Freedoms of Cloud – A CxO view

"Freedom 2001" by  Zenos Frudakis

I have been CEO, CTO, or board member of both a small startup and a large enterprise over the last 10 years. Every budget year has been about achieving economic growth and meeting objectives. I have never seen “cost” reduction as a primary objective.

Cost reduction might show up as a secondary objective: either as a way to free up resources to fund innovation, or as a logical step to improve margin after the main objective has been achieved.

I’ve never been to a board meeting or operations review about cost reduction. It’s always about growth objectives. Sure, we review the budget, but as a CxO, that’s just the basics: we are expected to stay on budget. It’s like wearing clean clothes to work; of course you should, but that’s not the job.

Business success is all about innovation. Growth does not occur without innovation. Innovation feeds on investment. So where do those resources come from?

The good news: you likely already have the capital. You just don’t see it, because it’s stranded in a pile of amorphous activities and legacy maintenance.

You are probably using those resources to address the daily routines that keep the lights on.  Like patching long-time legacy apps with spit and baling wire. Like managing your aging application implementations from a rattling, smoking collection of dashboards and consoles.

By most estimates, 70 to 90 percent of IT budgets are earmarked for “run operations,” otherwise known as Keeping The Lights On. This is where your innovation capital is stranded.

These resources can be freed up by cloud products and services to make your organization faster, more agile—more innovative.

The Three Freedoms that cloud can underwrite:

1. Freedom to experiment. The cloud allows you to run more experiments thanks to lower up-front capex per idea tried. For every executive, using cloud to demonstrate early success and scale, all with a provident solution, is the equivalent of wielding Excalibur when next year’s budget battle gets going.

2. Freedom to Fail (Quietly). It’s true that we celebrate innovation, but there’s also a very real tendency to punish failure. The desire to experiment is quashed and promising ideas don’t get considered if they are seen as too risky. Meanwhile, other projects get over-funded to mitigate risk, or they wheeze forward on life support because no one wants to admit failure. In both cases, working capital that could be used for interesting initiatives is unavailable. Too bad, because the ability to run small projects without capital expenditures represents freedom to fail, allowing experiments to succeed step by step or to be shut down quickly, thus avoiding the three-year run to equipment depreciation.

3. Freedom to be agile. I’ve been a startup guy, founder of VC-funded newScale, which was acquired by Cisco in 2011. My background is all about competing against incumbent behemoths with established reputations and customer relationships. So how do startups compete? By moving fast, running faster than the competitor’s innovation cycle time, and marshaling superior domain expertise at the point of sale. Large companies have a hard time competing against such agility unless they are agile themselves. Cloud represents the opportunity for large organizations to streamline, getting rid of all kinds of friction in the innovation process and enabling senior management to focus on core problems and differentiated, value-added efforts. This can happen through offerings such as on-demand infrastructure or platform as a service, which can save months of configuration and planning; software as a service, which delivers a solution ready to go; and business process as a service, which gets us away from reliance on hardware, software, and operations. The ability to strategically deploy these as-a-service tools in your innovation toolkit is a game changer.