Startup Choices Rapidly Becoming Enterprise Choices


As I wrote in “The Three Freedoms of Cloud,” cloud is all about agility and speed. A couple of stories both illustrate and enlarge this point through the adoption of serverless architectures.

A Startup Volunteers to Be the Canary in the Code Mine

The first story comes from a startup called Siftr (You know they are a startup because they could only afford one vowel).

The post is on the value of serverless as experienced by a small startup. They chose Google App Engine for their application.  Here’s the relevant quote.

An enterprise, with practically infinite resources at disposal, can afford to be selective about every single component that goes into their tech stack and can recreate the entire wheel. However, for startups, time is gold! Their choice of technology should help them maximize their number of ‘build-&-measure’ iterations and yet should never hinder the product design.

Source: Why choose Google AppEngine for starting-up — Technology @ Siftr — Medium

As a past startup guy, I completely lived this reality.

When I started newScale, we faced a choice of what to use to develop our product: only three engineers, a short runway of cash, and a lack of clarity about our market. Job one had to be product/market fit.

At the time, Java was not really an economic or performant choice ($300K for an app server was the quote from WebLogic; I didn’t see the logic of buying). C++ (we came from that world) was too complicated, slow, and lacked portability, and Perl and languages of that type were unknown to us.

So we chose ColdFusion.

Over the years, I got so much grief for that choice, and five years later we re-wrote the app (at great cost) in Java. So was it the wrong choice? NO! That choice allowed us to build a rich, enterprise-level product and acquire major enterprise clients, which let us raise $26M over two rounds.

It was thanks to that initial success that we could afford to re-write it.

(New) Reality Arrives to the Enterprise: Fast Eats Big

Market advantages have become transient. The time in which an enterprise could build an effective wall against competitors is fast fading. Enterprises have responded by beginning to adopt the tooling, practices, and culture of startups to remain competitive.

My friend James Staten, Chief Strategist for Cloud at Microsoft Azure, has some useful research to share regarding the value of moving to PaaS.

Sure, moving your applications to the cloud and adopting more agile DevOps methods will save you money and make your complex application deployments more efficient but if you want the really big gains, focus the majority of your developers back on coding. According to Forrester Consulting, the gains from doing so are massive.

PaaS value chart

Yes, re-hosting or even building cloud-native apps in VMs and containers will give you big gains in agility and cost savings but asking your developers to manage configurations, update the OS and middleware and pick the right virtual resources to support their apps (let alone manage the security of their deployments) is a distraction that cuts into their productivity.

And he shares some pretty dramatic numbers about the economic impact.

How much, you ask? In its latest Total Economic Impact Study, Forrester Consulting interviewed a number of current customers of Azure’s PaaS services and concluded that migrating to PaaS from IaaS resulted in a 466% return on investment. For customers migrating from on-premises environments to PaaS, the return on investment can be even greater. Time to market also improved by as much as fifty percent, because of the efficiency and speed of deploying applications with PaaS services.

James works for Azure, but I believe these types of results apply to serverless architectures as a whole. And to be clear, these savings do require the adoption of new operating models such as DevOps. Sticking to waterfall and ITIL won’t get you there.

Also Amazon Web Services

And not to leave Amazon Web Services out: in our Accenture Cloud Platform, we are using Lambda for certain components. Our entire system is composed of COTS (code we don’t own), homegrown components written in Scala, web services from cloud vendors, and now new components written in Python using Lambda. In other words, it is not atypical of what you’ll see in a large enterprise that has history and heritage in its infrastructure. Heritage sounds better than legacy.

The value we get is similar to what others are finding: savings in time to market, flexibility, and agility. And, in our case, improved costs and security. Yes, you have to live within the constraints of the underlying PaaS, but that discipline keeps us focused on building value.
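
To make that concrete, here is a minimal sketch of the kind of small, single-purpose Python function that fits this model. The event shape and the tag check are illustrative assumptions for the example, not our actual platform code.

```python
# Minimal sketch of a single-purpose Lambda handler in Python.
# The event fields and the tag check are hypothetical examples,
# not the actual Accenture Cloud Platform code.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    """Flag provisioned resources that are missing an 'owner' tag."""
    records = event.get("records", [])
    untagged = [r["id"] for r in records if not r.get("tags", {}).get("owner")]

    logger.info("checked %d records, %d missing an owner tag",
                len(records), len(untagged))

    return {
        "statusCode": 200,
        "body": json.dumps({"missing_owner": untagged}),
    }
```

The point isn’t the logic; it’s that the unit of deployment is a function, so there is no server, OS, or middleware for the team to patch.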

But What if It’s the Wrong Choice?

There’s a fear in using these tools that they could be the wrong choice. Well, they could be, and eventually the app may have to be re-platformed, but you’ll either know quickly enough to make a change, or it will be a long way down the road, after you’ve already gotten the value out of it.

But if it helped achieve product/market fit, shorten time to market, validate the market thesis, and/or deliver competitive advantage, the fair answer should be: WHO CARES? The business objective has been achieved.

Also, the cost of re-writes and re-platforming is not what it used to be. The cost of constructing new apps has fallen to an all-time low thanks to the availability of tools, open source, and web services. In many cases, it will be faster to re-write it than to fix it.

Time to Be Rigorous. How to Ruin a Cloud Implementation: Leave Self-Service for Last

I’m republishing this from my old blog. I wrote it in 2011, yet I was having this conversation today. I realized the move to as-a-Service still requires a methodology.

Self-service should not be an afterthought in cloud projects; it’s the beginning of the journey. Self-service drives the standardization of offerings and reduces the labor costs that arise from designing, specifying, ordering, procuring, and configuring computing, storage, and network resources on a custom basis.

This standardization and automation also applies to application components, security and network services such as LDAP, DNS, load balancers, etc.  I say “drives” because the moment an organization decides to provide infrastructure on demand, three questions arise that are at the heart of the beginning of the cloud journey:

  1. Who are my customers?
  2. What are my offerings?
  3. What are my existing delivery processes?

Like any other business, the questions of whom we are serving, what they want to buy, and how we deliver lead to many other questions. These questions are at the beginning of the journey to the cloud operating model.

But, haven’t we answered these questions before when we built our ITSM catalog?

Answer: NOT with the rigor required by a self-service and automation program.

Once we decide to offer a cloud service, these questions need to be answered with BRUTAL and RIGOROUS specificity so they can be codified into self-service choices. Until the decision to deliver cloud services is made, these “customer” catalogs are often vague, warmed-over recapitulations of some well-intentioned static “service catalog.”

In my experience, very little of that prior work is usable when confronted with the needs of the cloud project. I’m loath to say it’s useless; sometimes the project team that did the prior catalog is actually in the best position to understand the shortcomings of the prior work and can be a good contributor to the cloud project.

Getting back on point: once the decision is made to offer infrastructure as a service to the user community, the cloud team faces three tasks.

First, define the customer as a consumer of a set of offerings that address some activity the customer needs to carry out; for example, “test a patch to the scheduling system.” The team needs to figure out the proper metaphors, abstractions, rules for configuration, rules for consumption, and the choices allowed to the customer. And all of this needs to be in the language and domain of competence of the customer.

This is hard for us technical people. We like lots of choices, we understand a lot of things our users won’t.  The exercise of understanding the customer requires a lot of trial and error.

Sometimes I see “cloud interfaces” that are really admin consoles for configuring compute, network, and storage resources. These UIs are targeted at a unicorn user: developers, the real target of cloud, are usually not network experts, and beyond disk space they don’t know much about storage. That unicorn customer doesn’t exist.
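
As an illustration of what an offering “in the customer’s language” might look like, here is a hedged sketch in Python. The offering name, sizes, lease period, and included components are assumptions made up for the example, not a reference catalog.

```python
# Illustrative sketch of a standardized self-service offering expressed in the
# customer's language. All names, sizes, and lease periods are assumptions.
from dataclasses import dataclass, field

@dataclass
class Offering:
    name: str                       # the activity the customer wants to do
    sizes: list[str]                # a handful of choices, not every knob
    lease_days: int                 # forces reclaim to be decided up front
    included: list[str] = field(default_factory=list)

patch_test_env = Offering(
    name="Test a patch to the scheduling system",
    sizes=["small", "medium"],      # no CPU, VLAN, or IOPS questions asked
    lease_days=14,                  # expires unless renewed
    included=["app server", "database copy", "load balancer", "DNS entry"],
)
```

Notice that the customer never sees subnets or LUNs; those rules live behind the offering.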

Second, the cloud team needs to break down the end-to-end service delivery process into its component processes, specifying all the hand-offs and activities, the tools and systems used, and the data required both to execute activities and to make decisions.

This is where standardized offerings become the difference between success and abject failure as they simplify decisions and data gathering.

If every car is a Model T, then manufacturing, supply chain, capacity management, procurement, and planning are all much easier. If you have a large variety of options, it’s much harder. Start small. Amazon Web Services’ first compute offering was a small Linux instance, and that’s it; “m1.small” (the original instance type) is the Model T of the cloud era.

Third, a good gap analysis and coverage plan is needed. What we tend to find at this stage of cloud design is a gospel of gaps: rules in people’s heads, governance rules that get in the way (hello, CAB!), and existing systems built for responding in days and weeks rather than minutes and seconds.

There are also missing systems. Sources of record are unstructured (a Word document or a wiki) rather than a database or a structured data model. The few tools that do exist lack APIs, were built for a single tenant, do not enforce role-based access control, or were not designed for consumer use.

Good process design needs to inventory these system and data gaps.

For example, take the task “assign IP address.” Today, Jason the network admin gets an e-mail, opens his spreadsheet, and sends the address to someone who then assigns it to the virtual machine. Now we need to enable the user to assign an IP address on a self-service basis. So: no Jason, no spreadsheet, no manual steps. But we do need to say yes to an IP address manager, a portlet, a lifecycle manager, and a consumption and reclaim process.
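
A hedged sketch of what the spreadsheet’s replacement might look like: a structured IP pool with self-service allocate and reclaim operations. The class and method names are hypothetical, for illustration only, not a specific product’s API.

```python
# Hypothetical sketch: a structured IP pool with allocate/reclaim operations
# replacing the e-mail-and-spreadsheet workflow. Names are illustrative only.
import ipaddress

class IpPool:
    def __init__(self, cidr: str):
        self.network = ipaddress.ip_network(cidr)
        self.available = list(self.network.hosts())
        self.assigned = {}  # address -> owner (e.g., a VM identifier)

    def allocate(self, owner: str) -> str:
        """Hand out the next free address and record who holds it."""
        if not self.available:
            raise RuntimeError("IP pool exhausted")
        address = self.available.pop(0)
        self.assigned[address] = owner
        return str(address)

    def reclaim(self, ip: str) -> None:
        """Return an address to the pool when the VM is decommissioned."""
        address = ipaddress.ip_address(ip)
        if self.assigned.pop(address, None) is not None:
            self.available.append(address)

# Usage: allocation happens at provisioning time, reclaim at teardown.
pool = IpPool("10.20.30.0/24")
vm_ip = pool.allocate(owner="vm-1234")
pool.reclaim(vm_ip)
```

The point is the lifecycle: every assignment is recorded structurally and can be reclaimed automatically, which is exactly what the spreadsheet cannot do.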

This is one example. If you don’t have simple standards up front, the number of rules and interactions reaches a complexity where it just doesn’t work, and it becomes a maintenance nightmare.

Sounds daunting, doesn’t it? Well it is, if the cloud team doesn’t do the proper gap analysis upfront. When they do, it’s just a project like any other.

Don’t Spam Your Timeline

I Hate Getting SPAM in My Mailbox!

For some reason, people spam their own timelines with updates from other apps or services. One of the worst on Twitter is the “How I did on Twitter” bot, followed by Swarm, Klout, etc.

Why is this annoying?

For the same reason robo-calls are annoying: you are optimizing your own time, but consuming my real-life time.

It is a fundamentally disrespectful action. I need to consume real-life time, a very finite commodity, to enable you to optimize your life time.

I don’t expect everything on Twitter to be smart or relevant. I expect it to be human, with all the smarts, foibles, insights, stupidities, etc., like my own.

I DO expect people to post pictures of food and drinks on weekends. I don’t expect links to white papers. I don’t necessarily care for a picture of a beer on Monday morning, although I respect that your life is awesome. But I do appreciate a link to a white paper on Monday morning.

But not the bots.

Thoughts on Cloud Brokerage

I wrote this post in the Summer of ’13 on my old blog. I’m bringing it here as it is still relevant in the Fall of ’15.

My thinking has matured and evolved since then. But before I can write about that, I need to bring this back.

Service Catalog as a Service Broker. Putting Your aaS to Work

Recently I’ve been involved in a number of conversations about the relationship between service brokers, service definitions, the service catalog, and the design of service interfaces. I’ve encountered a lot of confusion and wrong assumptions, which have the potential of costing a lot of money.

So as a way to clear up my thinking, I’m going to note a few thoughts on this today. It’s not a finished piece by any means. Wear a hard hat while reading it; pieces will fall.

Let me start by saying I’m vexed by the phrase “Service Broker”. I’m often vexed by people spray painting new words on existing concepts.

One notion is that a service broker is the control plane and panel for obtaining online services from external vendors. Which is fine, but this is also what a good, modern service catalog (one not restricted to the service desk) provides. I have covered this topic in my last 837 blog posts.

The second meaning for a service broker emphasizes the word “broker”. The broker enables the customer to aggregate and arbitrage services to enable the selection of the service that is cheapest, best, compliant,  name-your-favorite-characteristic.

A common example used by proponents of “brokering” is the contracting of compute resources, where we may want to decide where to place a workload based on price. After all, a gigahertz is a gigahertz everywhere, right? Hmm, well, no. The network, storage, location, distance, latency, IOPS, liability protection, certifications, applicable laws, and many other factors (Will someone pick up the phone? Is there a number to call?) matter.

I don’t believe any commodification of infrastructure services is coming any time soon (as in, the next ten years). There are just too many facets to internet services, such as quality, latency, richness of offerings, laws, SLAs, and data sovereignty, that prevent making effective like-for-like comparisons on the fly. We will need procurement specialists to do these types of comparative evaluations.

Also, if you drink coffee, you’ll know that coffee, which used to be a commodity, is anything but that today.  Coffee is a lot simpler to acquire than datacenter services. I’ll have my server with half-caf, soy-latte with a twist of lemon.

But even if we could flatten service definitions so they were more comparable (they can be flattened — sourcing specialists do it all the time), the acquisition process still doesn’t match the way the enterprise procures today.

Enterprises engage in two classes of spend: contract and spot. The majority of spend is contract. Why? Risk, pricing, security, control, quality, availability, and convenience.

By the way, that’s why enterprises use e-procurement and constantly try to reduce the number of vendors under management. It’s easier to control a small number of providers that account for a large part of the enterprise’s spend than thousands of small vendors to whom the enterprise’s spend is not meaningful.

For example, take the issue of liability: AWS assumes very limited liability for anything.  Most enterprise contracts have extensive sections around what happens when things fall apart and the network operations center cannot hold.

In my experience reviewing these contracts, a fair amount of the document goes into defining the “service,” and it’s not an API call: it’s the request process, approvals, support, training, incident management, remediation, and service levels that define the “service.”

By the way, I don’t mean to imply these contracts are actually workable or actionable (most are not), just that a lot of effort goes into creating them to try to define the “service.”

I once spent a week with a team of 15 people trying to convert a massive outsourcing contract into a service catalog. Turns out to be surprisingly easy to do it with 2 people, but impossible with 15.

Two recent examples help make the case for contract buying. One: Amazon, which does offer compute pricing by the hour, now offers one- and three-year terms. Why? By-the-hour pricing is like by-the-hour hotels: very expensive if you plan to live there for a year. You are better off with a lease.

Second, the CIA announced a $600M contract with Amazon to build a cloud.

Well, they are the CIA, someone might say. Of course they need something more secure.  To which I’d say, baby, today we are all the CIA.

Also, if you read the drivers for establishing strategic sourcing and procurement processes, you’ll find use cases analogous to brokering: stop maverick buying (we call it rogue in cloud), reduce costs, implement governance, reduce or eliminate manual processes, apply consistent controls, and rationalize the supplier base.

So it does seem like the service broker concept staples a service catalog to a procurement system; but will it cross the cloud?

As for the idea that somehow an intermediary can establish a market place of services and “broker” between enterprise and service provider, I don’t buy it – pun fully intended.

This approach did not work in the dot-bomb era with e-marketplaces. It turns out the buying power of large companies is significant, the internet flattens information brokerage, and margins are way too thin for these intermediaries to earn much money.

As for brokerages serving small and medium-sized businesses, I’d say yeah, sure. They are called resellers, like PC Connection, or TigerDirect, or yes, Amazon retail. This is not a new development.

In summary, there are considerable transaction costs that need to be accounted for in the notion of service brokers. In the service catalog world we talk about policy, governance and automation as the way to get those contract terms implemented. In fact, most enterprise buying is contract buying and not spot buying.

I’ve argued that a service catalog already provides the functionality of a service broker and that there’s unlikely to be a marketplace of services that is profitable or that enterprises will change buying behaviors in the next ten years.

So is there anything new about this service broker concept that is worth considering? The answer is YES. The advent of service interfaces, a.k.a. APIs, opens a new service channel.

So for the first time we have the opportunity to design services that are born-automated. How do we think about them? What are their characteristics?

That is worth exploring in its own blog post.

As I said at the beginning, these are my preliminary thoughts. Comments? Questions? Observations? Organic, heirloom tomatoes?

Hi! We have a refugee crisis; it’s bad, but it’s better than you think. Meet your local, friendly refugee!

Hi. Nice to meet you. I would like to tell you a story about refugees and persecuted people that is fun and hopefully makes your day.

As we watch the news in horror, tens of thousands of refugees are dying crossing the Mediterranean Sea, walking through Greece into Macedonia and Hungary, trying to get into France, England, the Nordic countries: anywhere they can feel safe.

They need help, but they are not victims.

They are strong, resilient people who have taken their destiny into their own hands. They have chosen life over death, peace over war, freedom over tyranny. They might make excellent neighbors once you get to know them. The block parties and BBQs are going to be tasty.

At this point, they need help. They’ll be fine, once we help.

You may wonder, how  am I so sure?

Well, I’m a refugee.

I came with my family in 1976 under a refugee visa.  My dad spent three years in concentration camps. One day, I’m at school and my mother comes in and says, we are leaving. We met our dad at the airport, took a plane to San Francisco and here we were. Never to return where I grew up.

That visa is pretty legit. You are stamped REFUGEE and you have one year to get the necessary “green card” for residence. (Another story for another time).

I apologize if the suffering is not enough. The Braniff airplanes were nice (first time on a plane). But being exiled from your country and family is a universal quality all refugees share. In taxis, hotels, airports, hospitals, restaurants… we know each other. In a few words, we know each other. Uber drivers rate us five stars. We help each other.

So hello again, you now know an actual refugee.

One that has made a life here, played in a punk band, wrote a book, got married, had “anchor” children (yet another story!), started a company, employed hundreds of people, and whose technological innovation is core to cloud computing (or so I’d like to think). Right now I’m involved in helping the Global 2000 adopt the next generation of cloud technology. And yes, I am a refugee.

Some people know me as the founder of newScale, a software company acquired by Cisco in 2011. Some people would say I’m a “job creator”. I’d say I like inventing futures. But my influence is nothing compared to other refugees / immigrants like Andrew Grove, who escaped Hungary’s communist regime. Or Google’s Sergey Brin, whose parents emigrated from Russia.  All refugees. Steve Jobs’ dad was Syrian.

Hopefully, this gives you hope.

Immigration is a big challenge for any society. Yet it’s a big opportunity for our European brothers. This massive, ambitious, bold group of people, if welcomed, integrated, and helped, might create the next European dream.

I write this because, between the news narrative and the natives’ pejoratives, the wonder of what could be is being lost.

So let’s start here. Hello. I’m your friendly, local refugee. Ask me anything.

Mind The (Widening, Accelerating Innovation) Gap

Mind the Gap - ink and wash

This is a meditation on the economics of innovation and minding the innovation gap.

It started when I read the article “Productivity Is Soaring at Top Firms and Sluggish Everywhere Else” on the Harvard Business Review site. The article has a terrific and terrifying graphic that shows the widening gap in productivity between firms that are innovating and their competitors.

Perhaps more importantly, the gap between the globally most productive firms and the rest has been increasing over time, especially in the services sector. Some firms clearly “get it” and others don’t, and the divide between the two groups is growing over time.

The strength of global frontier firms lies in their capacity to innovate, which increasingly requires more than just investing in R&D and implementing technology. It requires the capacity to combine technological, organizational, and human capital improvements, globally.

What the author, Chiara Criscuolo, calls frontier firms are organizations that are experimenting and trying new things at the edge. This got me thinking that it maps well to arguments I have made before about agility and the freedom to fail as the real strategic business value of public cloud.

The piece neatly makes the case that “cost savings,” if they come at the cost of agility and the freedom to experiment, may turn into “false savings” when they cause the firm to fall behind. Unfortunately, many senior leaders don’t formally track opportunity costs or productivity sink-holes the way they track procurement costs.

This translates into the internet meme that says: “Spend $500? Three levels of approvals and controls. Call a standing meeting with 20 people? No one bats an eye.”

These “false savings” are effectively borrowing from the future to make the present seem rosy.

I’ve seen this behavior in sales organizations, where deals are “pulled in” from future quarters to make this quarter look fine. I’ve seen it in engineering groups, where short-term hacks result in “technical debt” that will be expensive to fix later.

You’ve probably experienced it when you get hidden fees from your bank or airline, which has clearly chosen “cash now” at the expense of customer satisfaction and/or brand identity.

Unfortunately, that debt does not show up on the balance sheet, where it could be controlled and balanced, but it is there. It will show up as increased cost of sales and loss of margin. Or a product falling behind competitors and losing market share. Or customers moving to an online digital service with clear consumption pricing.

Whether the firm chooses to compete with product, customer, or operational innovation, there needs to be a “speaker for the future.” Call them Chief Technology Officer, or CIO, or Innovation Officer; the name doesn’t matter. What matters is having someone (or several someones) helping the rest of the organization guide present decisions and map them to the future, to speak for the future.

CTOs are sometimes also asked to be chief cheerleaders and marketers; that’s part of the job, but it’s not the entire job. The core part of the job entails getting deeply involved in the design of products and services, articulating a vision and an innovation road map, and helping senior leaders understand the trade-offs between now and later.

The Power of Cloud Driving Business Innovation

This is my talk about cloud at Hawaiian Telecom University, aimed at non-technical people.

There are four parts to this talk.

First, what is cloud?

Second, how the world is changing, which I call the “Age of Miracles.” This is followed by a word from my sponsor about the intelligent business cloud and the work we do at Accenture.

Then I close with a bit about how markets are being disrupted by cloud services, called, naturally, the “Age of Disruption.”

This conference is attended by a large number of tiny, small, and medium-sized businesses in Hawaii. So I felt a special obligation to try to summarize all this tech into a “What does it mean for me, personally? What should I do?” Hopefully the intention comes through.

I was asked to wear a Hawaiian shirt and a lei. This dude abides.