Anti-capitalist public clouds
Cloud infrastructure is wonderful. It was the main facilitator of “software eating the world”, and the benefits of this process are enormous: easier access to powerful infrastructure, more innovation, and new products and businesses that would have been impossible without clouds (some detect cancer and save lives, others let you fly over a virtual representation of your own house). So, from a technology point of view, clouds are wonderful.
From an individual business point of view, they are wonderful, until they aren’t… Let me explain.
The objective of every capitalist is to create capital (“human-created assets that can enhance one’s power to perform economically useful work”). Businesses sell products and earn margins over the fixed and variable costs of creating those products. Every business wants its variable costs to be as low as possible, because then profits grow disproportionately as it sells more products. During their lifecycle, businesses usually start with high variable costs and work to minimize them by investing in their own productive assets (capital) that can be reused over and over again and yield compound returns on investment. This is how capital is created. A very simple analogy is building your own house instead of renting.
Cloud providers offer variable infrastructure costs to companies, which is very attractive in the beginning. Low usage of the software = low infrastructure costs. But, when the business scales and serves a lot of users, the variable costs grow as fast as (and sometimes faster than) the utilization of the cloud infrastructure. Instead of creating capital, companies pay more and more in rent. Like a restaurant owner who can never get ahead of the game: the more successful the restaurant is, the faster her rent is increased by the landlord.
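The rent-vs-own trade-off above is just arithmetic. Here is a minimal sketch with entirely made-up numbers (the per-unit prices are illustrative assumptions, not real cloud or hardware pricing): renting scales linearly with usage forever, while owning is a one-off capital outlay plus a much smaller running cost, so at sustained high utilization owning eventually wins.

```python
# Illustrative, made-up numbers: cumulative cost of renting cloud capacity
# vs. owning equivalent hardware, as a function of time at steady usage.

CLOUD_COST_PER_UNIT = 50.0      # hypothetical $/month to rent one unit of capacity
OWNED_CAPEX_PER_UNIT = 1200.0   # hypothetical one-off hardware cost per unit
OWNED_OPEX_PER_UNIT = 10.0      # hypothetical $/month to run one owned unit

def cumulative_cost_cloud(units: int, months: int) -> float:
    """Rent: cost scales linearly with usage, with no end."""
    return units * CLOUD_COST_PER_UNIT * months

def cumulative_cost_owned(units: int, months: int) -> float:
    """Buy: pay capex once, then a smaller operating cost each month."""
    return units * (OWNED_CAPEX_PER_UNIT + OWNED_OPEX_PER_UNIT * months)

# Break-even: after how many months does owning stop being more expensive?
months = 1
while cumulative_cost_owned(100, months) > cumulative_cost_cloud(100, months):
    months += 1
print(f"Owning matches renting after {months} months")  # 30 with these numbers
```

With these assumptions the break-even is 30 months; the point is not the specific number, but that a successful, growing business pays the “rent” line forever, while the “own” line flattens into an asset.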
Cloud providers sell the story of “Focus on creating value for the users! Don’t worry about the infrastructure cost”. And it works great, for them. In Q2 2020 alone, AWS generated $3.36B of profit on $10.81B of revenue. Not gross margin, but profit after all costs, including R&D, marketing, taxes, interest, and depreciation! And they are in a capital-intensive commodity business, where value-added services are built on open-source software that anyone can install for free. How is this possible?
Infrastructure migration is a very complicated process. This means it’s expensive: there are direct costs of migration, costs of the new infrastructure, costs of redundancies, operational risk, training, maybe recruiting new talent, or hiring outside help.
Unsurprisingly, switching costs rank a prominent fourth in 7 Powers: The Foundations of Business Strategy by Hamilton Helmer. Cloud providers know this power very well and take full advantage of it.
First, they lure you in. Like a seasoned drug dealer in front of a high school, some will offer the first batch for free. You’ll get a hefty discount for the first 6–12 months. Startups will receive thousands of dollars’ worth of credits. Then, they’ll hook the developers by offering shiny new toys: new tools, add-ons, dashboards. These toys are easy to deploy and use for the developer and, just like with every new toy, the kid doesn’t have to pay for it. The parents pay the bill! Or, in this case, anyone responsible for the IT budget. The “new toys” offered by different cloud companies are very similar. They are usually forks of the same open-source tools and products. But they are different enough that they are not fully compatible, adding a lot of friction to any potential migration. Now it’s not just switching from one cloud to another, but also migrating your data, launching comparable services on a different platform, and training or hiring new developers. Not to mention the siloed, redundant user identity and access management systems.
The latest iteration of this approach is serverless architecture, or Functions as a Service (FaaS). Again, from a technical point of view, it’s a very interesting innovation. As a business decision, it’s a highly questionable proposition: “Please develop software for us, for free, so that we can rent it back to you and charge whatever we want, since migration is almost impossible”.
All of this creates “the impossible cloud trilemma”: at any given point in time you can have at most two of the following three: a) flexibility and control, b) convenience, c) low costs. If you do everything in-house, you get flexibility, control, and a low total cost of ownership, but no convenience. If you pick a small local provider, you’ll probably get only low costs. If you pick a trendy PaaS, you’ll get only convenience. Optimizing for flexibility and convenience requires something like GCP + Anthos, which is expensive and still doesn’t give you full control.
These are only the problems at the operational level, related to cost structure and the long-term efficiency of capital allocation. But there are also important strategic issues to discuss. Since software has eaten the world, every company is now a software company. Core competitive advantages and processes manifest themselves as software: a company’s data acquisition, storage, and processing. Outsourcing these key functions to a very powerful, monopolistic provider skews the balance of power and strategic positioning. Companies risk becoming “Uber drivers” while the major cloud platforms become “the Uber platform”.
Finally, there is the open question of what happens to the data stored in a public cloud. How can companies know who has access to it and for what purpose? Obviously, high-value, high-profile (and government) cloud contracts are audited. But what about all the small businesses and startups? How many of them read the T&Cs before signing up with a PaaS? (Amazon is already using data from its own third-party merchants to design in-house products that compete with them in the Amazon store.)
So, is there a solution? And I mean a realistic, technology- and market-driven solution, not a political, Elizabeth Warren-style one. Here’s one interesting idea presented recently by David Linthicum, Chief Cloud Strategy Officer at Deloitte Consulting:
“In 2020, I believe we’ll see the rise of the “Omni Cloud,” or what multi-cloud will become. Basically, the abstraction above the physical public clouds, providing common ways to access storage, processing, databases, compute, and HPC. This will likely be more of an idea than an actual thing in 2020, but it will be game changing in terms of how we deal with complex heterogeneous cloud deployments.”
By abstracting the infrastructure, such solutions reduce cloud providers to the role of commodity suppliers, as long as the Omni Cloud environment enables easy migration with near-zero cost and downtime. Companies could create a portable IT environment: an asset that yields compound returns and can be moved anywhere, depending on the current best offers on a transparent cloud/hosting market. Companies could have convenience, flexibility, control, and low costs at the same time.
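What would such an abstraction look like in practice? Here is a minimal sketch, in Python, of the general pattern: the class and function names are hypothetical, not any real Omni Cloud API. Application code depends only on a provider-agnostic interface, and each cloud (or on-prem system) is wrapped in an adapter behind it, so switching providers means swapping an adapter rather than rewriting business logic.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Hypothetical provider-agnostic storage interface. Application code
    depends only on this, never on a specific cloud SDK."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend for the sketch; real adapters would wrap S3, GCS,
    a small local host, or on-prem storage behind the same interface."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_report(store: ObjectStore, report: bytes) -> None:
    # Business logic is written once against the abstraction; migrating
    # clouds means changing which adapter is passed in, not this code.
    store.put("reports/latest", report)

store = InMemoryStore()
archive_report(store, b"q2-figures")
print(store.get("reports/latest"))
```

This is the classic ports-and-adapters idea applied to infrastructure; the open problem the Omni Cloud vision addresses is doing it uniformly across storage, compute, databases, and identity, not just one interface at a time.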
With Omni Cloud, the company’s IT environment becomes an asset, and every piece of code becomes a reusable building block that can generate compound yield in the future (instead of a liability of technical debt and the escalating costs of disposable code and inefficient infrastructure).
Let us know @realDjuno what you think.