The key to multi-cloud success

The following is adapted from an article Caleb wrote for DevOps.com.

In the era of cloud-based architectures, companies have implemented multiple cloud platforms but have yet to reap the full benefits. Whether it’s Amazon Web Services (AWS), Google Cloud, or Microsoft Azure (or some combination thereof), a recent Forrester study found that 86 percent of enterprises have incorporated a multi-cloud strategy. Not only does this strategy take companies out of the business of hosting their own applications, it also delivers benefits like avoiding vendor lock-in, reducing costs, and optimizing performance.


While it’s clear that a multi-cloud strategy offers many benefits and a great deal of flexibility for an organization, it also means more moving parts to track (with the added challenge of hybrid, multi-generational infrastructure), making it all the more critical that businesses have a solid monitoring strategy in place. Luckily, collecting monitoring data is essentially a solved problem; the real work now lies in the endless cycle of “day two” operational challenges, such as avoiding downtime and maintaining visibility.

As Marc Andreessen so aptly put it, software is eating the world; every company is becoming (or has become) a software company. Software is not only ubiquitous, it’s powerful, enabling us to solve a wide range of problems. This emphasis on software has fueled the rise of the major cloud providers: Amazon, Google, and Microsoft all recognized the trend and seized the opportunity by building cloud computing platforms.

Because companies no longer have to build their own datacenters, they can focus on their core business: delivering value to their customers. With the public cloud, they achieve far faster time-to-value than they would if they were left to build their own datacenters and cloud platforms.

Now, companies can build a portable, DevOps-driven software stack that is free from vendor lock-in and delivers a set of capabilities superior to what any single provider can offer. Although we’re now consuming infrastructure from cloud providers (which has its own inherent risks), we have better tooling to enable multi-cloud strategies (e.g., from companies like HashiCorp), minimizing the risk of being reliant on any one provider for cloud services.

The missing piece: multi-cloud monitoring

In this software-dependent world, availability is critical; downtime is not only expensive but damaging to a business’s reputation. As a result, monitoring systems and applications has become a core competency, crucial to business operations. To fully reap the rewards of a multi-cloud strategy and thrive in this cloud-based world, companies must implement a unified monitoring solution. In addition to the existing benefits multi-cloud offers, a unified solution gives operators constant, complete visibility into their infrastructure, applications, and operations.

Surviving as a modern enterprise

Improved operational visibility through monitoring is often cited as a top priority among Chief Information Officers (CIOs) and senior operations leadership, and good monitoring is a staple of high-performing teams. Too often, however, it’s implemented as an afterthought, in reaction to changes in the mission-critical systems that power the business. When this happens, organizations struggle to reap the benefits of multi-cloud because they lack the visibility to detect and avoid problems, or to recover from expensive downtime.

Further complicating this challenge, ephemeral infrastructure platforms such as Kubernetes are the new normal, while digital transformation, cloud migration, DevOps, containerization, and other initiatives are compelling movements in the modern enterprise. Although they vary in scope and overlap or intersect in practice, they are unified in purpose: to increase organizational velocity, empowering organizations to ship more changes, faster. While a boon to business initiatives and developer productivity, these practices can dramatically increase the number and duration of “day two” operational challenges. Delaying a solution to these challenges only increases risk exposure and cost.

Future-proof your monitoring

According to Gartner, the number of cloud-managed service providers is expected to triple by 2020. While this is good news for analysts, investors, and operators alike – everyone (except Amazon?) benefits from a competitive market – it suggests that the multi-cloud trend will only become more diverse moving forward. Given the already complex landscape and this forecast, it’s impractical to expect turn-key monitoring solutions to provide sufficient coverage – a different approach is needed.

The good news is that the solution is surprisingly simple: treat monitoring and observability like we do the rest of our DevOps toolchain, as a workflow. When containerization gained popularity and we incorporated Docker and Kubernetes into our multi-cloud strategy, we didn’t have to replace our CI pipelines; we simply shipped containers instead of RPMs, essentially making our CI tools future-proof.

For monitoring and observability, that future-proof solution is the monitoring event pipeline. At the end of the day, there are only so many mechanisms for observing systems: APM and observability client libraries, Prometheus-style /metrics or /healthz endpoints, logs, and good old-fashioned service health checks are a few great examples. Once we start to think about these as workflows that can be automated via monitoring pipelines, we’re empowered to continuously adapt, maintaining visibility and avoiding downtime, in an ever-evolving, increasingly multi-cloud world of IT infrastructure.
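
To make the pipeline idea concrete, here’s a minimal sketch in Go. It normalizes observations from two different mechanisms (an HTTP /healthz probe and a classic exit-code service check) into one common event type, then runs them through a filter stage before handling. Everything here is hypothetical and for illustration only: the Event type, the stage functions, and the localhost:8080/healthz endpoint are assumptions, not any particular product’s API.

```go
package main

// A minimal sketch of a monitoring event pipeline: observations from
// different mechanisms are normalized into a common Event type, then
// filtered and routed by pluggable stages. All names are hypothetical.

import (
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

// Event is the common currency of the pipeline, regardless of how the
// observation was collected.
type Event struct {
	Source string    // which check or endpoint produced the event
	Status int       // 0 = OK, non-zero = failing
	Output string    // raw check output
	Time   time.Time // when the observation was made
}

// checkHealthz probes a Prometheus-style health endpoint and
// normalizes the result into an Event.
func checkHealthz(url string) Event {
	resp, err := http.Get(url)
	if err != nil {
		return Event{Source: url, Status: 2, Output: err.Error(), Time: time.Now()}
	}
	defer resp.Body.Close()
	status := 0
	if resp.StatusCode != http.StatusOK {
		status = 2
	}
	return Event{Source: url, Status: status, Output: resp.Status, Time: time.Now()}
}

// checkCommand runs a good old-fashioned service check (any command
// that signals health via its exit code) and normalizes the result.
func checkCommand(name string, args ...string) Event {
	out, err := exec.Command(name, args...).CombinedOutput()
	status := 0
	if err != nil {
		status = 2
	}
	return Event{Source: name, Status: status, Output: string(out), Time: time.Now()}
}

// filterOK is a pipeline stage: drop healthy events, pass failures
// along to the handler stage.
func filterOK(in <-chan Event, out chan<- Event) {
	for e := range in {
		if e.Status != 0 {
			out <- e
		}
	}
	close(out)
}

func main() {
	events := make(chan Event)
	alerts := make(chan Event)

	go filterOK(events, alerts)

	go func() {
		// Two different observation mechanisms feed one pipeline.
		events <- checkHealthz("http://localhost:8080/healthz") // hypothetical endpoint
		events <- checkCommand("ping", "-c", "1", "example.com")
		close(events)
	}()

	// The handler stage: here we just print; in practice this could
	// page an operator, create a ticket, or trigger remediation.
	for e := range alerts {
		fmt.Printf("ALERT %s at %s: %s\n", e.Source, e.Time.Format(time.RFC3339), e.Output)
	}
}
```

The design point is that the observation mechanism is interchangeable: new sources (logs, client libraries, a new cloud provider) only need to produce events, while the filtering and handling stages stay the same, which is exactly what makes the pipeline approach future-proof.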