Cloud technology is a springboard for digital transformation, delivering the business agility and simplicity that are so important to today’s business. Cloud is also a powerful catalyst for improving IT and user experiences, with operating principles such as anywhere access, policy automation, and visibility.

The benefits of cloud for the business, for IT operations, and for employee experiences are clear. But what if you could take the best principles of cloud and apply them across your entire IT infrastructure?

Simpler operations belong everywhere—not just the cloud

There’s no reason that the benefits of cloud need to be limited to the cloud. With the right strategy, platforms, and solutions, organizations can bring the cloud operating model to the network and across the entire cloud and network IT stack. In fact, in a recent IDC study, 60% of CIOs stated they are already planning to modify their operating model to manage value, agility, and risk by 2026.

Transitioning to this new operating model unlocks more benefits for IT leaders, in more environments and use cases. It simplifies operations for on-premises and cloud infrastructures, cutting down the complexity and fragmentation created by disconnected tools and consoles—and the different skill sets needed to work with them.

Expanding the cloud operating model also sets the stage for better collaboration between network, development, and cloud operations. By introducing a common model and language that transcends operational silos, this approach reduces friction at organizational handoffs. The result: teams can work together to solve problems more smoothly, and processes become more consistent, predictable, and less prone to manual errors.

Bringing the cloud operating model to the network helps your teams execute faster and be more agile. It can automate tasks such as deploying a new distributed application for users in the home and office. For example, with a cloud-managed SD-WAN, a company can establish connectivity and security in about an hour. With a traditional siloed approach, those same steps could take NetOps, DevOps, and SecOps teams days.
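To make the contrast concrete, here is a minimal sketch of what "one automated step" can look like. Everything in it (the `Site` type, `build_site_config`, the tunnel and security fields) is invented for illustration, not any specific vendor's SD-WAN API; the point is that connectivity and security are rendered from a single declarative definition instead of separate NetOps and SecOps tickets.

```python
# Hypothetical sketch: provisioning connectivity and security for a new site
# as one declarative, automated step. All names and fields are invented.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    users: int
    location: str

def build_site_config(site: Site, security_profile: str = "standard") -> dict:
    """Render one config covering both WAN connectivity and security policy,
    so NetOps and SecOps work from the same definition."""
    return {
        "site": site.name,
        "wan": {
            # Larger sites get a redundant tunnel (an arbitrary example rule).
            "tunnels": 2 if site.users > 50 else 1,
            "qos": "voice-priority",
        },
        "security": {"profile": security_profile, "url_filtering": True},
    }

# One loop provisions a branch office and a home worker identically.
configs = [build_site_config(s) for s in
           [Site("branch-nyc", 120, "NYC"), Site("home-office-7", 1, "remote")]]
```

In a real deployment the rendered config would be pushed to a cloud-managed controller; the sketch only shows why a common, policy-driven definition collapses days of cross-team handoffs into a repeatable step.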

Once an application is up and running, the cloud operating model can support greater visibility into cloud and data center operations, application deployment, and performance. When you have improved end-to-end visibility, you can react more quickly. Your teams can troubleshoot faster, tune performance more easily, and enjoy a more intuitive experience as they do it.

When you simplify IT, better experiences and outcomes follow

What happens when the cloud operating model is brought to the network? Organizations gain the benefits of a simplified IT approach and better user experiences. But that’s not all. It also frees IT leaders to focus, innovate, and deliver better business outcomes.

Improving the application experience

Applying the cloud operating model expands visibility, creating an end-to-end view from the network to the internet to the cloud. That view enables more consistent governance across the infrastructure and helps ensure a better application experience for every user.

Powering a more agile, proactive business

Making IT more agile ripples across the whole organization. By automating manual processes, you can get ahead of business changes, deploying resources to support new applications and meeting the evolving needs of business stakeholders faster.

Controlling costs

Expanding a common operating model helps your teams work smarter with consistent management of the deployment, optimization, and troubleshooting lifecycles, both in the cloud and on-premises.

Breaking down silos for productivity

Cloud operating principles can enable consistent governance that helps bring down the barriers between siloed cloud and network teams—and help IT move beyond fragmented operations with different policies and processes.

Applying stronger security everywhere

Cloud consistency can also enhance security. With automation and improved end-to-end visibility, you can build security into every environment and make automated security updates an integral part of all lifecycle management.

Bring the best of the cloud across your infrastructure

There’s no “one size fits all” approach to a cloud operating model. It needs to be designed and tailored to align with each organization. With the right strategy, platforms, and services, you can take a big step toward simplifying IT to deliver unified experiences and improved business agility.

Discover how.

Digital Transformation

Heading down the path of systems thinking for the hybrid cloud is the equivalent of taking the road less traveled in the storage industry. It is much more common to hear vendor noise about direct cloud integration features, such as a mechanism to move data on a storage array to public cloud services or run separate instances of the core vendor software inside public cloud environments. This is because of a narrow way of thinking that is centered on a storage array mentality. While there is value in those capabilities, practitioners need to consider a broader vision.

When my Infinidat colleagues and I talk to CIOs and other senior leaders at large enterprise organizations, we speak much more about the bigger picture of all the different aspects of the enterprise environment. The CIO needs that environment to be as simple as possible, especially if the desired state is a low investment in traditional data centers, which is the direction the IT pendulum continues to swing.

Applying systems thinking to the hybrid cloud is changing the way CIOs and IT teams approach their cloud journey. Systems thinking takes into consideration the end-to-end environment and the operational realities associated with it. The environment comprises several components, each contributing different value, that together support an overall cloud transformation. Storage is a critical part of the overall corporate cloud strategy.

Savvy IT leaders have come to realize the benefits of both the public cloud and the private cloud, culminating in hybrid cloud implementations. Escalating public cloud costs will likely reinforce hybrid approaches to storage and swing the pendulum back toward private cloud in the future. Beyond serving as a transitional path, though, the main reasons for using a private cloud today are control and cybersecurity.

Being able to create a system that can accommodate both of those elements at the right scale for a large enterprise environment is not an easy task. And it goes far beyond the kind of individual array type services that are baked into point solutions within a typical storage environment.

What exactly is hybrid cloud?

Hybrid cloud is simply a world where you have workloads running in at least one public cloud component plus a data center-based component. That could be a traditionally owned data center or a co-location facility, but it is an environment where the customer, not a vendor, is responsible for control of the physical infrastructure.

To support that deployment scenario, you need workload mobility. You need the ability to quickly provision and manage the underlying infrastructure. You need visibility into the entire stack. Those are the biggest rocks among many factors that determine hybrid cloud success.

For typical enterprises, using larger building blocks on the infrastructure side makes the journey to hybrid cloud easier. There are fewer potential points of failure, fewer "moving pieces," and greater simplification of the existing hybrid or physical infrastructure, whether it is deployed in a data center or in a co-location environment. This deployment model can also dramatically reduce overall storage estate CAPEX and OPEX.

But what happens when the building blocks for storage are small – under a petabyte or so each? There is inherently more orchestration overhead, and now vendors are increasingly dependent on an extra “glue” layer to put all these smaller pieces together.

Working with bigger pieces (petabytes) from the beginning eliminates a significant amount of that complexity in a hybrid cloud. It's a question of how much investment a CIO wants to put into different pieces of "glue" between different systems vs. getting larger building blocks conducive to a systems thinking approach.

The right places in the stack

A number of storage array vendors highlight an ability to snap data to public clouds. There is value in this capability, but less than you might expect once you think at a systems level. Large enterprises will most likely want backup software with routine, specific schedules across their entire infrastructure, coordinated with their application stacks. IT managers do not want an array moving data the application doesn't know about.

A common problem is that many storage array vendors focus on doing it within their piece of the stack. In fact, the right answer most likely lives higher in the stack than the individual arrays, at the backup software layer. It's about upleveling the overall thought process to systems thinking: deciding what SLAs you want to achieve across your on-prem and public cloud environments. The right backup software can integrate with the underlying infrastructure pieces to provide that.
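As a rough illustration of SLA-driven thinking at the backup layer, consider the sketch below. The tier names, RPO values, and function are all placeholders invented for this example; the point is that backup frequency and data movement are derived from a system-level SLA and coordinated with the application, rather than triggered independently by an individual array.

```python
# Illustrative sketch (all names and numbers are invented placeholders):
# deriving backup behavior from a system-level SLA tier.

RPO_HOURS = {"gold": 1, "silver": 8, "bronze": 24}  # assumed recovery point objectives

def backup_plan(workload: str, tier: str, app_consistent: bool) -> dict:
    """The backup layer coordinates with the application stack, so data
    only moves when the application has been quiesced (app-consistent)."""
    if not app_consistent:
        # Refuse to copy data the application doesn't know about.
        raise ValueError(f"{workload}: backup must be application-coordinated")
    return {
        "workload": workload,
        "frequency_hours": RPO_HOURS[tier],
        # Example policy: higher tiers also get an offsite (cloud) copy.
        "offsite_copy": tier != "bronze",
    }

plan = backup_plan("erp-db", tier="gold", app_consistent=True)
```

The design choice this sketch encodes is the one argued above: the decision of when and where data moves belongs to a layer that can see both the SLA and the application state, not to each array in isolation.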

Hybrid cloud needs to be thought of holistically, not as a “spec checkbox” type activity. And you need to think about where the right places are in this stack to provide the functionality.

Paying twice for the same storage

Solutions that involve deploying another vendor's software on top of storage you already have to pay for from the hyperscaler mean paying twice for the same storage, which makes little sense in the long term.

Sure, it may be an acceptable transitional solution. Or if you're deeply invested in the vendor's APIs or way of doing things, then maybe it's a reasonable accommodation. But the end state is almost never going to be a situation where the CIO signs off on checks to two different vendors for the same bits of data. It simply doesn't make sense.

Thinking at the systems level

Tactical issues get resolved when you apply systems thinking to enterprise storage. Keep in mind:

- Consider where data resiliency needs to be orchestrated: within individual arrays, or better positioned as part of an overall backup (or similar) strategy
- Beware of just running the same storage software in the public cloud; it's a transitional solution at best
- Cost management is critical

On the last point, take a good look at the true economic profile your organization is getting on-premises. Vendors such as Infinidat offer cloud-like business models and OPEX-based consumption that lower costs compared to traditional storage infrastructure.

Almost all storage decisions are fundamentally economic decisions, whether it’s a direct price per GB cost, the overall operational costs, or cost avoidance/opportunity costs. It all comes back to costs at some level, but an important part of that is questioning the assumptions of the existing architectures.

If you’re coming from a world where you have 50 mid-range arrays, and you have a potential of reducing the quantity of moving pieces in that infrastructure, the consolidation and simplification alone could translate into significant cost benefits: OPEX, CAPEX, and operational manpower. And that’s before you even start talking about moving data outside of more traditional data center environments.
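The consolidation arithmetic can be sketched in a back-of-the-envelope model. The figures below are placeholder assumptions chosen for illustration, not vendor pricing; the structure of the calculation (support contracts plus admin labor, scaled by system count) is the point.

```python
# Toy consolidation model with placeholder numbers (not real vendor figures):
# comparing many mid-range arrays against a few large consolidated systems.

def annual_opex(num_systems: int, support_per_system: float,
                admin_hours_per_system: float, hourly_rate: float) -> float:
    """Yearly support/maintenance fees plus administrative labor."""
    return num_systems * (support_per_system +
                          admin_hours_per_system * hourly_rate)

# Assumed: 50 mid-range arrays vs. 3 larger systems (which cost more each
# to support but far less in aggregate).
before = annual_opex(50, support_per_system=20_000,
                     admin_hours_per_system=200, hourly_rate=75)
after = annual_opex(3, support_per_system=60_000,
                    admin_hours_per_system=300, hourly_rate=75)
savings_pct = 100 * (before - after) / before  # ~86% in this toy scenario
```

Even with different inputs, the shape of the result holds: OPEX scales roughly with the number of moving pieces, which is why reducing 50 arrays to a handful of systems can dominate the savings before any data leaves the data center.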

Leveraging technologies, such as Infinidat’s enterprise storage solutions, makes it more straightforward to simplify and consolidate on the on-prem side of the hybrid cloud environment, potentially allowing for incremental investment in the public cloud side, if that’s the direction for your particular enterprise.

How much are you spending to maintain the incumbent solutions in standard maintenance and support subscription fees? Those fees add up significantly. Then consider the staff time and productivity needed to support 50 arrays when you could be supporting three systems, or one. Look holistically at the real costs, not just what you're paying the vendors. What are the opportunity costs of maintaining a more complex traditional infrastructure?

On the public cloud side, more than a billion dollars of VC money has gone into cloud cost management tools, yet many companies, particularly enterprises early in their cloud transformation, are not taking full advantage of them. Cost management and the automation around it, and the degree of work you can put into it for meaningful financial results, are not always the highest priority when you're just getting started. The challenge with not baking it in from the beginning is that it's harder to graft in once processes become entrenched.

For more information, visit Infinidat.

Hybrid Cloud

Skills shortages have always been a fact of life for CIOs. Today, however, the labour market is tighter than ever, and CIOs face multiple challenges.

Yet a recent survey by Pluralsight suggests that IT employers could do more to retain staff by upskilling them so that they can take on bigger and better roles.

Here’s how the situation looks from the perspective of the IT professionals surveyed. Four out of 10 (37%) IT professionals are not very confident that their tech skills are being used to their fullest potential. Three out of 10 (29%) don’t feel confident that their current job provides opportunities for growth. IT professionals also say they encounter barriers to upskilling:

- 47% say they are too busy and that other demands on their time prevent learning
- 33% identify employer cost constraints
- 27% blame a "distracting work environment"

If this all sounds somewhat negative, let’s look at the potential upside of upskilling existing employees:

- Employees now typically regard opportunities to learn and grow on the job as the top driver of a positive work culture. This, in turn, makes employees happier in their work and drives them to recommend working for their employer
- Improving the skills of existing employees is cheaper than trying to hire new talent (new external hires tend to cost up to 20% more than upskilled workers, receive lower performance evaluations in their first two years, and have higher exit rates)
- McKinsey estimates that effective reskilling results in productivity gains of between 6% and 12%
- In general, employers believe that offering short courses to learn new skills has a bigger positive impact on their business than other options, including longer courses or hiring apprentices

Upskilling existing employees is a powerful response to tight labour market conditions. Among the large enterprises focusing on home-grown talent is BT, which has invested heavily in helping its IT staff develop skills fit for the cloud and agile development.

Notably, BT’s upskilling modules are concise, which helps to get around objections based on employees being “too busy”. As Deepak Channa, former director for QA and Test at BT, puts it: “The moment people see that a development session is an hour, they switch off. We wanted a learning solution that could be used in a 10-minute break.”

One of the more intriguing conclusions in Pluralsight’s research is that 52% of IT professionals consider leaving their job at least once a month.

You might say that thinking about something isn’t the same as doing it. But the evidence suggests that employees do act on their thoughts. On the basis of a survey of 2,000 employees, for example, LinkedIn reports that those who feel their skills are not being used well are ten times more likely to be looking for a new job.

Faced with odds like this, it’s clear that CIOs need to think seriously about whether they are making the most of the talent they already employ.

Staff Management