These days, serving the backbone corporate IT needs of more than 100,000 employees globally means betting big on the cloud.

That’s what James Hannah, SVP and global CIO of General Dynamics Information Technology, has done in partnership with the Reston, Va.-based aerospace and defense contractor’s 10 business units, each of which has its own CIO who decides autonomously how that division uses digital technologies for its unique business.

And the results are truly multicloud, as Hannah has opted to work with all the top cloud vendors to fill the company’s various back-office needs — AWS, Microsoft Azure, Google Cloud Platform, and Oracle Cloud — as well as Workday for HR and other SaaS vendors for specific requirements. GDIT is now 100% on the cloud, having closed its final brick-and-mortar data center at the end of last year.

“We’ve gone through our digital transformation already and migrated all of our application workloads into either an IaaS or SaaS environment,” says Hannah, whose focus is primarily on corporate systems, leaving each of GD’s other business units to make its own selections. “They’re free to go to whatever cloud they need to meet the needs of their customers,” he says.

Still, the 10 units are not all islands. Hannah’s IT division collaborates with and serves the needs of its “sister” business units where it makes sense, such as hosting financial applications for some of them. And there are overarching digital technologies that traverse General Dynamics’ business portfolio, such as security, where all units are working to implement zero trust across the board.

But Hannah is clear about his mission, which is to provide critical services to the employees who serve GDIT’s high-level customers within the US government’s military-industrial complex and partners around the globe. It is not a candy store.

And GDIT’s full cloud migration, which started pre-pandemic, is paying off nicely.

Laying the multicloud foundation

When the IT division started its digital transformation, Hannah and his team performed a thorough assessment of General Dynamics’ corporate workloads to determine which cloud would be best based on functionality. As part of that process, integrations with other systems and applications were taken into consideration to avoid workloads “traversing from cloud to cloud” or “bouncing all over,” Hannah says.
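A minimal sketch of how such an assessment could be expressed in code, assuming made-up functional-fit scores and a simple penalty for placements that would split tightly integrated workloads across clouds; none of this reflects GDIT’s actual tooling or weights.

```python
# Illustrative sketch (not GDIT's process): score candidate clouds for a workload
# by functional fit, then penalize placements that would force heavy cross-cloud
# traffic with already-placed integration partners.
from typing import Dict, List

def score_placement(
    workload: str,
    candidate: str,
    functional_fit: Dict[str, Dict[str, int]],   # workload -> cloud -> 0-10 fit score
    integrations: Dict[str, List[str]],          # workload -> workloads it integrates with
    current_placement: Dict[str, str],           # already-placed workload -> cloud
    cross_cloud_penalty: int = 3,                # hypothetical weight
) -> int:
    score = functional_fit[workload].get(candidate, 0)
    for dep in integrations.get(workload, []):
        placed = current_placement.get(dep)
        if placed and placed != candidate:
            score -= cross_cloud_penalty         # discourage workloads "bouncing" between clouds
    return score

def best_cloud(workload, clouds, functional_fit, integrations, current_placement):
    return max(clouds, key=lambda c: score_placement(
        workload, c, functional_fit, integrations, current_placement))
```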

“I think that the clouds are quite good. We saw a lot of reduction in cost,” he says. “We were able to get better metrics and reporting. And it increased or strengthened our DR [disaster recovery] posture overall.”

The next move, Hannah says, is to delve deeper into how GDIT can evolve more corporate assets into cloud-native, virtualized applications that can be optimized for the scalability, flexibility, and cost savings of its 100% multicloud infrastructure. Hannah’s team is also constantly learning how to strengthen and shift workloads to optimize performance and, in some cases, move workloads from IaaS to SaaS when it makes sense.

“That’s part of the evolution to the cloud,” he says. “You’re not going to be in a constant state of transformation. For me, it’s more of an evolution, assessing workloads and making sure they are still where they need to be.” 

GDIT has also automated many tasks within its finance systems, such as accounts payable for inter- and intra-company transfers, as well as in its HR and IT business areas.

None of this is surprising for an IT division of a major enterprise these days, and GDIT is big — roughly 30,000 IT employees tend to General Dynamics’ corporate needs.

Skilling up and battening down

General Dynamics’ overall CTO leadership group is looking at generative AI, the implications and governance around it, and how it could potentially be used with customers, Hannah says. But for a defense contractor — one that manufactures nuclear submarines, aerospace systems, and combat systems, among other defense products — it is a complex undertaking that has only just begun, he adds.

Still, the CIO has made use of machine learning models available from one of its cloud providers to train employees for the rapidly evolving digital era and foster upward mobility within GDIT. The initiative is part of GDIT’s Career Hub, which provides employees with training recommendations around skills and certifications to help level up their careers, Hannah says.

“Since going live with that AI modeling capability, we’ve seen about a 30% increase in internal applications driven directly from the Career Hub,” he says.

Employees simply upload their resume or LinkedIn profile to Career Hub and the AI recommends current job openings, similar to the way Netflix makes movie recommendations, the CIO says. It also ties into the company’s learning and development system, providing skills and certification training recommendations that will help employees reach job openings they may not have thought of as suitable because they may presently have only 80% of the required skills.
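For illustration only, here is a toy version of that kind of skills-based matching, assuming hypothetical skill sets, job openings, course names, and an 80% coverage threshold; it is not GDIT’s Career Hub model.

```python
# Hypothetical sketch of skills-based job matching in the spirit of a career hub:
# compare an employee's skills with each opening's requirements, recommend roles
# above a partial-match threshold, and list training to close the remaining gaps.
from typing import Dict, List, Set

def recommend_jobs(
    employee_skills: Set[str],
    openings: Dict[str, Set[str]],        # job title -> required skills
    courses: Dict[str, str],              # skill -> course that teaches it
    match_threshold: float = 0.8,         # surface roles even at ~80% skill coverage
) -> List[dict]:
    recommendations = []
    for title, required in openings.items():
        if not required:
            continue
        coverage = len(employee_skills & required) / len(required)
        if coverage >= match_threshold:
            gaps = required - employee_skills
            recommendations.append({
                "job": title,
                "coverage": round(coverage, 2),
                "training": [courses[s] for s in gaps if s in courses],
            })
    return sorted(recommendations, key=lambda r: r["coverage"], reverse=True)

print(recommend_jobs(
    {"python", "aws", "terraform", "linux"},
    {"Cloud Engineer": {"python", "aws", "terraform", "linux", "kubernetes"}},
    {"kubernetes": "CKA prep course"},
))
```

In this toy run the employee covers four of the five required skills, so the role is surfaced along with the one course that would close the gap.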

Hannah is also deploying automation for lower-level repetitive tasks, freeing up GDIT employees to work on more complex tasks, such as rolling out automation within finance to enable speedier metrics. In this way, GDIT’s use of automation helps employees continuously gain skills that allow not only greater efficiencies for the company but also greater mobility for IT employees.

But if there’s one thing that keeps Hannah up at night, it’s security, which is pivotal for any enterprise, but especially a defense contractor. GDIT and all 10 business units are waiting for executive orders and guidance as part of a three-year security program currently under way. Still, cybersecurity remains Hannah’s primary focus now and over the next 12 months even as the top brass work on the comprehensive security plan.

“The focus is on transforming and evolving the cyber tools that we have … that’s the primary focus with the threats in this environment,” Hannah says. “We’re always under the watchful eyes of bad actors throughout the world. Being part of a group that always has a target on your back means you need to make sure you’re always looking at all the technologies available to improve your cyber posture as you move forward.”

Gartner analyst Daniel Snyder says the US government and military are relying heavily on partnerships with defense contractors such as General Dynamics to transform.

“The Department of Defense relies on thousands of networks that are vital to execute its mission. Over the course of the past few decades, the development process has resulted in layers of stove-piped systems that are difficult to integrate,” he says, noting that as part of its digital transformation strategy, the DoD is overhauling its IT infrastructure to leverage the cloud.

“Much of the future success is hinging on the support of its industrial base with systems integrators such as General Dynamics, Leidos, Raytheon, and Northrop Grumman,” he says.


Italian insurer Reale Group found itself with four cloud providers running around 15% of its workloads, and no clear strategy to manage them. “It was not a result we were seeking, it was the result of reality,” said Marco Barioni, CEO of Reale ITES, the company’s internal IT engineering services unit.

Since then, Barioni has taken control of the situation, putting into action a multi-year plan to move over half of Reale Group’s core applications and services to just two public clouds in a quest for cost optimization and innovation.

Multicloud environments like Reale Group’s are already the norm for 98% of infrastructure-as-a-service or platform-as-a-service users — although not all of them are taking control of their situation the same way Barioni is.

That’s according to a new study of enterprise cloud usage by 451 Research, which also looked at what enterprises are running across multiple public clouds, and how they measure strategy success.

Two-thirds of those surveyed are using services from two or three public cloud providers, while 31% are customers of four or more cloud providers. Only 2% had a single cloud provider.

Those enterprises’ cloud environments became even more complex when taking into account their use of software-as-a-service offerings. Half of those surveyed used two to four SaaS providers, one-third used five to nine providers, and one-eighth used 10 or more. Only 4% said they used a single SaaS solution, no mean feat given the prevalence of Salesforce, Zoom, and online productivity suites such as Microsoft 365 or Google Workspace.

The study, commissioned by Oracle, looked at the activities of 1,500 enterprises around the world using IaaS or PaaS offerings, or planning to do so within the next six months. The research was conducted between July and September 2022.

Three years on from the first COVID-19 lockdowns, it’s clear the pandemic was a significant driver of multicloud adoption for 91% of those surveyed. But now that the immediate necessity of the switch to remote operations and remote management has passed, enterprises are seeking other benefits as they build their multicloud environments.

Why build a multicloud infrastructure?

The two most frequently cited motivations for using multiple cloud providers were data sovereignty or locality (cited by 41% of respondents) and cost optimization (40%). Enterprises in financial services, insurance, and healthcare were most concerned about where their data is stored, while cost was the biggest factor for those in real estate, manufacturing, energy, and technology.

Next came three related concerns: business agility and innovation (30%); best-of-breed cloud services and applications (25%); and cloud vendor lock-in concerns (25%). Going with a single cloud provider could prevent enterprises from accessing new technology capabilities (such as the much-hyped ChatGPT, which Microsoft is using to draw customers to its Azure cloud services), leave them with a second-best service from a cloud provider less invested in a given technology, or allow the provider to hold them hostage and raise prices.

Traditional benefits of duplicating IT infrastructure were least important, with greater resiliency or performance cited by 23% of respondents, and redundancy or disaster recovery capabilities by just 21%.

But there are still many factors holding back multicloud adoption in the enterprise. Cloud provider management was the most frequently cited (by 34% of respondents), followed by interconnectivity (30%). It was a tie for third place, with data governance issues, workload and data portability, regulatory compliance, and ensuring security across public clouds all cited by 24%.

“The degree to which benefits outweigh challenges may depend on whether multicloud is part of a broader IT transformation strategy … or the extent to which it addresses particular cost, organizational or governance concerns,” wrote Melanie Posey, author of the study. Simply having multiple public cloud environments to meet different users’ needs may be good enough for risk mitigation and cost arbitrage for some enterprises, she wrote, while others will want integrated environments in which workloads and data can run across multiple public clouds.

Reality bytes

Reale Group is still straddling those two states as IT leader Barioni moves the company from relationships with four hyperscalers that just happened toward a greater reliance on two that he chose.

His choice of clouds — Oracle’s OCI and Microsoft’s Azure — was constrained by Reale’s reliance on Oracle’s Exadata platform. “Our core applications all run on Oracle databases,” he said.

While several cloud providers offered the packaged services for machine learning and advanced process management he was looking for, the choice of Microsoft to host the remaining business applications came down to latency, he said. Oracle and Microsoft have closely integrated their infrastructure in the regions most important to Reale, allowing the company to build high-speed interconnects between applications running in each cloud. Reale will move its first integrated applications to the cloud in March 2023, he said.

Multicloud management

Johnson Controls is further along in its multicloud journey. It makes control systems for managing industrial processes and smart buildings, some of which can be managed from the cloud-based OpenBlue Platform run by CTO Vijay Sankaran. He said that, while the company has a primary cloud provider, it has chosen to architect its platform to operate across multiple clouds so it can meet its customers where they are.

That multicloud move has meant extra work, connecting everything to a common observability platform, and ensuring all security events feed up to a single, integrated virtual security operations center so that the various clouds can be monitored from a single pane of glass, he said. While the overhead of adding more cloud providers is to be expected, the same problem exists even when dealing with a single hyperscaler, as different regional instances may have specific controls that need to be put in place, he added.
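As a rough illustration of that pattern, the sketch below normalizes simplified security findings from two clouds into one common schema before forwarding them to a single SOC feed; the input shapes, field names, and forwarding function are assumptions, not Johnson Controls’ implementation.

```python
# Illustrative only: map simplified findings from different clouds onto one common
# schema, then forward them to a single SOC feed. Field names are invented for this
# sketch; real provider payloads differ.
COMMON_FIELDS = ("source_cloud", "severity", "resource", "description", "timestamp")

def normalize(source_cloud: str, raw: dict, field_map: dict) -> dict:
    event = {"source_cloud": source_cloud}
    for common_key, provider_key in field_map.items():
        event[common_key] = raw[provider_key]
    return event

AWS_MAP = {"severity": "sev_label", "resource": "resource_id",
           "description": "title", "timestamp": "updated_at"}
AZURE_MAP = {"severity": "severity", "resource": "resource_uri",
             "description": "display_name", "timestamp": "generated_at"}

def forward_to_soc(event: dict) -> None:
    missing = [k for k in COMMON_FIELDS if k not in event]
    if missing:
        raise ValueError(f"event missing fields: {missing}")
    print(f"[{event['timestamp']}] {event['source_cloud']}/{event['severity']}: "
          f"{event['description']} ({event['resource']})")

forward_to_soc(normalize(
    "aws",
    {"sev_label": "HIGH", "resource_id": "i-0abc",
     "title": "Port open to world", "updated_at": "2023-03-01T10:00Z"},
    AWS_MAP,
))
```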

The study also asked enterprises what key outcomes they expected from a multicloud management platform. Only 22% cited the single pane of glass that Sankaran relies on. The top responses were cloud cost optimization (33%), a common governance policy across clouds and integration with on-premises infrastructure (both 27%), improved visibility and analytics (26%), and integration with existing toolsets (25%).

Cost control

Whether an enterprise chooses to spread its workloads across more public clouds or concentrate them on fewer, it all seems to come back to managing cost.

Reale Group’s Barioni has a plan for that involving a core team with a mix of competencies: some technology infrastructure experts, and some with a deep knowledge of accounting. Developers tend to aim for the best technical solution, which is often not the most cost-efficient one, he said.

When applications run on premises, computing capacity — and therefore cost — is limited by what the data center can hold, whereas there are few limits on the computing capacity of the cloud — or its cost. Bringing together the technically minded and financially minded will help Barioni balance cost and performance in this new, unconstrained environment. “Every day, you have to take decisions on prioritizing your workloads and deciding how to optimize the computing power you have,” he said. “It’s a completely new mindset.”
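A minimal sketch of that balancing act, with invented instance options, prices, and performance scores: pick the cheapest option that still meets the workload’s performance floor rather than the technically best one.

```python
# Right-sizing sketch: the "best technical solution" is rarely the cheapest one
# that still meets requirements. Figures are made up for illustration.
options = [
    {"name": "small",  "vcpus": 4,  "hourly_cost": 0.20, "perf_score": 40},
    {"name": "medium", "vcpus": 8,  "hourly_cost": 0.40, "perf_score": 80},
    {"name": "large",  "vcpus": 16, "hourly_cost": 0.80, "perf_score": 150},
]

def right_size(required_perf: int) -> dict:
    viable = [o for o in options if o["perf_score"] >= required_perf]
    if not viable:
        raise ValueError("no option meets the performance floor")
    return min(viable, key=lambda o: o["hourly_cost"])

print(right_size(required_perf=75))   # picks "medium", not the biggest instance
```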


In a 2021 survey, 95% of respondents agreed that a hybrid cloud is critical for success, and 86% planned to invest more in hybrid multicloud.

Hybrid multicloud has emerged as the new design center for organizations of all sizes. Rather than purchasing costly infrastructure upfront to accommodate future growth, you can scale up and down as needed and right-size your environment. Deploying data and workloads in this model offers the potential for incredible value, including improved agility, functionality, cost savings, performance, cloud security, compliance, sustainability, disaster recovery—the list goes on.

However, enjoying the benefits of hybrid multicloud requires organizations to first overcome a variety of challenges. I’ll share some of these challenges as well as practices and recommendations to help your organization realize the full value of your investment.

Challenge 1: Mindset

The cloud isn’t as much a place to go as it is a way of operating. When organizations move from on-premises to hybrid multicloud, it requires a shift in mindset and protocols—an important concept for organizations to embrace. Many of the tools, skillsets, and processes used on-premises must evolve to those used in the cloud. Your applications may need to be refactored. In short, your organization must adapt its way of operating to maximize the value of hybrid multicloud.

Challenge 2: Compliance

Compliance poses another challenge. Wherever your organization puts data, it must comply with industry regulations, and moving data later can rack up expensive egress charges. Your organization must therefore consider in advance where data needs to reside physically and how you will ensure compliance, maintain visibility, and report on your compliance posture.
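One way to make that consideration concrete is a pre-provisioning residency check along these lines; the classifications and region lists are purely illustrative.

```python
# Hypothetical policy check: before provisioning storage, verify the target region
# satisfies the data classification's residency rule, since moving the data later
# would incur egress charges. Region lists here are examples only.
RESIDENCY_RULES = {
    "eu_personal_data": {"eu-west-1", "eu-central-1", "westeurope"},
    "us_healthcare":    {"us-east-1", "us-west-2", "eastus"},
}

def placement_allowed(classification: str, region: str) -> bool:
    allowed = RESIDENCY_RULES.get(classification)
    if allowed is None:
        raise ValueError(f"no residency rule defined for {classification!r}")
    return region in allowed

assert placement_allowed("eu_personal_data", "eu-central-1")
assert not placement_allowed("eu_personal_data", "us-east-1")
```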

Challenge 3: Security

The same is true for cloud security, which is always top of mind for organizations. Your organization must make security as robust as possible across storage, network, compute, and people—essentially every layer. This means that if you’re operating under zero-trust policies, you need to understand how that impacts your hybrid multicloud model.

Challenge 4: Cost optimization

While hybrid multicloud can be incredibly cost-effective, understanding and managing costs across providers and usage can prove incredibly complex. Make sure that, by design, you’re addressing cloud cost optimization challenges upfront, narrowing the focus to minimize complexity while ensuring interoperability. Implement cloud FinOps tools and processes to maximize your investments by enabling broad visibility and cost control across hybrid multicloud. When evaluating cloud provider lock-in, tread carefully to ensure it supports your business strategy.
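A simple FinOps-style roll-up might look like the sketch below, which assumes an invented, already-normalized billing format and aggregates spend by team tag across providers.

```python
# Sketch of cross-provider cost visibility: normalize billing line items into one
# structure and aggregate spend by team tag. The input format is invented for
# illustration; real billing exports differ per vendor.
from collections import defaultdict

line_items = [
    {"provider": "aws",   "service": "ec2",      "team": "platform",  "usd": 1250.00},
    {"provider": "azure", "service": "aks",      "team": "platform",  "usd": 980.50},
    {"provider": "gcp",   "service": "bigquery", "team": "analytics", "usd": 430.25},
    {"provider": "aws",   "service": "s3",       "team": "analytics", "usd": 112.10},
]

def spend_by_team(items):
    totals = defaultdict(float)
    for item in items:
        totals[item["team"]] += item["usd"]
    return dict(totals)

print(spend_by_team(line_items))   # total spend per team across aws, azure, and gcp
```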

Challenge 5: Disaster recovery

Organizations often see disaster recovery as the low-hanging fruit of the hybrid multicloud journey because it eliminates a second data center full of depreciating and idle equipment. Because the way your organization handles disaster recovery will change, you may choose to extend the products you already have or add new approaches and tooling. Regardless, you need a plan in place before you make this transition.

Challenge 6: Dependencies

Understanding and addressing workloads and dependencies across your infrastructure is fundamental to minimizing the risk of issues and outages. Previous methodologies may not apply in hybrid multicloud, especially when it comes to common cloud attributes such as services and self-service automation. That means you must complete application services dependency mapping as part of assessment and planning activities. This work includes determining which applications need to be refactored or modernized to achieve performance objectives and operate efficiently.
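For example, dependency mapping can be modeled as a graph and used to derive a safe migration order, as in this sketch with hypothetical application names (Python 3.9+ for graphlib).

```python
# Illustrative dependency mapping: model application-to-application dependencies as
# a graph and derive a migration order so nothing moves before the services it
# depends on. Application names are hypothetical.
from graphlib import TopologicalSorter

dependencies = {
    "web_frontend": {"order_api"},
    "order_api":    {"customer_db", "auth_service"},
    "auth_service": {"customer_db"},
    "customer_db":  set(),
}

migration_order = list(TopologicalSorter(dependencies).static_order())
print(migration_order)   # e.g. ['customer_db', 'auth_service', 'order_api', 'web_frontend']
```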

Challenge 7: Skillsets

Not surprisingly, the skillsets required to support hybrid multicloud differ from those needed to support a traditional on-premises environment. Ensuring your organization has the right skillsets to support this work can be challenging. Therefore, it’s essential to understand the necessary toolsets and skills so you can put a plan in place for addressing training gaps and potentially supplementing staff.

Accelerate your hybrid multicloud journey

Moving to hybrid multicloud is a highly complex endeavor that, when done well, can pay off in spades for your organization. A successful journey requires careful, detailed planning that takes these and other challenges into account. The more challenges you solve on the front end, the faster and more effective your transition will be on the back end.

GDT has been accelerating customer success for more than 26 years, helping countless customers streamline their hybrid cloud journeys. Our experts provide architecture, advisory, design, deployment, and management services, all customized to your specific needs, providing you a secure and cost-effective infrastructure that can flex and scale as business requirements change.

Contact the experts at GDT to see how we can help your business streamline your hybrid multicloud journey.


Is the move to cloud a modern gold rush?

This seems to be the case for many organizations as they embark on a cloud strategy to support their business goals. But there are pitfalls along the way: the cloud is, after all, simply an enabling technology and not a solution in itself.

Organizations are increasingly taking a considered approach to the adoption of Amazon Web Services, Microsoft Azure or Google Cloud Platform.

They want to innovate to create new applications, get things to market faster and be more competitive. Yet security demands, skills shortages and cost challenges, along with high levels of application complexity, tend to hold them back.

When multiple applications are deployed on one or more cloud platforms, it’s difficult to keep track of the volume of resources and the frequency of change in the organization’s environment, let alone do so securely and cost-effectively.

How do they keep track to avoid rogue spending? Who’s got their hand on the switch to turn off the services they don’t need at specific times?

Another common challenge is maintaining the specialist skills needed to manage multiple cloud technologies along with legacy corporate data centers and traditional applications.

In a 451 Research report on enterprise transformation, 30% of respondent organizations agreed that they lacked the expertise needed to manage cloud platforms.

A new platform for charting the way forward

At NTT, we advise our clients on bringing together and managing all the components – cloud platforms, infrastructure and software – that they need to deliver their desired outcomes.

But we also support the optimal execution of their strategy with our Adaptive Cloud to Edge Platform. In short, we help our clients to choose the best execution venue for their workload and then deploy, operate and optimize their applications in the cloud.

The platform brings together our 20 years of cloud-management experience and ambition to innovate without compromise.

By enabling AIOps, it delivers real-time analytics, automation, observability, security and service-delivery integration across the multicloud environment. It allows us to orchestrate and automate activities that drive business outcomes.

Built to control costs and meet compliance requirements

The platform makes our services more efficient, cost-effective, automated and secure across disparate cloud technologies, which in turn enables us to better meet our clients’ compliance and cost-control needs.

It also comes with built-in guardrails, allowing our clients a level of flexibility or self-service without introducing risk into what they’re doing or bypassing their governance and security requirements.

Efficient delivery using infrastructure as code

Our clients want a more streamlined, automated and reliable approach to delivering their solutions using infrastructure as code, and our platform supports that.

It’s not simply adding a layer of abstraction; it’s designed to let clients access their resources as efficiently as possible.

They can release their code using our platform, to be deployed through a managed process.

This means they can free up valuable resources to focus on development while increasing their velocity of software delivery.
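As a toy illustration of that approach, the sketch below declares desired infrastructure as data and applies it through a repeatable process with a governance guardrail; the resource types, regions, and policy are invented, and this is not the platform’s actual API.

```python
# Toy infrastructure-as-code illustration: desired state is declared as data, then
# applied through a managed, repeatable process with a guardrail check before
# anything is created. Values and policy are examples only.
desired_state = {
    "network": {"type": "vpc", "cidr": "10.0.0.0/16"},
    "cluster": {"type": "kubernetes", "nodes": 3, "region": "eu-west-1"},
}

ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}   # example governance guardrail

def validate(state: dict) -> None:
    for name, resource in state.items():
        region = resource.get("region")
        if region and region not in ALLOWED_REGIONS:
            raise ValueError(f"{name}: region {region} violates policy")

def apply(state: dict) -> None:
    validate(state)
    for name, resource in state.items():
        print(f"reconciling {name} ({resource['type']}) to desired state")

apply(desired_state)
```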

Visibility, control and governance across the board

The platform is the heart of our Multicloud as a Service offering because it provides visibility, control and governance across all clouds and for all workloads.

It enhances the cloud providers’ native control planes with AI-backed insights for anomaly detection, correlation, forecasting, automated operations, agile deployments and more, without limiting direct access to the cloud.

These elements give organizations more comfort in consuming these services in a way that is closely aligned with their needs.
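To make the anomaly-detection piece concrete, a statistical baseline check of the following kind is one simple form such insights can take; the metric values and threshold are illustrative only.

```python
# Minimal sketch of statistics-backed anomaly detection over cloud metrics: flag
# points that sit far from the recent mean. Threshold and data are illustrative.
from statistics import mean, stdev

def anomalies(samples, threshold=2.5):
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [(i, x) for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]

cpu_utilization = [38, 41, 40, 39, 42, 37, 95, 40, 41]   # one obvious spike
print(anomalies(cpu_utilization))                         # flags the spike at index 6
```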

The value of a managed service

Essentially, we enable our clients to innovate by deploying, operating and monitoring applications with speed and efficiency across their choice of cloud technologies.

This can be difficult for many clients to do themselves because most have managed their technology in a particular way for years and now have to make a step change into the cloud paradigm. But NTT has operated cloud platforms and delivered managed services across multiple industries and technologies for more than two decades, so we’re perfectly placed to help them make the leap.

Some of the components of our platform may be familiar, but how we bring them together is unique. Our many years of operating experience have been baked into this platform to make it a true differentiator.

That’s the value of a platform-enabled managed service: you’ll get things done quicker with high proficiency – and at a lower cost – because you’re getting access to a robust product driven by proven expertise and tailored to accelerate your organization’s digital transformation.

Read more about our Adaptive Cloud to Edge Platform.

George Rigby is Vice President of Go-to-Market: Managed Cloud and Infrastructure Services at NTT


Magna International has made a big splash of late displaying the highly anticipated Fisker Ocean SUV electric vehicle (EV) and a pilot of a pizza delivery robot at trade shows.

But you won’t see the company brand anywhere on any vehicle. The Aurora, Ontario- and Troy, Mich.-based company, founded 60 years ago as an automobile supplier for the Big 3 in Detroit, does it all for automakers on both sides of the Atlantic Ocean.

The advanced automotive multinational company — which describes itself as a mobility technology company — got its humble start making brackets for the sun visor in GM vehicles. Today, Magna employs 170,000 and generates almost $37 billion annually providing contract assembly services and manufacturing advanced driver assistance systems (ADAS), automated seating, bodies and chassis, powertrain systems, as well as a multitude of mechatronics, digital imaging radars and sensors, body exteriors, and yes, advanced lighting and mirrors, too.

Magna, for instance, has built 3.7 million vehicles for OEMs, including the E-Pace for Jaguar, and is putting the final touches on the Fisker Ocean SUV EV for renowned automobile designer Henrik Fisker, based on a modified version of the Magna-developed EV architecture that underpins Fisker’s FM29 platform.

But one thing is clear: Magna has no intention of entering the automobile industry. Contract assembly is simply part of its DNA. “In terms of automotive, sometimes it’s easier to describe what we don’t do than what we do,” quips Boris Shulkin, senior vice president and chief digital and information officer for Magna, who has held various positions in the company during his 20-year tenure, including EVP of technology and investments, SVP of technology and development, and VP of R&D.

“What makes us unique is that we’re able to design and manufacture vehicles for our customers but that’s not all that we are about,” he says.

Here, information technology plays a key role. The company’s ongoing digital journey, in tight partnership with OEMs and multiple cloud providers, has been expanding and transforming every aspect of Magna’s business and manufacturing processes for many years.

Accelerating innovation in the cloud

“We are very much cloud-native today right across the enterprise,” says Shulkin, emphasizing that data collection and analysis are core business processes for the company’s complex system development, prototyping, and manufacturing lines.

Magna ignited its migration to a hybrid, multicloud infrastructure roughly six years ago based on partnerships with AWS, Microsoft, and Google.

“We manage that part of the cloud infrastructure with our suppliers and our partners in a very hybridized approach,” Shulkin says, noting that some of the data is stored in private clouds and some in the public cloud.

Whether Magna is developing its advanced driver assistance systems, powertrain or chassis systems, energy storage systems, or LIDAR sensors and radars, a massive amount of data collection, testing, and validation is required for the millions of miles driven by prototype and production vehicles in all weather conditions and terrain.

There is so much data, it typically comes in petabytes — and that means an old-school approach to data transfer.

“As we’re collecting the data, we’re shipping the data [to our cloud providers] by FedEx,” says Shulkin, who oversees 500 employees in Magna’s global IT staff and 1,400 technology contractors spread across six business units globally. “For the amount of data we’re collecting daily, believe it or not, it’s easier and faster to send a hard drive from a data location to the private cloud than to ship it over the dedicated connection to the cloud. The throughput of most capable modern networks is not big enough.”
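A quick back-of-the-envelope calculation shows why: assuming, say, a dedicated 10 Gbps link (the speed is a hypothetical figure, not Magna’s), moving a single petabyte takes more than a week even at full line rate.

```python
# Back-of-the-envelope check on Shulkin's point, with an assumed link speed: even a
# dedicated 10 Gbps connection running flat out needs over a week to move one
# petabyte, before any protocol overhead or contention.
PETABYTE_BITS = 1e15 * 8          # 1 PB expressed in bits
LINK_GBPS = 10                    # hypothetical dedicated connection

seconds = PETABYTE_BITS / (LINK_GBPS * 1e9)
print(f"{seconds / 86400:.1f} days per petabyte at {LINK_GBPS} Gbps")   # ~9.3 days
```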

Magna has developed its data pipeline and tool chains in house and in concert with its cloud partners. “It is the digital plumbing,” he says. “It’s an enabler for operational efficiency.”

Using data design and analytics platforms, Magna engineers build prototypes of subsystems, parts, and vehicles electronically — rather than physically. “It’s about building the digital twins,” Shulkin says. “The ability to use the data proactively as opposed to reactively. That is where the business value comes from.”

Just as it has for Magna, the cloud has been a key enabler of innovation for a wide array of companies, says Gartner analyst Mike Ramsey.

“The cloud helps companies make use of a huge volume of data and run analytics and advanced engineering techniques leveraging massive data centers rather than overtaxed on-site computers,” he says. “It also helps them collaborate around the world, speeding up innovation and allowing for 24/7 development. Collaboration and the ability to scale up quickly for huge computational requirements is an immense value for cloud.”

Data lake as fuel for innovation

Magna is in the process of building what it dubs its enterprise digital platform — a data lake to address its big data problem. In conjunction with its cloud partners, the company will use commercial products and tools such as Snowflake to establish a massive, universal data pool that can be tapped into by all its enterprise employees.

But most important, Shulkin says, is creating and managing standardized interfaces that allow all business units globally to employ the data they need.

“What we’re in the process of doing is creating standard interfaces between all of this in order to enable people to have seamless access to it,” he says. “Creating the interfaces allows many employees to create the digital twin without changing 50 ERP systems overnight.”
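In code, such a standard interface might look like the sketch below: a catalog maps logical dataset names to governed tables, and one access function resolves them regardless of which warehouse client actually runs the query. The dataset names and catalog structure are hypothetical.

```python
# Hypothetical sketch of a "standard interface" over a data lake: business units ask
# for a logical dataset and a single access layer resolves it to the right governed
# table, without every team learning dozens of ERP schemas.
DATASET_CATALOG = {
    "supplier_quality": {"table": "lake.quality.supplier_metrics", "owner": "operations"},
    "test_mileage":     {"table": "lake.adas.vehicle_test_miles",  "owner": "engineering"},
}

def read_dataset(name: str, run_query) -> list:
    entry = DATASET_CATALOG.get(name)
    if entry is None:
        raise KeyError(f"dataset {name!r} is not registered in the catalog")
    return run_query(f"SELECT * FROM {entry['table']}")   # run_query supplied by the platform

# Any warehouse client (Snowflake, BigQuery, etc.) can be passed in as run_query.
```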

Data governance is another important aspect of Magna’s enterprise digital platform, which is used globally and subject to various rules and regulations by providers and officials in each nation. Streamlining that is critical to efficiencies.

“If I had to summarize the enterprise digital platform, it’s about enabling operational efficiency, improving the bottom line, and putting trusted data at the fingertips of decision makers as early as possible so they can make proactive decisions,” Shulkin says.

Magna is enjoying the fruits of its technical mastery in myriad ways, from production efficiency to its status as No. 1 in sales in North America and a stellar reputation that led Henrik Fisker to select the company to produce his Ocean SUV, due to debut within months.   

The company is reluctant to detail that forthcoming contract assembly pact with Fisker. But Shulkin is eager to discuss the extensive management of data, digital tools, and digital processes Magna employs to create digital twins and prototypes of next-generation EVs and mobility systems.

“Because we’re dealing with the size of clouds where it starts in petabytes, and some of it in private clouds and some of it in public clouds, Magna is the one that manages it through both the development, testing, and validation,” he says, adding that enterprise CIOs with similar big data challenges must take the driver’s wheel to ensure that all the technical and governance requirements related to their complex hybrid and multicloud enterprises are properly handled.
