IT leaders today are facing more challenges than ever before. As you look to shape your winning strategies, the rules of the game keep changing. Environments are more dispersed and dynamic, with attack surfaces and vectors expanding, and new threats emerging. Applications are no longer confined to desktops and devices but are spread across multiple clouds. Work models are evolving as people are more mobile and workplaces become more distributed.

Seeing and protecting people, places, and things in highly distributed and dynamic environments can be daunting. Today’s security stacks have become a complex patchwork of point solutions from many vendors – an issue dubbed “tool sprawl.”

As you work to overcome these challenges, your users’ expectations keep rising. Assured application performance is a requirement. IT teams must therefore deliver more bandwidth and less latency—against a backdrop of stakeholder demands for service innovation.

At larger organizations in particular, IT functions are split by discipline, such as network operations or security operations. Each of these teams has its own goals, tools, processes, and expertise. This means collaboration is slow, innovation lags, and tackling organizational issues is difficult. According to a recent study, almost half of business leaders (48%) see competing priorities between teams as a top roadblock to collaboration.

For IT leaders, it all adds up to an environment that’s more complex, less predictable, and harder to scale.

Moving beyond silos

The myriad challenges facing IT teams today are not confined to a single technology, and neither is the solution. What’s needed is an approach that brings technologies closer together operationally—one that empowers teams to scale up fast and deliver more bandwidth and better application performance while assuring security everywhere. It’s an architectural evolution based on converging networking and security, enabled by applying the principles of the cloud operating model.

Secure Access Service Edge (SASE) is a compelling example of this convergence. We can see its potential in a law firm with 200 employees and multiple offices that was embracing hybrid work but needed to keep its partners connected to the tools essential for serving clients. The firm wanted to move away from on-premises systems because its headquarters is located in a hurricane-prone region. Migrating workloads to the cloud offered more business resilience, but the solution had to meet strict privacy requirements. It also had to be simple to deploy and manage.

The law firm chose a SASE architecture, which converges networking and security into a cloud-centric service to support secure, seamless connectivity with anytime, anywhere access for any user, device, or location. The migration happened in hours and was handled by staff without deep engineering expertise. Now, users can securely access the legal applications they need from anywhere, and onboarding remote workers is ten times faster for the firm’s IT team than it was before.

Convergence drives competitive edge

Converging networking and security in the cloud is a powerful way to help enterprise IT leaders stay ahead of change. But convergence also offers service providers a powerful route to staying competitive.

Innovations such as routed optical networking mean service providers can converge IP and optical layers for more efficient operations. A major service provider in Ethiopia is using routed optical networking to drive down network costs and simplify network management while delivering high-speed internet to businesses where infrastructure had been lacking.

Routed optical networking means the service provider can fast-track planning, design, and activation to scale out new services rapidly—all while reducing the number of devices in the network to optimize fiber capacity and increase availability and resiliency.

As a result, this service provider will be Ethiopia’s first ISP to offer a single package that includes high-speed broadband internet access, IPTV, and voice services.

It’s time to unlock the potential of convergence

Nonstop change is a given. But today’s IT challenges also create an opportunity to build a simpler, more consistent environment that’s easier to secure and manage. Converging networking and security operations to align more closely with the infrastructure enables IT to respond to disruption more quickly and proactively. It can also provide a springboard for improved innovation, seamless user experiences, and better business outcomes.

With a converged approach to networking and security, enabled by cloud operating principles, CIOs can build a more collaborative, agile organization that will thrive in an unpredictable landscape.

See how to simplify IT and stay one step ahead in an ever-changing world.

If digital transformation was about driving fundamental change within the company, then its next chapter will be far more outward-looking. This is about being digital-first: to build digital businesses that are viable and sustainable in the long term. Rather than just leveraging digital technology to seize new opportunities, such organisations are poised to create operating models for meeting evolving customer needs. In fact, 95% of CEOs globally already see the need to adopt a digital-first strategy.

But what does it mean to be a digital business? Firstly, digital businesses embrace a digital-first enterprise strategy led by the CEO and other C-suite executives. They use technology to stay competitive, shifting their priorities from just driving efficiency. They are fixated on delivering business outcomes at scale with digital innovation programs. They create value through digital technologies.

This change can be seen in how most Asia/Pacific Japan (APJ) CIOs are taking up the role of strategic advisor and partner, collaborating with their business counterparts in operations and product development. And with revenue generation becoming an integral part of the CIO’s breadth of responsibilities, it’s clear that technology is taking on a leading role in value creation.

More and more businesses today are adopting a digital business model as a stepping stone towards becoming Future Enterprises. The Future Enterprise is International Data Corporation’s (IDC’s) vision of how enterprises should operate and invest, not only to achieve measurable business goals and outcomes but to participate in the new era of digital businesses. This is where forward-thinking organisations will thrive by attracting digital talent, improving enterprise intelligence, scaling digital innovation, and more.

To celebrate and recognise the APJ leaders and businesses who have challenged themselves to become a digital business, IDC has launched anew the APJ IDC Future Enterprise Awards. Last year’s standout winners included companies and individuals such as:

Midea Group (China), Future Enterprise of the Year, for deploying AI and advanced digital technologies to enhance its user experience across an end-to-end value chain while providing digital empowerment for all employees and partners to create a flexible, labour- and energy-efficient supply chain.
Maria Beatriz A. Adversalo of Malayan Insurance Co., Inc. (Philippines), recognized as CIO of the Year in Asia/Pacific, for leading the company’s digital transformation program strategy, which includes the deployment of web apps, portals, APIs, OCR, RPA, analytics, and cloud, resulting in increased digitalised policy issuance premiums and savings in manhours and software subscriptions.
James Chen of CTBC Bank (Taiwan), lauded as CEO of the Year in Asia/Pacific, for his forward-thinking leadership in strengthening the bank’s digital technology services by investing over TWD7.67 billion to modernise its information core and transform its technology to better serve digital customers.
Zuellig Pharma Holdings Pte. Ltd., Best in Future of Intelligence in Singapore, for leveraging data analytics to build a data superhighway that connects all its current and future digital and data solutions. Anchored in the mission of making healthcare more accessible, it built three main pillars of service—commercial excellence, supply chain analytics, and business intelligence—to deliver actionable intelligence and insights. As a result of improved insights and services, the data analytics team has secured collaborative projects with over 30 principals and generated more than US$8 million in revenue in the last 18 months.

Entries will be judged against these critical capabilities of the Future Enterprise:

Future of Trust
Future of Industry Ecosystems
Future of Operations
Future of Work
Future of Intelligence
Future of Digital Infrastructure
Future of Connectedness
Future of Customer Experience
Future of Digital Innovation
Future Enterprise of the Year Award

To celebrate the innovative works of individuals and organisations, the Future Enterprise Awards also have these categories:

CEO of the Year
CIO/CDO of the Year
Special Award for Digital Resiliency
Special Award for Sustainability

This year, to recognise outstanding organisations born in the digital-native era and smart cities projects, IDC will also hand out Special Awards for:

Digital Native Business
Smart Cities – Best in Connected City
Smart Cities – Best in Digital Policies
Smart Cities – Best in Citizen Wellbeing

The Future Enterprise Awards will also serve as a forum for sharing smart cities’ best practices to aid and accelerate development in APJ. As smart cities catalyse the digital transformation of urban ecosystems towards systemic environmental, financial, and social outcomes, they tap into emerging technologies and innovation to make cities more liveable, while offering new services and economic opportunities.

Nominations are now open for the awards across different regions—APJ, North America, Europe, and the Middle East, Africa, and Turkey—with entries reviewed by a select panel of judges composed of IDC worldwide analysts, industry thought leaders, and members of academia. Each nomination is first evaluated by IDC’s country and regional analysts against a standard assessment framework based on IDC’s Future Enterprise taxonomy. Winners from each country will then qualify for the regional competition.

Visit IDC Future Enterprise Awards to learn more. To submit a nomination, complete this form by 16 June 2023.

As applications and IT services advance, scaling and modernizing data centers while meeting increased performance and security requirements grows more and more challenging. While networking technology has evolved over the past decade to provide higher-performing leaf-spine topologies, the unfortunate reality is that the associated security and services architectures have not kept pace.

To compensate, many organizations use a stateless data center fabric, bolting on network services and applying complex service chaining — an inefficient solution that delivers sub-par results.

Enterprise organizations that want to compete on the same level as the hyperscalers must shift away from legacy architectures and embrace the next-generation data center fabric. This stateful architecture provides integrated infrastructure services required to secure and scale applications while improving performance and manageability.

Aruba and Pensando Systems have come together to create a new category of switches that enables organizations to build hyperscale-like environments in their existing data centers. The Aruba Distributed Services Switch is the game-changing, next-generation data center fabric organizations need to overcome legacy limitations and resolve security and performance issues.

East-West traffic has outgrown current-generation switching technologies

As technologies like edge computing advance, the volume of data requiring processing has surged, prompting enterprise data centers to repurpose traditional firewalls to segment the network. This leads to a number of issues. For example, when a host system in one cabinet needs to interface with another, traffic must be routed to the services host. Unfortunately, this service-chaining approach creates a hair-pinning or trombone effect, resulting in choke points that bog down the network, operations, and performance.

To accommodate traffic and capacity increases, more firewalls must be added, making scaling complex and expensive. The reality is that traditional firewalls designed for North-South traffic fall short when it comes to enforcing East-West traffic policies. Furthermore, traffic from microservices-based applications may never leave a physical host, leading to security blind spots that leave an organization vulnerable to threats.

The next-generation data center fabric: Aruba CX 10000 Series Switch

As East-West traffic continues to expand, organizations must rethink how traffic gets handled in the data center. And that’s what the folks at Aruba and Pensando have done. The Aruba CX 10000 Series Switch combines Aruba data center L2/3 switching technology with the Pensando Elba DPU.

Putting the Pensando Elba DPU on the switch itself addresses the shortcomings of the traditional data center by eliminating the need for manual service chaining. As the industry’s only programmable DPU, the Pensando Elba creates a centralized policy enforcement point and delivers a distributed stateful firewall for East-West traffic. The result is the industry’s only solution of its kind, completely revolutionizing data center fabric and delivering the improved security, performance, and scalability of hyperscalers at a fraction of the cost.
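To make the idea of distributed, stateful East-West policy enforcement more concrete, here is a minimal illustrative sketch in Python. It is not Aruba’s or Pensando’s policy model or API; the segment names, rule format, and flow fields are hypothetical, and real enforcement happens in switch hardware rather than application code.

```python
# Minimal illustrative sketch of distributed, stateful East-West policy
# enforcement. NOT the Aruba/Pensando policy model or API; segment names,
# the rule format, and the Flow fields are hypothetical and simplified.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_segment: str   # e.g. "web", "app", "db"
    dst_segment: str
    dst_port: int

class StatefulSegmentFirewall:
    def __init__(self, allow_rules):
        # allow_rules: set of (src_segment, dst_segment, dst_port) tuples
        self.allow_rules = set(allow_rules)
        self.established = set()   # flows already permitted

    def permit(self, flow: Flow) -> bool:
        # Return traffic for an established flow is allowed statefully
        # (simplified: we ignore source ports and protocol details).
        reverse = Flow(flow.dst_segment, flow.src_segment, flow.dst_port)
        if reverse in self.established:
            return True
        # Otherwise the flow must match an explicit allow rule; anything
        # not explicitly allowed is denied (zero-trust default).
        if (flow.src_segment, flow.dst_segment, flow.dst_port) in self.allow_rules:
            self.established.add(flow)
            return True
        return False

# Example policy: web tier may reach app tier on 8443, app may reach db on 5432.
fw = StatefulSegmentFirewall({("web", "app", 8443), ("app", "db", 5432)})
print(fw.permit(Flow("web", "app", 8443)))   # True: explicit allow rule
print(fw.permit(Flow("web", "db", 5432)))    # False: no direct web-to-db rule
```

The point is simply that every East-West flow must match an explicit allow rule, and return traffic is permitted statefully instead of being hair-pinned through a centralized firewall.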

Benefits of the Aruba CX 10000 include:

Zero-trust segmentation
Micro- and macro-segmentation
Pervasive telemetry
Traffic flow optimization
Increased bandwidth and performance
Improved operational efficiency and scalability
Accelerated provisioning
Substantially reduced capex and opex costs

Streamline and secure your data center with Aruba CX 10000

What’s great about the Aruba CX 10000 is that it allows organizations to use their existing data center architecture. It provides a single pane of glass to manage everything in one place, simplifying administration, security, provisioning, and scaling. Plus, this cost-effective solution costs a third to half as much as traditional data center switching while delivering far superior performance and ROI.

To learn more about transforming your data center architecture with next-generation data center fabric, contact the experts at GDT.

By Patrick McFadin, DataStax

When the gap between enterprise software development and IT operations was bridged 15 or so years ago, building enterprise apps underwent a radical change. DevOps swept away slow, manual processes and adopted the idea of infrastructure as code, a change that made it far easier to scale quickly and deliver reliable applications and services into production.

Building services internally has been the status quo for a long time, but in a cloud-native world, the lines between cloud and on-prem have blurred. Third-party, cloud-based services built on powerful open source software are making it easier for developers to move faster. Their mandate is to focus on building with innovation and speed to compete in hyper-fast markets. For all application stakeholders—from the CIO to development teams—the path to simplicity, speed, and risk reduction often involves cloud-based services that make data scalable and instantly available.

These points of view aren’t far apart, and they exist at many established organizations that we work with. Yet they can be at odds with one another. In fact, we’ve often seen them work in ways that are counterproductive, to the extent that they slow down application development.

There might be compelling reasons for taking everything in-house, but end users are voting with their actions. Here, we’ll look at the point of view of each group and try to understand each one’s motivations. It’s not a zero-sum game, and the real answer might be the right combination of the two.

Building services

Infrastructure engineers build the machine. They are the ones who stay up late, tend to the ailing infrastructure, and keep the lights on in the company. Adam Jacob (the co-founder and former CTO of Chef Software) famously said, “It’s the job of ops people to keep the shit-tastic code of developers out of your beautiful production infrastructure.” If you want to bring your project or product into the sacred grounds of what they’ve built, it has to be worthy. Infrastructure engineers will evaluate, test, and bestow their blessing only after they believe it themselves.

Tenets of the infrastructure engineer include the following:

Every deployment is different and requires qualified infrastructure engineers to ensure success.
Applications are built on requirements, and infrastructure engineers deliver the right product to fit the criteria.
The most cost-effective way to use the cloud is to do it ourselves.

What infrastructure engineers care about

Documentation and training

Having a clear understanding of every aspect of infrastructure is key to making it work well, so thorough and clear documentation is a must. It also has to be up to date; as new versions of products are released, documentation should bring everyone up to speed on what’s changed.

Version numbers

Products need to be tested and validated before going into production, so infrastructure teams track which versions are blessed for production; updates must be tested too. A critical part of testing is security, and we generally stay a step behind the cutting edge so that we have the most stability and security.

Performance

Performance is critical, too. Our teams have to understand how the system works in various environments to plan adequate capacity. Systems with highly variable performance characteristics – or those that don’t meet the minimum – will never get deployed. New products must prove themselves in a trial by combat before even being considered.

Using services

Installing and running infrastructure is friction when building applications. Nothing is more important than the speed of putting an application into production. Operational teams love the nuances of how things work and take pride in running a well-oiled machine, but developers don’t have months to wait for that to happen. Winning against competitors means renting what’s needed, when it’s needed. Give us an API and a key, and let us run.

When it comes to infrastructure, developer tenets include:

Infrastructure has to conform to the app and not the other way around.
Don’t invent new infrastructure—just combine what’s available.
Consume compute, network, and storage like any other utility.

Things service consumers care about

Does it fit what I need, and can I verify that quickly?

The app is the center of the developer’s universe, and what it needs is the requirement. If the service being considered meets the criteria, this needs to be verified quickly. If a lot of time is spent bending and twisting an app to make a service work, developers will just look for a different service that works better.

Cost

Developers want the lowest cost for what they get, and nothing so complicated that a spreadsheet is required to understand the pricing. With services, developers don’t necessarily believe in “you get what you pay for,” with more expensive being better. Instead, they expect the cost to decrease over time as the service provider finds efficiencies.

Availability

Developers expect a service to always work, and when it doesn’t, they get annoyed (like when the electricity goes out). Even if there is an SLA, most probably won’t read it—and will expect 100% uptime. When building my app, I assume there will be no downtime.
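A quick, illustrative bit of arithmetic shows why the gap between a written SLA and the 100% uptime developers assume actually matters; the figures below are generic and not tied to any particular provider.

```python
# Illustrative only: translate an availability SLA into allowed downtime.
def allowed_downtime_minutes(sla_percent: float, days: int = 30) -> float:
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime -> {allowed_downtime_minutes(sla):.1f} min of downtime per 30 days")
# 99.0%  -> 432.0 min (about 7.2 hours)
# 99.9%  ->  43.2 min
# 99.99% ->   4.3 min
```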

In the end, the app matters most

From working with a lot of organizations for whom applications are mission-critical, we’ve often seen that these two groups don’t work particularly well together—at times, their respective approaches can even be counterproductive. This friction can slow application production significantly, and even hamper an organization’s journey to the cloud.

This friction can manifest itself in several ways. For instance, a reliance on home-grown infrastructure can limit the ways that developers access the data required to build applications. This can limit innovation and introduce complexity to the development process.

And sometimes balancing cloud services with purpose-built solutions can actually create complexities and increase costs by watering down expected savings from moving to the cloud.

Application development and delivery is cost sensitive, but it requires speed and efficiency. Anything that gets in the way can lead to a dulled competitive edge, and even lost revenue.

Yet we also know of organizations that have intelligently combined the efforts of infrastructure engineers, who run your mission-critical apps today, and those who use services to build them. When the perspective and skills of each group are put to good use, flexibility, cost-efficiency, and speed can result.

Many successful organizations today are implementing a hybrid of the two (for now): some bespoke infrastructure mixed with services rented from a provider. Several organizations are leveraging Kubernetes in this quest for the grand unified theory of infrastructure. In a single deployment model, some blocks create pods and service endpoints, while other blocks simply describe external endpoints consumed on a pay-per-use basis. If you are using any cloud with Kubernetes, think storage and network services.
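As a hypothetical sketch of that mix, the Python snippet below emits two Kubernetes-style blocks: a Deployment that creates pods you operate yourself, and an ExternalName Service that simply points at a rented, pay-per-use database endpoint. The names, image, and external hostname are made up.

```python
# Hypothetical sketch of the hybrid pattern described above: one block that
# creates pods you run yourself, and one block that merely names a rented,
# pay-per-use endpoint outside the cluster. Names and images are made up.
import yaml  # PyYAML

# Block 1: bespoke infrastructure -- a Deployment the cluster schedules as pods.
app_deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "orders-api"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "orders-api"}},
        "template": {
            "metadata": {"labels": {"app": "orders-api"}},
            "spec": {"containers": [{"name": "orders-api",
                                     "image": "registry.example.com/orders-api:1.4.2",
                                     "ports": [{"containerPort": 8080}]}]},
        },
    },
}

# Block 2: rented service -- an ExternalName Service that resolves to a managed,
# pay-per-use database endpoint instead of creating any pods.
managed_db_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "orders-db"},
    "spec": {"type": "ExternalName",
             "externalName": "orders-db.example-dbaas-provider.com"},
}

print(yaml.dump_all([app_deployment, managed_db_service], sort_keys=False))
```

From the application’s point of view both resolve like in-cluster services; only the second is metered like a utility.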

There are other important elements to an organization’s universe of services — whether they’re built or bought. Standard APIs are the de facto method of serving data to applications — and reduce time to market by simplifying development. SLAs — customer and internal alike — also clearly delineate scale and other performance expectations — so developers don’t have to.

Finally, I should point out that this is an immediate challenge in the world of open source data where I live. I work with Apache Cassandra®—software you can download and deploy in your own datacenter for free; free as in beer and free as in freedom. I also work on the K8ssandra project, which helps builders provide Cassandra as a service for their customers using Kubernetes. And DataStax, the company I work for, offers Astra DB built on Cassandra, which is a simple service for developers with no operations needed. I understand the various points of view—and I’m glad there’s a choice.

Learn more about DataStax here.

About Patrick McFadin:

Patrick is the co-author of the O’Reilly book “Managing Cloud Native Data on Kubernetes.” He works at DataStax in developer relations and as a contributor to the Apache Cassandra project. Previously, he worked as an engineering and architecture lead for various internet companies.

As one of the world’s largest biopharmaceutical companies, AstraZeneca pushes the boundaries of science to deliver life-changing medicines that create enduring value for patients and society. To accelerate growth through innovation, the company is expanding its use of data science and artificial intelligence (AI) across the business to improve patient outcomes. 

AstraZeneca has been on a multiyear journey to transform its scientific capabilities to enhance its understanding of disease, design next-generation therapeutics, pioneer new clinical approaches, and better predict clinical success. For example, as part of its efforts to unlock different human genomes, AstraZeneca is working toward the analysis of up to 2 million individual genomes by 2026. This initiative alone has generated an explosion in the quantity and complexity of data the company collects, stores, and analyzes for insights. 

“We needed a new approach to manage and analyze that data to accelerate the delivery of life-changing medicines for patients,” said Gurinder Kaur, Vice President of Operations IT at AstraZeneca. 

The new approach involved federating its vast and globally dispersed data repositories in the cloud with Amazon Web Services (AWS).  Unifying its data within a centralized architecture allows AstraZeneca’s researchers to easily tag, search, share, transform, analyze, and govern petabytes of information at a scale unthinkable a decade ago. 

What began as an initiative focused on R&D now has extended to the company’s three other major business units: Commercial, Operations, and Clinical, according to Kaur. The goal, she explained, is to knock down data silos between those groups, using multiple data lakes supported by strong security and governance, to drive positive impact across the supply chain, manufacturing, and the clinical trials of new drugs. 

“Our ambition is finding a way to take these amazing capabilities we’ve built in different areas and connect them, using AI and machine learning, to drive huge scale across the ecosystem,” Kaur said. “Beyond R&D, we see value in extracting insights from data sources to improve patient outcomes and deliver personalized medicines.”

The cloud-based platform allows AstraZeneca scientists to move from ideas to insights faster, accelerating both drug discovery and clinical trials, to improve patient outcomes.

Moving from ideas to insights faster

AWS’s expertise with scaling cloud services was invaluable in helping AstraZeneca build an end-to-end machine learning platform, called AI Bench, to make it easier to apply machine learning across the enterprise. “AI Bench is a set of automated tools and guardrails that help us spin up the right environments in an automated fashion, so our data scientists can quickly begin working in a safe, secure environment while ensuring regulatory compliance,” said Brian Dummann, AstraZeneca’s Vice President of Insights & Technology Excellence. “Before AI Bench, every data science project was like a separate IT project. We would spend weeks getting the right environment in place.”

Built on Amazon SageMaker, a service to build, train, and deploy ML models, AI Bench has accelerated the pace of innovation and reduced the barrier of entry for machine learning across AstraZeneca.  
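AI Bench itself is AstraZeneca’s internal platform, so the sketch below is only a generic illustration of what launching a managed training job with the SageMaker Python SDK typically looks like; the container image, IAM role, and S3 paths are placeholders.

```python
# Generic SageMaker training-job sketch (not AstraZeneca's AI Bench).
# The role ARN, image URI, and S3 paths below are placeholders.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-training-image:latest",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/model-artifacts/",
    sagemaker_session=session,
)

# Kick off a managed training job; SageMaker provisions and tears down the
# compute, so data scientists never have to stand up their own environment.
estimator.fit({"train": "s3://my-bucket/training-data/"})
```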

“We have reduced the lead time to start a machine learning project from months to hours,” Kaur said. “This allows engineers and data scientists to go from idea to insight quickly, delivering meaningful impact. Modern technology solutions provide our data science teams with fingertip access to synchronized information and data sets, allowing rapid re-use of models to ultimately accelerate outcomes and delivery for our patients.”

Accelerating drug discovery and clinical trials

More quickly moving from ideas to insights has aided new drug development and the clinical trials used for testing new products. AstraZeneca’s ability to quickly spin up new analytics capabilities using AI Bench was put to the ultimate test in early 2020 as the global pandemic took hold. 

“When Covid first appeared, we knew we had to step up quickly with our pandemic response,” Dummann said. “We were able to establish validated environments within 24 hours to begin working on evaluating Covid. This would have taken weeks or even months without the work we had already done to build out AI Bench.”

AstraZeneca’s increased investment in the cloud and AI capabilities offers the potential for a similar impact on clinical trials.  “Clinical trials currently account for 60% of the cost and 70% of the time it takes to bring a potential new drug to market[1],” said Kaur. “AI and machine learning are helping us optimize that process and reduce the time it takes. The quicker we can complete clinical trials, the quicker we can get new medicines to patients.”

Four ways to improve data-driven business transformation 

Kaur and Dummann offered four pieces of advice to other IT leaders looking to get more value from their data transformation activities: 

Start small, think big, and scale fast. “You always need to have the big picture and vision in mind, but you don’t have to develop that picture right out of the gate,” Kaur said. Instead, focus on getting solutions out quickly, testing and improving them, and then scale them out across the company. The ability to scale also means promoting the re-use of data products where possible. “We want to maximize our investment in AI,” said Kaur. “We don’t want to keep reinventing the wheel, and we want our data scientists to be able to re-use AI assets across the enterprise.” AstraZeneca’s data scientists have launched more than 100 AI projects, and the number continues to grow.

Build internal expertise and understanding. Data-driven transformation is as much about people and process as it is about data and technology. To succeed, you need to get people to believe in the value of the transformation and show them a clear path to get there. “Attracting and retaining some of the best data scientists in the world has been critical to unlocking the value of data,” said Dummann. “So a big part for us is focusing on improving the experience of the data scientists. We’re keen to democratize data projects so that data scientists can get on with their daily tasks without reliance on IT. We don’t want to make them wait for weeks or days to get their work done.”

Modernize your approach to data and technology. “Data is an asset, and it needs to be treated as such,” said Dummann. It’s critical to ensure the integrity of the data for AI and machine learning models to work effectively. For the broader technology architecture, Dummann suggests moving away from best-of-breed point solutions. Instead, “invest in a few big, critical capabilities to really get the scale and speed you need.”

Don’t be afraid to fail. “There are multiple ways to solve a problem,” said Kaur. “Adjust as you go.”

Through its commitment to the cloud, data, AI, and machine learning, AstraZeneca is seeing its pace of innovation increase – and is eager to see where the journey leads. 

“Our data science community is moving faster than ever before, harnessing the power of data and AI to help discover new drugs, accelerate clinical studies and regulatory approvals, and maximize impact on patient lives,” says Dummann. “It’s an exciting time to be at AstraZeneca!” 

Learn more about ways to put your data to work on the most scalable, trusted, and secure cloud. 

[1] Clinical Development Success Rates 2006–2015, BIO, Biomedtracker, Amplion, 2016.

Every organization pursuing digital transformation needs to optimize IT from edge to cloud to move faster and speed time to innovation. But the devil’s in the details. Each proposed IT infrastructure purchase presents decision-makers with difficult questions. What’s the right infrastructure configuration to meet our service level agreements (SLAs)? Where should we modernize — on-premises or in the cloud? And how do we demonstrate ROI in order to proceed?

There are no easy, straightforward answers. Every organization is at a different stage in the transformation journey, and each one faces unique challenges. The conventional approach to IT purchasing decisions has been overwhelmingly manual: looking through spreadsheets, applying heuristics, and trying to understand all the complex dependencies of workloads on underlying infrastructure.

Partners and sellers are similarly constrained. They must provide a unique solution for each customer with little to no visibility into a prospect’s IT environment. This has created an IT infrastructure planning and buying process that is inaccurate, time-consuming, wasteful, and inherently risky from the perspective of meeting SLAs.

Smarter solutions make for smarter IT decisions

It’s time to discard legacy processes and reinvent IT procurement with a new approach that leverages the power of data-driven insights. For IT decision makers and their partners and sellers, a modern approach involves three essential steps to optimize procurement — and accelerate digital transformation:

1. Understand your VM needs

Before investing in infrastructure modernization, it’s critical to get a handle on your current workloads. After all, you must have a clear understanding of what you already have before deciding on what you need. To reach that understanding, enterprises, partners, and sellers should be able to collect and analyze fine-grained resource utilization data per virtual machine (VM) — and then leverage those insights to precisely determine the resources each VM needs to perform its job.

Why is this so important? VM admins often select from a menu of different sized VM templates when they provision a workload. They typically do so without access to utilization data, which can lead to degraded performance if a workload is under-provisioned, or to oversubscribed hosts and wasted capacity if they choose an oversized template. It’s essential to right-size your infrastructure plan before proceeding.
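As a rough illustration of that kind of data-driven right-sizing (not HPE CloudPhysics’ actual algorithm), the sketch below sizes a VM to its 95th-percentile observed usage plus headroom; the utilization samples and thresholds are hypothetical.

```python
# Illustrative right-sizing sketch (not HPE CloudPhysics' actual method).
# Size each VM to its 95th-percentile observed usage plus headroom, rather
# than to the template it happened to be provisioned from.
import math

def percentile(samples, q):
    s = sorted(samples)
    return s[min(len(s) - 1, int(q * len(s)))]

def right_size(vcpu_usage, mem_usage_gb, headroom=1.25):
    """Recommend capacity from observed per-VM utilization samples."""
    return {
        "vcpus": math.ceil(percentile(vcpu_usage, 0.95) * headroom),
        "mem_gb": math.ceil(percentile(mem_usage_gb, 0.95) * headroom),
    }

# Hypothetical samples collected every 5 minutes for one VM
# (vCPUs busy, GB of memory in use).
vcpu = [0.8, 1.1, 0.9, 2.4, 1.0, 1.2, 0.7, 2.6, 1.1, 0.9]
mem = [5.2, 5.4, 5.1, 6.9, 5.3, 5.5, 5.0, 7.1, 5.2, 5.3]
print(right_size(vcpu, mem))
# -> {'vcpus': 4, 'mem_gb': 9} for a VM provisioned from an 8-vCPU / 32 GB template
```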

2. Model and price infrastructure with accuracy

Any infrastructure purchase requires a budget, or at least an understanding of how much money you intend to spend. To build that budget, an ideal IT procurement solution provides an overview of your inventory, including aggregate information on storage, compute, virtual resource allocation, and configuration details. It would also provide a simulator for on-premises IT that includes the ability to input your actual costs of storage, hosts, and memory. Bonus points for the ability to customize your estimate with depreciation term, as well as options for third-party licensing and hypervisor and environmental costs.

Taken together, these capabilities will tell you how much money you’re spending to meet your needs — and help you to avoid overpaying for infrastructure.
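A back-of-the-envelope model along these lines might look like the sketch below; every price, term, and overhead figure is a made-up placeholder, but it shows how depreciation, licensing, and facility costs roll up into a monthly number.

```python
# Back-of-the-envelope on-prem cost model of the kind described above.
# Every figure here is a hypothetical placeholder, not real pricing.
def monthly_on_prem_cost(host_price, hosts, storage_price_per_tb, storage_tb,
                         depreciation_years=5, licensing_per_host_year=1200.0,
                         power_cooling_per_host_month=90.0):
    # Capital spend amortized over the depreciation term.
    capex_per_month = (host_price * hosts + storage_price_per_tb * storage_tb) \
                      / (depreciation_years * 12)
    # Recurring costs: third-party licensing plus power and cooling.
    opex_per_month = hosts * (licensing_per_host_year / 12 + power_cooling_per_host_month)
    return capex_per_month + opex_per_month

# Prints the blended monthly cost for this placeholder configuration.
cost = monthly_on_prem_cost(host_price=18000, hosts=12,
                            storage_price_per_tb=250, storage_tb=400)
print(f"Estimated on-prem run rate: ${cost:,.0f}/month")
```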

3. Optimize workloads across public and private clouds

Many IT decision makers wonder about the true cost of running particular applications in the public cloud versus keeping them on-premises. Public cloud costs often start out attractively low but can increase precipitously as usage and data volumes grow. As a result, it’s vital to have a clear understanding of cost before deciding where workloads will live. A complete cost estimate involves identifying the ideal configurations for compute, memory, storage, and network when moving apps and data to the cloud.

To do this, your organization and your partners and sellers need a procurement solution that can map your entire infrastructure against current pricing and configuration options from leading cloud providers. This enables you to make quick, easy, data-driven decisions about the costs of running applications in the cloud based on the actual resource needs of your VMs.

And, since you’ve already right-sized your infrastructure (step 1), you won’t have to worry about moving idle resources to the cloud and paying for capacity you don’t need.
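As a simplified illustration of that mapping exercise, the sketch below matches each right-sized VM to the cheapest catalog instance that fits and totals a monthly run rate; the instance names, prices, and VM profiles are invented and bear no relation to any provider’s real pricing.

```python
# Hypothetical sketch of mapping right-sized VMs onto a cloud catalog.
# Instance names and hourly prices are made up, not real provider pricing.
CLOUD_CATALOG = [            # (name, vCPUs, RAM GB, $/hour)
    ("gp.small",  2,  8, 0.085),
    ("gp.medium", 4, 16, 0.170),
    ("gp.large",  8, 32, 0.340),
]

def cheapest_instance(vcpu_needed, ram_needed_gb):
    # Pick the lowest-cost instance type that satisfies both requirements.
    fits = [i for i in CLOUD_CATALOG if i[1] >= vcpu_needed and i[2] >= ram_needed_gb]
    return min(fits, key=lambda i: i[3]) if fits else None

def monthly_cloud_cost(vms):
    total = 0.0
    for vcpu, ram in vms:                      # right-sized needs per VM
        name, _, _, hourly = cheapest_instance(vcpu, ram)
        total += hourly * 730                  # roughly 730 hours per month
    return total

right_sized_vms = [(2, 6), (4, 12), (2, 8)]    # hypothetical output of step 1
print(f"Estimated cloud run rate: ${monthly_cloud_cost(right_sized_vms):,.0f}/month")
```

Comparing that figure with the on-prem estimate from step 2 is what turns the "cloud or on-prem" question into a data-driven decision rather than a guess.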

HPE leads the way in modern IT procurement

HPE has transformed the IT purchasing experience with a simple procurement solution delivered as a service: HPE CloudPhysics. Part of the HPE GreenLake edge-to-cloud platform, HPE CloudPhysics continuously monitors and analyzes your IT infrastructure, models that infrastructure as a virtual environment, and provides cost estimates of cloud migrations. Since it’s SaaS, there’s no hardware or software to deal with — and no future maintenance.

HPE CloudPhysics is powered by some of the most granular data capture in the industry, with over 200 metrics for VMs, hosts, data stores, and networks. With insights and visibility from HPE CloudPhysics, you and your sellers and partners can seamlessly collaborate to right-size infrastructure, optimize application workload placement, and lower costs. Installation takes just minutes, with insights generated in as little as 15 minutes.

Across industries, HPE CloudPhysics has already collected more than 200 trillion data samples from more than one million VM instances worldwide. With well over 4,500 infrastructure assessments completed, HPE CloudPhysics already has a proven record of significantly increasing the ROI of infrastructure investments.

This is the kind of game-changing solution you’re going to need to transform your planning and purchasing experience — and power your digital transformation.

____________________________________

About Jenna Colleran

Jenna Colleran is a Worldwide Product Marketing Manager at HPE. With over six years in the storage industry, Jenna has worked in primary storage and cloud storage, most recently in cloud data and infrastructure services. She holds a Bachelor of Arts degree from the University of Connecticut.

The software supply chain is, as most of us know by now, both a blessing and a curse.

It’s an amazing, labyrinthine, complex (some would call it messy) network of components that, when it works as designed and intended, delivers the magical conveniences and advantages of modern life: information and connections from around the world plus unlimited music, videos, and other entertainment, all in our pockets. Vehicles with lane assist and accident avoidance. Home security systems. Smart traffic systems. And on and on.

But when one or more of those components has defects that can be exploited by criminals, it can be risky and dangerous. It puts the entire chain in jeopardy. You know — the weakest link syndrome. Software vulnerabilities can be exploited to disrupt the distribution of fuel or food. They can be leveraged to steal identities, empty bank accounts, loot intellectual property, spy on a nation, and even attack a nation.

So the security of every link in the software supply chain is important — important enough to have made it into a portion of President Joe Biden’s May 2021 executive order, “Improving the Nation’s Cybersecurity” (also known as EO 14028).

It’s also important enough to have been one of the primary topics of discussion at the 2022 RSA Conference in San Francisco. Among dozens of presentations on the topic at the conference was “Software supply chain: The challenges, risks, and strategies for success” by Tim Mackey, principal security strategist within the Synopsys Cybersecurity Research Center (CyRC).

Challenges and risks

The challenges and risks are abundant. For starters, too many organizations don’t always vet the software components they buy or pull from the internet. Mackey noted that while some companies do a thorough background check on vendors before they buy — covering everything from the executive team, financials, ethics, product quality, and other factors to generate a vendor risk-assessment score — that isn’t the norm.

“The rest of the world is coming through, effectively, an unmanaged procurement process,” he said. “In fact, developers love that they can just download anything from the internet and bring it into their code.”

While there may be some regulatory or compliance requirements on those developers, “they typically aren’t there from the security perspective,” Mackey said. “So once you’ve decided that, say, an Apache license is an appropriate thing to use within an organization, whether there are any unpatched CVEs [Common Vulnerabilities and Exposures] associated with anything with an Apache license, that’s somebody else’s problem. There’s a lot of things that fall into the category of somebody else’s problem.”

Then there’s the fact that the large majority of the software in use today — nearly 80% — is open source, as documented by the annual “Open Source Security and Risk Analysis” (OSSRA) report by the Synopsys CyRC.

Open source software is no more or less secure than commercial or proprietary software and is hugely popular for good reasons — it’s usually free and can be customized to do whatever a user wants, within certain licensing restrictions.

But, as Mackey noted, open source software is generally made by volunteer communities — sometimes very small communities — and those involved may eventually lose interest or be unable to maintain a project. That means if vulnerabilities get discovered, they won’t necessarily get fixed.

And even when patches are created to fix vulnerabilities, they don’t get “pushed” to users. Users must “pull” them from a repository. So if they don’t know they’re using a vulnerable component in their software supply chain, they won’t know they need to pull in a patch, leaving them exposed. The infamous Log4Shell group of vulnerabilities in the open source Apache logging library Log4j is one of the most recent examples of that.

Keeping track isn’t enough

To manage that risk requires some serious effort. Simply keeping track of the components in a software product can get very complicated very quickly. Mackey told of a simple app he created that had eight declared “dependencies” — components necessary to make the app do what the developer wants it to do. But one of those eight had 15 dependencies of its own. And one of those 15 had another 30. By the time he got several levels deep, there were 133 — for just one relatively simple app.

Also, within those 133 dependencies were “multiple instances of code that had explicit end-of-life statements associated with them,” he said. That means it was no longer going to be maintained or updated.
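To see how quickly declared dependencies balloon, here is a small illustrative walk over a made-up dependency graph; the package names are invented, but the transitive explosion is exactly the effect Mackey describes.

```python
# Illustrative sketch: why a handful of declared dependencies can balloon
# once you follow transitive dependencies. The graph below is made up.
from collections import deque

DEPENDENCY_GRAPH = {
    "my-app":          ["web-framework", "logging-lib", "json-lib"],
    "web-framework":   ["http-core", "template-engine"],
    "http-core":       ["tls-lib", "compression-lib"],
    "logging-lib":     [],
    "json-lib":        ["unicode-helpers"],
    "template-engine": ["unicode-helpers"],
    "tls-lib": [], "compression-lib": [], "unicode-helpers": [],
}

def transitive_dependencies(root):
    """Breadth-first walk that returns every component the root pulls in."""
    seen, queue = set(), deque(DEPENDENCY_GRAPH.get(root, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(DEPENDENCY_GRAPH.get(dep, []))
    return seen

deps = transitive_dependencies("my-app")
print(f"{len(deps)} components ride along with 'my-app': {sorted(deps)}")
# 3 declared dependencies turn into 8 components once transitive dependencies are counted.
```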

And simply keeping track of components is not enough. There are other questions organizations should be asking themselves, according to Mackey. They include: Do you have secure development environments? Are you able to bring your supply chain back to integrity? Do you regularly test for vulnerabilities and remediate them?

“This is very detailed stuff,” he said, adding still more questions. Do you understand your code provenance and what the controls are? Are you providing a software Bill of Materials (SBOM) for every single product you’re creating? “I can all but guarantee that the majority of people on this [conference] show floor are not doing that today,” he said.
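For readers who have never produced one, an SBOM is ultimately just structured metadata about components. The fragment below sketches a heavily trimmed, CycloneDX-style document with made-up component data; real SBOMs carry many more fields per component.

```python
# A minimal, made-up SBOM fragment in the spirit of the CycloneDX JSON format
# (heavily trimmed for illustration; component data here is invented).
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {"type": "library", "name": "log4j-core", "version": "2.17.2",
         "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.2"},
        {"type": "library", "name": "jquery", "version": "3.6.4",
         "purl": "pkg:npm/jquery@3.6.4"},
    ],
}
print(json.dumps(sbom, indent=2))
```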

But if organizations want to sell software products to the U.S. government, these are things they need to start doing. “The contract clauses for the U.S. government are in the process of being rewritten,” he said. “That means any of you who are producing software that is going to be consumed by the government need to pay attention to this. And it’s a moving target — you may not be able to sell to the U.S. government the way that you’re used to doing it.”

Even SBOMs, while useful and necessary — and a hot topic in software supply chain security — are not enough, Mackey said.

Coordinated efforts

“Supply chain risk management (SCRM) is really about a set of coordinated efforts within an organization to identify, monitor, and detect what’s going on. And it includes the software you create as well as acquire, because even though it might be free, it still needs to go through the same process,” he said.

Among those coordinated efforts is the need to deal with code components such as libraries within the supply chain that are deprecated — no longer being maintained. Mackey said developers who aren’t aware of that will frequently send “pull requests” asking when the next update on a library is coming.

And if there is a reply at all, it’s that the component is end-of-life, has been for some time, and the only thing to do is move to another library.

“But what if everything depends on it?” he said. “This is a perfect example of the types of problems we’re going to run into as we start managing software supply chains.”

Another problem is that developers don’t even know about some dependencies they’re pulling into a software project, and whether those might have vulnerabilities.

“The OSSRA report found that the top framework with vulnerabilities last year was jQuery [a JavaScript library]. Nobody decides to use jQuery; it comes along for the ride,” he said, adding that the same is true of others, including Lodash (a JavaScript library) and Spring Framework (an application framework and inversion of control container for the Java platform). “They all come along for the ride,” he said. “They’re not part of any monitoring. They’re not getting patched because people simply don’t know about them.”

Building trust

There are multiple other necessary activities within SCRM that, collectively, are intended to make it much more likely that a software product can be trusted. Many of them are contained in the guidance on software supply chain security issued in early May by the National Institute of Standards and Technology in response to the Biden EO.

Mackey said this means that organizations will need their “procurement teams to be working with the government’s team to define what the security requirements are. Those requirements are then going to inform what the IT team is going to do — what a secure deployment means. So when somebody buys something you have that information going into procurement for validation.”

“A provider needs to be able to explain what their SBOM is and where they got their code because that’s where the patches need to come from,” he said.

Finally, Mackey said the biggest threat is the tendency to assume that if something is secure at one point in time, it will always be secure.

“We love to put check boxes beside things — move them to the done column and leave them there,” he said. “The biggest threat we have is that someone’s going to exploit the fact that we have a check mark on something that is in fact a dynamic something — not a static something that deserves a check mark. That’s the real world. It’s messy — really messy.”

How prepared are software vendors to implement the security measures that will eventually be required of them? Mackey said he has seen reports showing that for some of those measures, the percentage is as high as 44%. “But around 18% is more typical,” he said. “People are getting a little bit of the message, but we’re not quite there yet.”

So for those who want to sell to the government, it’s time to up their SCRM game. “The clock is ticking,” Mackey said.

Click here to find more Synopsys content about securing your software supply chain.
