CIOs and IT leaders call it the most disruptive technology yet, and now it’s moving rapidly into the mainstream. Artificial intelligence (AI) has arrived as an increasingly crucial piece of the technology landscape, with more than 91 percent of businesses surveyed reporting ongoing, and increasing, investments in AI.

Deploying AI workloads at speed and scale, however, requires software and hardware working in tandem across data centers and edge locations. Foundational IT infrastructure, such as GPU- and CPU-based processors, must deliver major leaps in capacity and performance to run AI efficiently. Without that performance, AI workloads can take months or even years to run; with it, organizations can accelerate AI advancements.

Dell Technologies’ recent hardware and software developments are designed to do just that: advance AI. More specifically, next-gen offerings from Dell Technologies deliver 8-10x performance improvements according to MLCommons® MLPerf™ benchmarks. The upgraded Dell Technologies solution portfolio includes a range of GPU-optimized servers for AI training and CPU-powered servers for enterprise-wide AI inferencing, both of which are essential, co-existing elements of AI deployment.

MLCommons MLPerf Results

Benchmarking used MLPerf Inference v3.0, the latest release from MLCommons; the most recent results are shown here. Benchmarks include categories such as image classification, object detection, natural language processing, speech recognition, recommender systems and medical image segmentation.
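
For context on what these benchmarks measure: MLPerf Inference reports throughput for offline, batch-oriented scenarios and latency for single-query scenarios. The sketch below is an illustrative, unofficial stand-in, assuming PyTorch and a placeholder model rather than the official MLPerf LoadGen harness, to show the kind of throughput and tail-latency measurements such benchmarks involve.

```python
# Illustrative only: toy measurements in the spirit of MLPerf Inference's
# Offline and SingleStream scenarios. This is NOT the official LoadGen
# harness; the model, input shapes, and batch sizes are placeholders.
import time
import torch

model = torch.nn.Sequential(          # stand-in for a real vision/NLP model
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 1000),
).eval()

@torch.no_grad()
def offline_throughput(batch_size=32, batches=20):
    """Samples per second over a fixed number of batches (Offline-style)."""
    x = torch.randn(batch_size, 3, 224, 224)
    start = time.perf_counter()
    for _ in range(batches):
        model(x)
    elapsed = time.perf_counter() - start
    return batch_size * batches / elapsed

@torch.no_grad()
def single_stream_latency(queries=100):
    """Approximate 90th-percentile per-query latency in ms (SingleStream-style)."""
    x = torch.randn(1, 3, 224, 224)
    latencies = []
    for _ in range(queries):
        t0 = time.perf_counter()
        model(x)
        latencies.append((time.perf_counter() - t0) * 1000)
    latencies.sort()
    return latencies[int(0.9 * len(latencies))]

print(f"Offline throughput: {offline_throughput():.1f} samples/s")
print(f"SingleStream p90 latency: {single_stream_latency():.2f} ms")
```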

While the inference benchmark rules did not change significantly, Dell Technologies expanded its submission with the new generation of Dell PowerEdge servers, including the new PowerEdge XE9680, XR7620, and XR5610 servers, and with new accelerators from its partners. Submissions included configurations with VMware running NVIDIA AI Enterprise software on NVIDIA accelerators, as well as Intel-based CPU-only results.

The results for Dell Technologies’ next-gen processors are extraordinary for the highly demanding use cases of AI training, generative AI model training and tuning, and AI inferencing. Compared to previous generations of hardware, the results show a significant uptick in performance:

GPU-optimized servers produced an 8-10x improvement in performance.

CPU-powered servers generated a 6-8x improvement in performance.

More detailed results can be seen here.

AI in Action

AI data center and edge deployments require a highly interdependent ecosystem of advanced software and hardware capabilities, including a mix of GPU- and CPU-based processors. Each industry and organization can tailor infrastructure based on its unique needs, preferences and requirements.

Consider, for example, a pharmaceutical company using AI modeling and simulation for drug discovery. Modern drug development depends on chemists finding highly active molecules that also test negative for neurotoxicity. There are trillions of compounds to consider and evaluate. Each search takes almost two months and thousands of dollars, limiting the number of searches and tests that can be conducted. Using AI, simulations can examine many more molecules far faster and at lower cost, opening a new world of possibilities. To accelerate drug discovery (there are thousands of diseases and only hundreds of cures), pharmaceutical companies need powerful processors to handle large and diverse data sets efficiently and effectively.

Retailers typically use AI differently than pharmaceutical companies. Retail use cases often revolve around video imagery used to enhance security, bolster intrusion detection and support self-checkout capabilities. To build out these capabilities, retailers need more powerful GPU-optimized processors to handle image-based data streams.

Advancing AI

Emerging generative AI use cases, such as digital assistants and co-pilots for software development, are appearing as the next frontier of AI. That’s why at Dell Technologies, innovation never rests.

When it comes to technology infrastructure, Dell Technologies and its partners are constantly innovating to reach new performance levels and help redefine what is possible. The exponential performance increase in NVIDIA GPU-optimized servers and the infusion of AI inferencing into Intel® Xeon®-based servers are creating the required AI foundation. With these results, Dell Technologies can help organizations fuel AI transformations precisely and efficiently with new AI training and inferencing software, generative AI models, AI DevOps tools and AI applications.

***

Dell Technologies. To help organizations move forward, Dell Technologies is powering the AI journey, including enterprise generative AI. With best-in-class IT infrastructure and solutions to run AI workloads, plus advisory and support services that map out AI initiatives, Dell is enabling organizations to boost their digital transformation and accelerate intelligent outcomes.

Intel. The compute required for AI models has put a spotlight on performance, cost and energy efficiency as top concerns for enterprises today. Intel’s commitment to the democratization of AI and sustainability will enable broader access to the benefits of AI technology, including generative AI, via an open ecosystem. Intel’s AI hardware accelerators, including new built-in accelerators, provide performance and performance per watt gains to address the escalating performance, price and sustainability needs of AI.

Artificial Intelligence

Moving SAP workloads to the cloud promises to be transformational, but it’s not for the faint of heart. Goals for an ERP modernization initiative often range from lowering costs through infrastructure savings to adding cloud-based capabilities to ERP tasks with minimal disruption to day-to-day business. Achieving these objectives takes perceptive analysis, meticulous planning, and skillful execution.  

“There are many factors to consider, including application complexity, legacy application requirements, data location, and compliance,” says Dilip Mishra, SAP delivery leader for the Cloud Migration and Modernization practice at Kyndryl. Teams must determine which workloads move to the cloud and which will remain on-premises. What’s more, adds Mishra, many organizations are likely to encounter a “long-tail” of interdependencies between applications and infrastructure that requires special expertise to unravel.  

Perhaps most important, the undertaking will not succeed without cooperation between IT and business leaders. “To overcome the perception that from a business perspective, the migration might look like a lot of effort for a little return, IT leaders must communicate the business case for moving each workload,” Mishra says. CIOs and their teams should also consider providing a systematic framework for delivering and measuring the value to the business now and in the future, covering technology, operations, and financials.  

In short, IT leaders can expect curves in the road that only seasoned experts can navigate without mishap. To that end, Kyndryl and AWS have established a partnership with an extensive track record in rehosting and re-platforming SAP workloads on AWS cloud services.  

Schneider Electric’s story  

Schneider Electric’s journey to the cloud began by moving its SAP applications from an outsourcer to a Kyndryl data center. After stabilizing the environment and integrating the operations of numerous acquired companies, Kyndryl optimized the applications and infrastructure while planning the transition to AWS. With the goal of maintaining continuous business operations, Kyndryl mapped out a migration to AWS that accorded the Kyndryl data center an important role in a hybrid cloud architecture.    

“A hybrid environment provides the flexibility of workload placement based on business requirements and provides a smoother transition to cloud because the customer has time to plan and re-engineer without going through a big-bang cutover,” says Naresh Nayar, Kyndryl distinguished engineer.   

Schneider’s internal team designed and built the AWS “landing zone,” a secure environment with strict rules about firewalls, connectivity, and security groups. Kyndryl architected the new operating environment using its framework for cloud operations and provided specifications that AWS and Schneider technical teams used to provision the new infrastructure in the landing zone.  

Schneider Electric’s move shows that a non-disruptive cloud transition is possible with careful planning and a deep portfolio of skills. For such enterprise migrations, experience matters: currently, more than 5,000 SAP customers run on AWS. The AWS portfolio includes AWS Migration Hub, AWS Application Discovery Service, AWS Application Migration Service, AWS Service Catalog, and AWS Database Migration Service. For its part, Kyndryl brings to bear more than three decades of experience and 90,000 skilled practitioners providing IT services at the highest level.

Learn more about how Kyndryl and AWS are innovating to achieve transformational business outcomes for customers.  

ERP Systems

Oracle on Thursday reported third-quarter total revenue of $12.4 billion, up 18% year-on-year, boosted by demand for AI workloads on Oracle Cloud Infrastructure (OCI) and by Cerner’s contribution to the top line.

“So, we have a lot of business, a lot of new AI companies coming to Oracle because we’re the only ones who can run their workloads. And by the way — and we are cheaper. But so we’re faster and we’re cheaper,” Oracle Chairman Larry Ellison said during an earnings call with analysts.

Top Oracle executives claim that the second generation of OCI has a superior architecture and network capability that enables it to run AI workloads faster.

Oracle’s Gen 2 Cloud, according to Ellison, is different from rival hyperscalers’ offerings because it uses a non-blocking remote direct memory access (RDMA) network, which allows two networked computers to exchange data directly between their memories without involving either machine’s CPU.

“What this means is if you’re running a large group of Nvidia GPUs in a cluster doing a large AI problem at Oracle, we can build these AI clusters dynamically. Our standard network supports the large clustering of GPUs and allows them to communicate very quickly. So, we can create these groups of GPUs. We can marshal them together. The other guys can’t do that,” Ellison said during the earnings call, according to a transcript from The Motley Fool.
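
The clustering Ellison describes depends on fast collective communication among GPUs, which is the traffic pattern RDMA-capable fabrics are built to carry. As a hedged illustration only, not Oracle’s implementation, here is a minimal PyTorch sketch of an all-reduce across GPUs using the NCCL backend; the launcher, environment variables, and tensor size are assumptions.

```python
# Illustrative sketch of multi-GPU collective communication (all-reduce),
# the pattern that RDMA-backed cluster networks are built to accelerate.
# Not Oracle-specific; assumes PyTorch with the NCCL backend and a launcher
# such as torchrun setting RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT.
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")          # NCCL handles GPU-to-GPU transport
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    # Each rank holds a shard of gradients; all-reduce sums them across the cluster.
    grads = torch.randn(10_000_000, device="cuda")   # placeholder gradient buffer
    dist.all_reduce(grads, op=dist.ReduceOp.SUM)
    grads /= dist.get_world_size()                   # average across ranks

    if dist.get_rank() == 0:
        print(f"Averaged gradients across {dist.get_world_size()} GPUs")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

In practice, a launcher such as torchrun sets the rank and master-address environment variables the script relies on, for example `torchrun --nnodes=2 --nproc_per_node=8 allreduce_sketch.py` (the script name and node counts are placeholders).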

In contrast, other infrastructure-as-a-service (IaaS) platforms from rival hyperscalers, according to Ellison, must physically build new hardware to support a similar AI cluster.

In October last year, Oracle and Nvidia extended their partnership to help speed customer adoption of artificial intelligence (AI) services.

Continued double-digit growth across services

In line with the last sequential quarter, Oracle continued to see double-digit growth across its cloud services, Fusion applications, and Cerner.

For the third quarter, the company reported total cloud revenue (IaaS and SaaS combined) of $4.1 billion, up 45% year-on-year. Its cloud infrastructure (IaaS) revenue grew 55% to $1.2 billion while cloud application (SaaS) revenue increased 42% to $2.9 billion.

Revenue for Fusion Cloud ERP rose 25% to $0.7 billion, while the company’s NetSuite division’s revenue grew 23% to $0.7 billion.

Although Cerner’s contribution provided a boost to the company’s total revenue, the unit reported revenue of $1.5 billion, the same figure it reported in the last sequential quarter.

Slump in profits

Despite seeing an increase in revenue, the company’s profit declined 18% year-on-year due to rising operating expenses.

Oracle reported a net income of $1.89 billion for the third quarter compared with $2.31 billion for the corresponding period last year.

Operating expenses for the company rose to $9.13 billion for the quarter ended February, compared with $6.69 billion for the same period last year.

AI workloads will be the next growth driver

Chairman Larry Ellison and CEO Safra Catz both believe that the next spurt of growth for the company will come from AI-based workloads as the world embraces generative models such as ChatGPT.

“There’s actually more demand for AI processing than there is available capacity,” Ellison said during the earnings call when asked about opportunities in generative AI.

Oracle, according to Ellison, is all set to grab the AI workload opportunity and plans to increase capacity to meet growing demand.

Artificial Intelligence, Oracle

Enterprises driving toward data-first modernization need to determine the optimal multicloud strategy, starting with which applications and data are best suited to migrate to cloud and what should remain in the core and at the edge.

A hybrid approach has clearly emerged as the operating model of choice. A Flexera report found overwhelming support among survey respondents for the shift to hybrid infrastructure, with 89% opting for a multicloud strategy and 80% taking a hybrid approach that combines public and private clouds.

The shift toward hybrid IT has clear upsides, enabling organizations to choose the right solution for each task and workload, depending on criteria such as performance, security, compliance, and cost, among other factors. The challenge is that CIOs must apply a rigorous process and holistic assessment to determine the optimal data modernization strategy, given that there is no one-size-fits-all answer.

Many organizations set out on the modernization journey guided by the premise that cloud-first or cloud-only is the ultimate destination, only to find that the path is not appropriate for all data and workloads. “Directionally correct CIOs and the C-suite looked at the public cloud and liked the operating model: the pay-as-you-go, predefined services, the automation and orchestration, and the partner ecosystem all available to you,” says Rocco Lavista, worldwide vice president for HPE GreenLake sales and go-to-market. “Many tried to move their whole estate into public cloud, and what they found is that that doesn’t work for everything. It’s less about what application and data should go on public cloud and more about a continuum from the edge to core [in colocated or private data centers] to public cloud.”

Close to the Edge

There are several reasons why certain data and workloads need to remain at the edge, as opposed to transitioning to public cloud. Data gravity is perhaps the most significant arbiter of where to deploy workloads, particularly when there is a need to analyze massive amounts of data quickly — for example, with X-ray or MRI machines in a hospital setting, for quality assurance data from a manufacturing line, and even with data collected at point-of-sale systems in a retail setting. 

Artificial intelligence (AI) projects are another useful example. “Where I’ve seen AI projects fail is in trying to bring the massive amounts of data from where it’s created to the training model [in some public cloud] and get timely insights, versus taking the model and bringing it closer to where the data is created,” Lavista explains. “Here, there is a synergistic need between what is happening at the edge and the processing power required in real time to facilitate your business objectives.” 
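
As a rough, hypothetical illustration of the data-gravity point, the arithmetic below compares moving a day of raw edge data to a cloud training endpoint with shipping a trained model to the edge; the data volumes and link speed are assumptions, not figures from HPE or Lavista.

```python
# Back-of-envelope comparison (hypothetical numbers): moving data to the model
# vs. moving the model to the data.
DATA_PER_DAY_GB = 2_000   # assumed raw sensor/imaging data generated at one site per day
MODEL_SIZE_GB = 0.5       # assumed size of a trained inference model shipped to the edge
WAN_MBPS = 500            # assumed site-to-cloud uplink bandwidth

def transfer_hours(size_gb: float, link_mbps: float) -> float:
    """Hours to move size_gb over a link of link_mbps (ignoring protocol overhead)."""
    return (size_gb * 8_000) / link_mbps / 3_600   # GB -> megabits, seconds -> hours

print(f"Raw data to cloud: {transfer_hours(DATA_PER_DAY_GB, WAN_MBPS):.1f} h/day")
print(f"Model to edge (one-time): {transfer_hours(MODEL_SIZE_GB, WAN_MBPS) * 60:.1f} min")
```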

Application entanglement presents another barrier keeping organizations from migrating some applications and data to cloud. Some legacy applications have been architected in a way that doesn’t allow pieces of functionality and data to be migrated to cloud easily; in other cases, a wholesale migration is out of the question for reasons of cost and complexity. There are also workloads that don’t make economic sense to refactor from a fixed environment to a variable-cost architecture, and others with regulatory or industry obligations tied to data sovereignty or privacy that rule out a wholesale embrace of public cloud.

The HPE GreenLake Advantage

Given the importance of the edge in the data modernization strategy, HPE seeks to remove any uncertainty regarding where to deploy applications and data. The HPE GreenLake edge-to-cloud platform brings the desired cloud-based operating model and platform experience, but with consistent and secure data governance practices, starting at the edge and running all the way to public cloud. This can be applied across any industry — such as retail, banking, manufacturing, or healthcare — and regardless of where the workload resides.

HPE GreenLake with the managed service offering is inclusive of all public clouds, ensuring a consistent experience whether data and applications are deployed on AWS, Microsoft Azure, or Google Cloud Platform as part of a hybrid mix that encompasses cloud in concert with on-premises infrastructure in an internal data center or colocation facility.

“IT teams want a unified solution they can use to manage all technology needs, from infrastructure as a service (IaaS) to platform as a service (PaaS) and container as a service (CaaS), that drive automation and orchestration that are not snowflakes,” says Lavista. “HPE GreenLake provides that standard operating model from edge to core and all the way through to the public cloud.”

By aligning with HPE GreenLake solutions, IT organizations also free themselves of the day-to-day operations of running infrastructure to focus on delivering core capabilities for business users as well as DevOps teams. The HPE GreenLake team works with organizations to assess which workloads are a better fit for cloud or edge by evaluating a variety of factors, including technical complexity, system dependencies, service-level agreement (SLA) requirements, and latency demands. For example, a quality control system on a manufacturing line might be better suited to an edge solution, due to the need to analyze data in volume and in near real time. But an AI application that could benefit from a facial recognition service might be better served by public cloud, given the broad ecosystem of third-party services that eliminate the need to reinvent the wheel for every innovation.
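
The factors described above can be thought of as a simple scoring rubric. The sketch below is a hypothetical illustration of such an assessment, not HPE GreenLake’s actual methodology; the factor names, weights, and thresholds are assumptions.

```python
# Hypothetical workload-placement rubric: higher score favors edge/on-prem,
# lower score favors public cloud. Factors, weights, and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_sensitivity: int      # 1 (relaxed) .. 5 (hard real time)
    data_volume_at_source: int    # 1 (small) .. 5 (massive, hard to move)
    regulatory_constraints: int   # 1 (none) .. 5 (strict data residency)
    dependency_entanglement: int  # 1 (standalone) .. 5 (deeply coupled to on-prem systems)

WEIGHTS = {
    "latency_sensitivity": 0.35,
    "data_volume_at_source": 0.30,
    "regulatory_constraints": 0.20,
    "dependency_entanglement": 0.15,
}

def placement(w: Workload) -> str:
    score = sum(getattr(w, factor) * weight for factor, weight in WEIGHTS.items())
    if score >= 3.5:
        return "edge / on-premises"
    if score >= 2.5:
        return "hybrid (split by component)"
    return "public cloud"

qc = Workload("line quality control", 5, 5, 2, 3)
facial = Workload("facial recognition service", 2, 2, 3, 1)
print(qc.name, "->", placement(qc))          # scores high: lands at the edge
print(facial.name, "->", placement(facial))  # scores low: lands in public cloud
```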

To ensure top performance, Lavista counsels companies to fully understand their core business objectives and to be pragmatic about their cloud migration goals so they avoid the trap of moving data and workloads simply because it’s the latest technology trend. “Understand your options based on where you are coming from,” he says. “If what you are looking for is to optimize the IT operating model, you can still get that without moving applications and data.”

For more information, visit https://www.hpe.com/us/en/solutions/edge.html

Hybrid Cloud

Artificial Intelligence (AI) is fast becoming the cornerstone of business analytics, allowing companies to generate value from the ever-growing datasets generated by today’s business processes. At the same time, the sheer volume and velocity of data demand high-performance computing (HPC) to provide the power needed to effectively train AIs, do AI inferencing, and run analytics. According to Hyperion Research, HPC-enabled AI, growing at more than 30 percent, is projected to be a $3.5 billion market in 2024.

This rising confluence of HPC and AI is being driven by businesses and organisations honing their competitive edge in the global marketplace, as digital transformation accelerates and is taken to the next level through IT transformation.

“We’re seeing HPC-enabled AI on the rise because it extracts and refines data quicker and more accurately. This naturally leads to faster and richer insights, in turn enabling better business outcomes and facilitates new breakthroughs and better differentiation in products and services while driving greater cost savings,” said Mike Yang, President at Quanta Cloud Technology, better known as QCT.

While HPC and AI are expected to benefit most industries, the fields of healthcare, manufacturing, higher education and research (HER), and finance stand to gain perhaps the most, due to the high-intensity nature of the workloads involved.

Applications of HPC-enabled AI in next-generation sequencing, medical imaging and molecular dynamics have the potential to speed drug discoveries and improve patient care procedures and outcomes. In manufacturing, finite element analysis, computer vision, electronic design automation and computer-aided design are facilitated by AI and HPC to speed product development, while analysis of Internet-of-Things (IoT) data can streamline supply chains, enhance predictive maintenance regimes and automate manufacturing processes. HER uses these technologies to explore fields such as dynamic structure analysis, weather prediction, fluid dynamics and quantum chemistry in an ongoing quest to solve global problems like climate change and to achieve breakthroughs and deeper exploration in cosmology and astrophysics.

Optimising HPC and AI Workloads

The AI and machine learning (ML) algorithms underlying these business and scientific advances have become significantly more complex, delivering faster and more accurate results but demanding significantly more computational power. The key challenge now facing organisations is building HPC, AI, HPC-enabled AI and HPC-AI converged workloads while shortening project implementation time. Ultimately, this will allow researchers, engineers and scientists to concentrate fully on their research.

IT teams also need to actively manage their HPC and AI infrastructure, leveraging the right profiling tools to optimise HPC and AI workloads. Optimised HPC/AI infrastructure should deliver the right resources at the right time so researchers and developers can accelerate their computational work.

In addition, understanding workload demands and optimising performance helps IT avoid extra work and labour spent on fine-tuning, significantly reducing the total cost of ownership (TCO). To optimise HPC and AI workloads effectively and quickly, organisations can consider the following steps (a simple profiling sketch follows the list):

1. Identify the key workload applications and data used by the customer, as well as the customer’s expectations and pain points.
2. Design the infrastructure and build the cluster, ensuring that the hardware and software stack can support the workloads.
3. Keep adjusting and fine-tuning on an ongoing basis.
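
As a minimal example of the profiling these steps depend on (illustrative only, and not the Intel Granulate gProfiler discussed below), the following Python sketch uses the standard library’s cProfile to surface the hottest functions in a stand-in workload; the workload functions are placeholders.

```python
# Minimal profiling sketch using Python's built-in cProfile.
# The "workload" here is a placeholder for a real HPC/AI job step.
import cProfile
import io
import pstats

def preprocess(n: int) -> list[float]:
    return [i ** 0.5 for i in range(n)]

def train_step(features: list[float]) -> float:
    return sum(f * 0.001 for f in features)

def workload():
    feats = preprocess(2_000_000)
    for _ in range(10):
        train_step(feats)

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())   # the top functions by cumulative time guide tuning priorities
```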

QCT leverages Intel’s profiling tool, Intel Granulate gProfiler, to reveal the behaviour of the workload before tapping its own deep expertise to analyse that behaviour and design a fine-tuning plan to help with optimisation. Through this process, organisations can ensure rapid deployment, simplified management and optimised integrations, all at cost savings.

AI continues to offer transformational solutions for businesses and organisations, but the growing complexity of datasets and algorithms is driving greater demand on HPC to enable these power-intensive workloads. Workload optimisation effectively enhances the process and, at the heart of it, enables professionals in their fields to focus on their research to drive industry breakthroughs and accelerate innovation.

To discover how workload profiling can transform your business or organisation, click here.

Artificial Intelligence, Digital Transformation, High-Performance Computing
