When I was a CIO, I always dreaded the annual budget season because I knew, somewhere during the process, the CEO, my boss, would ask, “What are we getting for this constantly growing IT department?”

It’s a question that keeps most CIOs up at night when they’re asked to defend IT investments, and it’s one all CIOs should expect to answer, given that IT expenditures can range from 1% to more than 50% of a company’s total revenue.

For most IT departments, this is a very difficult question to answer because the systems we develop are not used by IT itself but by other departments to increase their sales, reduce their expenses, or be more competitive in the marketplace.

As such, an IT leader’s usual response to this question is a general statement about how IT has implemented projects across the corporation that have achieved corporate strategic objectives. We seldom have any empirical data to back up our claims. So what’s a CIO to do?

IT as a business

There are two ways to address this issue. The first option is to transition from a non-charge-out environment, where IT absorbs all development costs, to a charge-out environment, where all IT costs are assigned to the user departments based on their use of the resources. In this case, IT operates as a zero-cost department and there are no annual budget issues. All IT has to do is tell the user departments how much to budget for IT.
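
To make the mechanics concrete, here is a minimal sketch of a proportional chargeback calculation; the departments, usage units, and cost figures are hypothetical and purely illustrative:

```python
# Minimal chargeback sketch: allocate total IT cost in proportion to each
# department's share of measured resource usage. All figures are hypothetical.

total_it_cost = 1_200_000  # annual IT cost to be charged out

usage_units = {            # e.g., normalized compute/storage/support units
    "Sales": 450,
    "Manufacturing": 300,
    "Finance": 150,
    "HR": 100,
}

total_units = sum(usage_units.values())

chargeback = {
    dept: round(total_it_cost * units / total_units, 2)
    for dept, units in usage_units.items()
}

for dept, amount in chargeback.items():
    print(f"{dept}: ${amount:,.2f}")
```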

But there are great downsides to this approach that far outweigh its ease of use for IT. First, this process tends to place the automation agenda into the hands of individual departments or profit centers rather than looking at IT and digitalization as an overall company necessity. An example of this would be the development of artificial intelligence systems. The ramifications of this sort of system would affect all departments.

Second, with a charge-out system, IT sends each department a monthly bill charging their P&L for a range of services, including development costs, IT infrastructure usage costs, and the dreaded overhead costs. That bill poses a real challenge to IT’s ability to maintain cordial relationships, especially if it comes in higher than the budget estimates.

Perhaps worse, shifting to a charge-out, or chargeback, approach treats IT like a business — a system that might sound good on the surface but means that the user department may begin to look to outside IT organizations to develop shadow IT systems that are sold as a cheaper alternative. These systems can only make internal system maintenance more complicated and drive a wedge through the company and its automation agenda.

The better way

The second and better way to approach the problem of IT value is to measure the effectiveness of the IT operation. Why should IT be the only department that is immune from corporate oversight? The advertising department is routinely measured on whether it is increasing corporate sales. HR is constantly being questioned on how its salary system compares to the industry. Manufacturing is always being challenged on its costs and if there are alternative methods and locations. Marketing must assure top management that its brand positioning is the best for the company.

The only way to measure IT is to enforce a requirement that all large-scale new or modified system projects are analyzed, after completion, to verify that the objectives were met and the projected ROI was realized.
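
The arithmetic behind such an audit is simple even if gathering the inputs is not. Here is a minimal sketch, using hypothetical figures, of comparing the ROI promised in the business case with the ROI actually realized:

```python
# Post-implementation ROI check: compare the ROI promised in the business
# case with the ROI measured after go-live. All figures are hypothetical.

def roi(annual_benefit: float, total_cost: float) -> float:
    """Simple ROI = (benefit - cost) / cost."""
    return (annual_benefit - total_cost) / total_cost

promised = roi(annual_benefit=900_000, total_cost=600_000)  # from the business case
realized = roi(annual_benefit=720_000, total_cost=680_000)  # measured a year after go-live

print(f"Promised ROI: {promised:.0%}")
print(f"Realized ROI: {realized:.0%}")
print("Objectives met" if realized >= promised else "Shortfall to explain")
```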

In my book, The 9 1/2 Secrets of a Great IT Organization, the 1/2 secret is the post-implementation audit. I called it a half secret because few companies do it. It should be treated as a full secret, however, because it makes for a much more effective and successful department. But it is not generally done, for a number of reasons.

First, conducting a post-implementation audit requires a significant amount of detailed analysis that can span several years. Just gathering the data can be time-consuming, especially given that many of the project personnel may have changed jobs or even companies since the project was completed.

Next, it cannot be done until at least a year after the system has gone live since no system is fully functional on day one. Sometimes it is hard to convince both IT and the user department that it is worth the time to analyze a completed system because there are more important projects to complete.

Moreover, the user department is often not interested in proving ROI for several reasons. Perhaps they inflated the initial ROI to get the attention of the IT steering committee. A close analysis may discover this practice. Additionally, the ROI may have contained significant headcount reductions that were used to generate a better return. The department may desire to forget these moves once the project is completed.

Of course, it’s not always about the user department. IT may also not want to see the audit done because it may have underestimated the cost or completion date on the original estimate.

The recommended way to complete this audit is to remove the responsibility from both the user department and IT. An independent organization, preferably under the auspices of the financial arm of the company, should conduct the post-implementation audit. This group should have been involved in developing the ROI for the project in the first place and is in the best position to assure objectivity in the result.

If done this way, the user department will be held to its ROI commitment, IT will be held to its performance objectives, and the CIO will be able to answer the question posed by the CEO about IT investments by saying, for example, “We implemented 17 projects this year which increased sales by 35% and reduced expenses by 14%.” 

Wouldn’t that be a great conversation to have, not only with the CEO but with the entire company?


OpenAI has landed billions of dollars more funding from Microsoft to continue its development of generative artificial intelligence tools such as Dall-E 2 and ChatGPT, a move that is likely to unlock similar investments from competitors (Google in particular) and open the way for new or improved software tools for enterprises large and small.

Microsoft stands to benefit from its investment in three ways. As a licensee of OpenAI’s software it will have access to new AI-based capabilities it can resell or build into its products. As OpenAI’s exclusive cloud provider it will see additional revenue for its Azure services, as one of OpenAI’s biggest costs is providing the computing capacity to train and run its AI models. And as an investor it can expect some return on its capital, although this will be limited by OpenAI’s status as a capped-profit company governed by a nonprofit.

The deal, announced by OpenAI and Microsoft on Jan. 23, 2023, is likely to shake up the market for AI-based enterprise services, said Rajesh Kandaswamy, distinguished analyst and fellow at Gartner: “It provides additional impetus for Google to relook at its roadmap. It’s the same for other competitors like AWS,” he said.

Ritu Jyoti, IDC’s global AI research lead, sees more than just AI bragging rights at stake here. “There is a big battle brewing between the three hyperscalers — Amazon, Google, and Microsoft — and it’s not just about AI. It’s going to drive who’s going to be supreme in the cloud because this requires tons and tons of compute, and they’re all fighting with each other. It’s going to get ugly,” she said.

Employees are already experiencing some of that ugly: Since the start of the year, Microsoft, Amazon, and Google parent Alphabet have all announced massive layoffs as they seek to refocus on growth markets and invest in AI.

Billion-dollar brain

Rumors that Microsoft could invest as much as $10 billion to grow its AI business broke in early January. The company has supported OpenAI’s quest to build an artificial general intelligence since its early days, beginning with its hosting of OpenAI experiments on specialized Azure servers in 2016. In July 2019 it became OpenAI’s exclusive cloud provider and invested $1 billion in the company. In 2020, Microsoft became the first to license OpenAI’s Generative Pre-trained Transformer (GPT) AI software for inclusion in its own products and services. Up to that point, OpenAI had only allowed enterprises and academics access to the software through a limited API.

Enterprises already have access to some of that technology via Microsoft’s Azure OpenAI service, which offers pay-as-you-go API access to OpenAI tools, including the text generator GPT-3, the image generator Dall-E 2, and Codex, a specialized version of GPT that can translate between natural language and a programming language. Microsoft is also offering Codex as a service in the form of GitHub Copilot, an AI-based pair-programming tool that can generate code fragments from natural language prompts. And it will soon offer Microsoft 365 subscribers a new application combining features of PowerPoint with OpenAI’s Dall-E 2 image generator. That app, Microsoft Designer, is currently in closed beta. And, of course, enterprises can check out ChatGPT, the interactive text generator that has been making waves since its release in November 2022.
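
For readers curious what that pay-as-you-go access looks like in code, here is a minimal sketch of calling a GPT-3 deployment through the Azure OpenAI service with the openai Python library; the endpoint, deployment name, and API version shown are placeholders to be replaced with your own values:

```python
# Minimal sketch: text completion via Azure OpenAI with the openai Python SDK.
# Endpoint, deployment name, and API version are placeholders.
import os
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"
openai.api_version = "2022-12-01"
openai.api_key = os.environ["AZURE_OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="my-gpt3-deployment",  # name of your deployed model
    prompt="Summarize the benefits of edge computing in two sentences.",
    max_tokens=80,
    temperature=0.2,
)

print(response.choices[0].text.strip())
```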

GPT-3.5, the OpenAI model on which ChatGPT is based, is an example of a transformer, a deep learning technique developed by Google in 2017 to tackle problems in natural language processing. Others include BERT and PaLM from Google, and MT-NLG, which was co-developed by Microsoft and Nvidia.

Transformers improve on the previous generation of deep learning technology, recurrent neural networks, in their ability to process entire texts simultaneously rather than treating them sequentially, one word after another. This allows them to infer connections between words several sentences apart, something that’s especially useful when interacting with humans, who use pronouns to save time. ChatGPT is one of the first such models to be made available as an interactive tool rather than only through an API.
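
To make the contrast concrete, here is a toy sketch of the scaled dot-product attention at the heart of a transformer: every token is compared with every other token in one matrix operation, rather than step by step as in a recurrent network (dimensions and weights are illustrative only):

```python
# Toy scaled dot-product attention: all tokens attend to all other tokens at
# once, which is what lets a transformer link a pronoun to a noun several
# sentences away. Dimensions and weights are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 6, 8            # 6 tokens, 8-dimensional embeddings
x = rng.normal(size=(seq_len, d_model))

w_q = rng.normal(size=(d_model, d_model))
w_k = rng.normal(size=(d_model, d_model))
w_v = rng.normal(size=(d_model, d_model))

q, k, v = x @ w_q, x @ w_k, x @ w_v

scores = q @ k.T / np.sqrt(d_model)                                     # every token vs. every token
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)   # row-wise softmax
output = weights @ v                                                    # context-aware representations

print(weights.shape)  # (6, 6): one attention row per token
print(output.shape)   # (6, 8)
```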

Robots in disguise

The text ChatGPT generates reads like a rather pedantic and not always well-informed human, and part of the concern about it is that it could be used to fill the internet with human-sounding but misleading or meaningless text. The risk there — aside from making the internet useless to humans — is that it will pollute the very resource needed to train better AIs.

Conversing with ChatGPT is entertaining, but the beta version available today is not terribly useful for enterprise purposes. That’s because it has no access to new information or services on the Internet — the dataset on which it was trained was frozen in September 2021 — and although it can answer questions about the content of that dataset, it cannot reference its sources, raising doubts about the accuracy of its statements. To its credit, it regularly and repeatedly reminds users of these limitations.

An enterprise version of ChatGPT, though, refined to cope with an industry-specific vocabulary and with access to up-to-date information from the ERP on product availability, say, or the latest updates to the company’s code repository, would be quite something.
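
One plausible way to build such a system, sketched below, is to fetch fresh ERP data at question time and fold it into the prompt sent to a hosted model. The lookup function and deployment name are hypothetical; this is an illustration, not a description of any OpenAI or Microsoft feature:

```python
# Hypothetical sketch: augment a prompt with live ERP data before calling a
# hosted completion model. `fetch_product_availability` stands in for a real
# ERP query; the endpoint and deployment name are placeholders.
import os
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"
openai.api_version = "2022-12-01"
openai.api_key = os.environ["AZURE_OPENAI_API_KEY"]

def fetch_product_availability(sku: str) -> dict:
    """Placeholder for an ERP lookup; a real system would call the ERP's API."""
    return {"sku": sku, "on_hand": 42, "next_restock": "2023-02-15"}

def answer_with_context(question: str, sku: str) -> str:
    erp = fetch_product_availability(sku)
    prompt = (
        f"ERP data: {erp}\n"
        "Using only the ERP data above, answer the question.\n"
        f"Question: {question}\nAnswer:"
    )
    resp = openai.Completion.create(
        engine="my-gpt3-deployment", prompt=prompt, max_tokens=60, temperature=0
    )
    return resp.choices[0].text.strip()

print(answer_with_context("Can we promise 30 units of SKU-123 this week?", "SKU-123"))
```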

In its own words

ChatGPT itself, prompted with the question, “What uses would a CIO have for a system like ChatGPT?” suggested it might be used for automating customer service and support; analyzing data to generate reports; and generating suggestions and recommendations based on data analysis to assist with decision-making.

Prompted to describe its limitations, ChatGPT said, “Its performance can be affected by the quality and quantity of the training data. Additionally, it may not always be able to understand or respond to certain inputs correctly.” Nicely illustrating its tendency to restate the same point in multiple ways, it went on: “It is also important to monitor the performance of the model and adjust the training data as needed to improve its accuracy and relevance.”

As for Microsoft’s plans for OpenAI’s generative AI tools, IDC’s Jyoti said she expects some of the most visible changes will come on the desktop. “Microsoft will completely transform its whole suite of applications: Word, Outlook, and PowerPoint,” she said, noting that the integration of OpenAI could introduce or enhance features such as image captioning, text autocompletion, and next-action recommendations.

Gartner’s Kandaswamy said that he expects Microsoft, in addition to updating its productivity suite, to add new OpenAI-based capabilities to Dynamics and even properties such as LinkedIn or GitHub.

It’s important for CIOs to adopt these tools for the incremental value that they bring, he said, but warned: “Be very careful not to get blindsided by the disruption AI can produce over the longer term.”

Chief AI officers

Jyoti pinned some of the responsibility for AI’s effects on enterprises themselves. “People always tend to blame the technology suppliers, but the enterprises also have a responsibility,” she said. “Businesses, right from the C-suite, need to put together their AI strategy and put the right guardrails in place.”

For now, AI tools like ChatGPT or Dall-E 2 are best used to augment human creativity or decision-making, not replace it. “Put a human in the loop,” she advised.

It won’t be the CIO’s decision alone because the questions around which tools should be used, and how, are ethical as well as technical. Ultimately, though, the job will come back to the IT department. “They cannot ignore it: They have to pilot it,” she said.

Build, don’t buy

With few generative AI tools available to buy off the shelf for now, there will be a rebalancing of the build-vs.-buy equation, with forward-thinking CIOs driven to build in the short term, Jyoti said. Even teams with limited developer resources could achieve that sooner with coding help from tools like GitHub Copilot or OpenAI’s Codex.

Later, as ISVs move in and build domain-specific solutions using generative AI tools provided by OpenAI, Microsoft, and the other hyperscalers, then the pendulum may swing back to buy for enterprises, she said.

That initial swing to customization (rather than configuration) could spell big trouble for Oracle, SAP, and other big ERP developers, which these days rely on making enterprises conform to the best practices they embody in their SaaS applications.

“They have hardened the processes over so many years, but today AI has become data-driven,” Jyoti said: While the ERP vendors have been embedding AI here and there, “They’re not as dynamic […] and this will require a fundamental shift in how things can work.”


Imagine an airport that uses computer vision to track errant luggage in real time, or a commercial kitchen able to detect refrigeration conditions and prevent spoilage. Imagine an amusement park outfitting its rides with sensors that can talk directly to operations for upgraded safety and better guest experiences. Imagine a factory or a chain of retailers reducing energy and cutting equipment downtime. 

These scenarios are not imaginary. They are playing out across industries with the help of edge computing, Internet of Things (IoT) devices and an innovative approach known as Business Outcomes-as-a-Service.[1]

In each case, the company has volumes of streaming data and needs a way to quickly analyze it for outcomes such as greater asset availability, improved site safety and enhanced sustainability. In each case, they are taking strategic advantage of data generated at the edge, using artificial intelligence and cloud architecture. And they’re achieving significant wins.[2]

Here, we explore the demands and opportunities of edge computing and how an approach to Business Outcomes-as-a-Service can provide end-to-end analytics with lowered operational risk.

From the Edge to the Cloud and Back

Computing at the edge and the far edge allows data to be processed near the point where it’s generated. The speed and volume of data flowing, often in real time, from sensors and other IoT devices come with the potential for enormous gains in business and operational intelligence. But this advancement also adds complexity.
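
A minimal sketch of that edge-side pattern, assuming only summaries and anomalies need to travel to the cloud (sensor values and thresholds are illustrative):

```python
# Edge-side sketch: process readings where they are generated and forward only
# a compact summary plus anomalies to the cloud. Values are illustrative.
from statistics import mean

def process_window(readings: list[float], high_limit: float = 8.0) -> dict:
    """Summarize one window of sensor readings instead of shipping raw data."""
    return {
        "count": len(readings),
        "avg": round(mean(readings), 2),
        "max": max(readings),
        "anomalies": [r for r in readings if r > high_limit],
    }

window = [4.2, 4.4, 4.1, 9.3, 4.0, 4.5]  # e.g., refrigeration temperatures (C)
summary = process_window(window)

if summary["anomalies"]:
    print("Send alert + summary to cloud:", summary)
else:
    print("Send summary only:", summary)
```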

Most organizations still need methods for analyzing data at the point of creation so it can be acted upon immediately. Some have managed to derive meaningful, rapid, and repeatable business outcomes from their IoT data streams and analytics using Business Outcomes-as-a-Service (Atos BOaaS), developed by Atos, an international leader in digital transformation. Already, Atos customers have reported positive experiences.

“For a retail customer, we’re talking about 66,000 hours saved in maintenance and compliance for maintaining the edge environment, which translates into about 480 metric tons of CO2 saved every year — thanks to automation and end-to-end monitoring,” said Arnaud Langer, Global Edge and IoT Senior Product Director at Atos.

Four Key Benefits of an End-to-End Analytics Service

As many tech and industry leaders are noting,[3] businesses are now prioritizing value and speed to deployment. Outcome-based solutions delivered in an as-a-service model allow companies to realize this rapid time-to-value. 

Those using a turnkey, scalable BOaaS platform are quickly able to manage an entire AI and IoT ecosystem from one dashboard, across the cloud, edge and far edge.[4] The solution allows them to generate value from real-time data using a platform for ingesting, storing and analyzing continuously streaming data. It’s bringing advanced analytics and AI capabilities where they’re needed most – the edge. Already deployed in commercial kitchens and retail chains, on factory floors and at amusement parks, the solution has shown the following benefits.

Value: Increased uptime of critical assets with predictive maintenance
Sustainability: Reduced onsite support and carbon footprint with touchless operations
Safety: AI-enhanced computer vision for safer, efficient operations
Cost-effectiveness: Full operational expense (OpEx) as-a-service pricing and operational model

For a manufacturer or retailer, for instance, an equipment or IT interruption would typically impact employees, customers, and revenue because of the traditionally painful restoration process. The BOaaS monitoring system reduces downtime by detecting signs of trouble before the equipment fails, so remediation can happen while it is still running and before any downtime is experienced; often the problem can be resolved remotely. If an immediate remedy is not possible, the system alerts staff, then procures and ships a replacement part to arrive on site. Employees can securely connect to the platform and deploy the applications they need via the cloud, minimizing the impact on business operations.
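
A simplified sketch of that kind of early-warning check appears below; the metric, limit, and trend rule are assumptions for illustration, not Atos’s actual model:

```python
# Simplified predictive-maintenance check: flag equipment whose readings are
# trending toward a failure limit so it can be serviced while still running.
# Metric, limit, and trend rule are illustrative assumptions.

FAILURE_LIMIT = 12.0   # e.g., vibration level at which the unit trips
WARNING_MARGIN = 0.8   # warn at 80% of the limit

def needs_maintenance(history: list[float]) -> bool:
    latest = history[-1]
    trending_up = len(history) >= 3 and history[-3] < history[-2] < history[-1]
    return latest >= FAILURE_LIMIT * WARNING_MARGIN and trending_up

readings = [7.1, 7.6, 8.4, 9.9, 10.2]
if needs_maintenance(readings):
    print("Schedule remediation before failure; notify staff / order part")
else:
    print("Asset healthy")
```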

Across industries, data streams often outpace an organization’s ability to capture and analyze the information. By tapping into hundreds of previously unused endpoints and millions of underutilized data points, the Atos system enables real-time innovations such as AI-based predictive maintenance and computer vision that monitor all hardware, lowering support costs, decreasing IT complexity, and driving decarbonization.

The Technology Behind Business Outcomes

It was a tall order for Atos: Harness the power of data by bringing together hardware, software, and AI in one OpEx solution. 

To most effectively develop BOaaS as a touchless, end-to-end managed service, Atos leveraged the compute and storage power of Dell Technologies. Atos chose Dell Technologies’ Streaming Data Platform[5] for its ability to deliver reliable, fast and secure data pipelines from edge to cloud.

“Using Dell Technologies solutions, we’ve already achieved a 10% reduction in downtime. This can save up to millions of dollars annually,” Langer said. “In the future, we expect to triple that to 30% lower downtime, saving untold millions per customer, per location.”

Watch this video to learn more about how Atos and Dell are enabling game-changing innovation at the edge. 

[1] https://atos.net/en/2022/press-release_2022_05_04/atos-launches-innovative-edge-to-cloud-5g-and-ai-enabled-solution

[2] https://atos.net/en/portfolio/accelerate-decisions-with-data-for-better-business-outcomes

[3] https://www.enterprisetimes.co.uk/2022/12/30/business-and-technology-trends-for-professional-services-in-2023/

[4] https://www.engineering.com/story/bring-ai-and-automation-to-the-far-edge-of-the-factory-floor

[5] https://www.dell.com/en-us/dt/storage/streaming-data-platform.htm


Just hear those phone lines jingling, ring ting tingle-ing too. 

After the last slice of Thanksgiving pie has been served, millions of people will rush out the door to score the best deals of the year. Then they’ll call: about shipping delays, return policies, order details, discount codes, you name it – on top of all the other inquiries your contact center normally handles.

Unless you put more tushes in cushions, you’ll only have so many people manning your phones. Here’s how to best manage the call spikes coming your way.

AI and automation for the win, right? Yes and no. Here’s my two cents…

Just because you can do something doesn’t mean you should.

People tell me that when they call a customer service number and hear an automated assistant, they go on the defensive. I can relate. Am I going to have to fight this thing to get the outcome I want, and how long will it take? If you’re not confident enough that your customers would choose your automated assistant without being forced to use it, don’t invest in it.

There are plenty of other ways you can use AI and automation to manage demand (see below). If you’re set on an automated assistant, just know you can’t force customers to do things your way. Let them try your assistant if they choose, make sure it’s truly intelligent and intuitive to their needs, and always provide an out.

Use AI to optimize the performance of the staff you have. 

You’ll be like a kid on Christmas morning when you see the magic of AI for task automation. Your agents will be freed from time-consuming, queue-clogging tasks nearly overnight. Consider customer authentication: you must verify the identity of each person who contacts your organization, right? This could be through knowledge-based authentication (ex: “What’s the city you were born in?”) or by verifying personally identifiable information like a customer’s address or SSN. Sometimes it’s all the above. 

What if you could use a customer’s hold time to handle this via an automated assistant so they can get right down to business when they’re connected to an agent? What if you used AI and automation to offer an even more effective form of customer authentication like a 3D photo scan? Calls would be processed faster and the agents you do have would work more efficiently. This is completely possible and not nearly as difficult as it seems. Consider how else you can use AI and automation internally to address the challenge of demand exceeding staff. 
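
As a rough illustration, here is a hypothetical sketch of knowledge-based authentication handled by an automated assistant while the caller waits on hold; the fields and records are invented for the example:

```python
# Hypothetical sketch: knowledge-based authentication handled by an automated
# assistant during hold time, so the caller is already verified when an agent
# picks up. Fields and records are illustrative.

CUSTOMER_RECORDS = {
    "+15551234567": {"birth_city": "denver", "zip": "80202"},
}

def verify_caller(phone: str, answers: dict) -> bool:
    record = CUSTOMER_RECORDS.get(phone)
    if record is None:
        return False
    return all(answers.get(k, "").strip().lower() == v for k, v in record.items())

# Collected by the automated assistant while the caller waits in the queue:
ok = verify_caller("+15551234567", {"birth_city": "Denver", "zip": "80202"})
print("Authenticated, route to agent" if ok else "Escalate to manual verification")
```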

There’s no place like (work from) home for the holidays

What’s great about remote work for contact center managers is the ability to source agents from anywhere in a tight labor market. If you must hire to scale for seasonal demand, you can look far and wide based on needs around skills and labor costs. What’s not so great about remote work is the lack of visibility for quality assurance and team monitoring. 

My opinion: adopt the remote work strategy of 2020 with technology that meets current Quality Assurance (QA) needs.  

It’s 2022: why can’t you remotely monitor calls in real-time as if you were walking the contact center floor? What’s stopping you from ensuring high Quality of Service (QoS) regardless of where your agents work or what their setup looks like?

Again, this is the beauty of AI:

AI-powered noise removal ensures crystal clear audio regardless of what’s happening behind the scenes. Agents can’t predict when their dog will bark or when their neighbor will be doing yard work. This is a game-changing AI feature that filters out distracting background noises – even an agent’s keyboard strokes – to ensure QoS from anywhere. 

Closed captioning automatically shows on an agent’s screen what a customer is saying in real-time to keep quality high and call processing fast. Even with background noise removed, you could easily miss a word or a key piece of information that you’d hate to ask the customer to repeat. 

Real-time translation is yet another way AI helps manage demand without needing to hire. Imagine all your agents essentially becoming bi-lingual. It’s a huge value-add! 

AI speech analytics monitors conversations in real-time and alerts managers when in-the-moment intervention may be necessary. Will this help improve CSAT and keep calls flowing? You bet.   

Follow these tips and from now on your troubles will be miles away. If you want to learn more about AI for the contact center, give me a shout or contact an Avaya expert here. Happy managing! 


This article was co-authored by Duke Dyksterhouse, an Associate at Metis Strategy

Data & Analytics is delivering on its promise. Every day, it helps countless organizations do everything from measuring their ESG impact to creating new streams of revenue, and consequently, companies without strong data cultures or concrete plans to build one are feeling the pressure. Some are our clients—and more of them are asking for our help with their data strategy. 

Often their ask is a thinly veiled admission of overwhelm. They struggle to even articulate their objective, or don’t know where to start. The variables seem endless: data—security, science, storage, mining, management, definition, deletion, integration, accessibility, architecture, collection, governance, and the ever-elusive, data culture. But for all that technical complexity, their overwhelm is more often a symptom of mindset. They think that when carving out their first formal data strategy, they must have all the answers up front—that all the relevant people, processes, and technologies must be lined up neatly, like dominos. 

We discourage that thinking. Mobilizing data is more like getting a flywheel spinning: it takes tremendous effort to get the wheel moving, but its momentum is largely self-sustaining; and thus, as you incrementally apply force, the wheel spins faster and faster, until fingertip touches are enough to sustain a blistering velocity. As the wheel builds to that speed, the people, processes, and technologies needed to support it make themselves apparent. 

In this article, we offer four things you can do to get your flywheel spinning faster, and examine each through the story of Alina Parast, Chief Information Officer of ChampionX, and how she is helping transform the company (which delivers solutions to the upstream and midstream oil and gas industry) into a data-driven powerhouse. 

Step 1: Choose the right problem 

When ChampionX went public, its cross-functional team (which included supply chain, digital/IT, and commercial experts) avoided or at least tempered any grandiose, buzzword-filled declarations about “transformations” and “data-driven cultures” in favor of real-world problem solving. But also, it didn’t choose just any problem: it chose the right problem—which is the first and most crucial step to getting your flywheel spinning. 

At the time, one of ChampionX’s costliest activities in its Chemical Technologies business was monitoring and maintaining customer sites, many of which were in remote parts of the country. “It was more than just labor and fuel,” Alina explained. “We had to spend a lot on maintaining vehicles capable of navigating the routes to those sites, and on figuring out what, exactly, those routes were. There were, and still are, no Google maps for where our field technicians need to go.” Those costs were the price of “keeping customers’ tanks full, not dry”– one of ChampionX’s guiding principles and the core of its value proposition to improve the lives of its customers. “And so, we wondered, ‘how can we serve that end?’” 

  The problem the team chose to solve—lowering the cost of site trips—might appear mundane, but it had all the right ingredients to get the flywheel moving. First, the problem was urgent, as it was among ChampionX’s most significant expenses. Second, the problem was simple (even if its solution was not). It was easy to explain: It costs us a lot to trek to these sites. How can we lower that cost? Third, it was tangible. It concerned real world objects—trucks, wells, equipment, and other things people could see, hear, or feel. Equally important, the team could point to the specific financial line items their efforts would move. Finally, the problem was shared by the enterprise at large. As part of the cross-functional leadership team, Alina didn’t limit herself to solving what were ostensibly CIO-related problems. She understood: if it was a problem she and her organization could help solve, then it was a CIO-related problem. 

IT executives talk often of people, processes, and technology as the cornerstones of IT strategy, but they sometimes forget to heed the nucleus of all strategy: solving real business problems. When you’re getting started, set aside your concerns about who you will hire, what tools you will use, and how your people will work together—those things will make themselves apparent in time. First get your leaders in a room. Forego the slides, the spreadsheets, and the roadmaps. Instead, ask, with all sincerity: What problem are we trying to solve? The answer will not come as easily as you expect, but the conversation will be invaluable. 

Step 2: Capture the right data 

Once you’ve identified a problem worthy of solving, the next step is to capture the data you need to solve it. If you’ve defined your problem well, you’ll know what that data is, which is key. Just as defining your problem narrows the variety of data you might capture, figuring out what data you need, where to get it, and how to manage it will narrow the vast catalog of people, processes, and technologies that could compose your data environment. 

Consider how this played out for Alina and ChampionX. Once the team knew the problem—site visits were costly—they quickly identified the logical solution: reduce the number of required site visits. Most visits were routine rather than in response to an active problem, so if ChampionX could glean what was happening at a site remotely, it could save considerable time, fuel, and money. That insight told the team what data it would need, which in turn allowed ChampionX’s IT and Commercial Digital teams to discern who and what they needed to capture it. They needed IoT sensors, for example, to extract relevant data from the sites. And they needed a place to store that data—they lacked infrastructure that could manage both the terabytes pouring off the sensors and the accompanying customer data (which resided within enterprise platforms such as ERP, transportation, and supply and demand planning). So they built a data lake.
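
As a small illustration of the plumbing involved, here is a minimal sketch of landing raw sensor records in a date-partitioned data lake path; a local folder stands in for cloud object storage, and the field names are hypothetical:

```python
# Minimal data-lake landing sketch: append raw IoT records as JSON lines under
# a date-partitioned path. A local folder stands in for cloud object storage;
# field names are hypothetical.
import json
from datetime import datetime, timezone
from pathlib import Path

LAKE_ROOT = Path("datalake/raw/site_sensors")

def land_record(record: dict) -> Path:
    ts = datetime.now(timezone.utc)
    partition = LAKE_ROOT / f"date={ts:%Y-%m-%d}"
    partition.mkdir(parents=True, exist_ok=True)
    path = partition / "events.jsonl"
    with path.open("a") as f:
        f.write(json.dumps({**record, "ingested_at": ts.isoformat()}) + "\n")
    return path

print(land_record({"site_id": "well-017", "tank_level_pct": 63.5}))
```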

Each of these initiatives—standing up secure cloud infrastructure, the design of the data lake, the sensors, the storage, the necessary training—was a major undertaking and is continuing to evolve. But the ChampionX team not only solved the site-visit problem; they provided a foundation for the company’s data environment and the data-driven initiatives that would follow. The data lake, for example, came to serve as a home for an ever-growing volume and variety of data from ChampionX’s other business units, which in turn led to some valuable insights (more on that in the next section). 

Knowing what data to capture provides the context you need to start selecting people, tools, and processes. Whichever you select, they will lend themselves to unpredictable ends, so it’s a taxing and fruitless exercise to try and map every way in which one component of your data environment will tie to all others— and from that, to choose a toolkit. Instead, figure out what you need for the problem—and the data—in front of you. Because you’ll be making selections in relation to something real and important in your organization, odds are, your selections will end up serving something else real and important. But in this case, you’ll be able to specify the names, costs, and sequencing of the things you need—details that will make your data strategy real and get your flywheel spinning faster. 

Step 3: Connect dots that once seemed disparate 

As you begin to capture data and your flywheel spins faster, new opportunities and data will reveal themselves. It wasn’t long after ChampionX’s team had installed the IoT sensors to remotely monitor customer sites that they realized the same data could be applied elsewhere. ChampionX now had a wealth of topographical data that no one else did, and it used this data to move both the top and the bottom lines. It moved the bottom line by optimizing the routes that ChampionX’s vehicles took to sites—solving the no-Google-Maps-where-we’re-going problem—and it moved the top by monetizing the data as a new revenue stream. 

The data lake, too, took on new purpose. Other business initiatives began parking their data in it, which prompted cross-functional teams to contemplate the various kinds of information swirling around together and how they might amount to more than the sum of their parts. One type was customer, order, and supply chain data, which ChampionX was regularly required to pull and merge with site data to perform impact analyses—reports of which customers were affected by a disruption in supply chain networks, and how. Merging those datasets used to take weeks, largely because the two had always lived in different ecosystems. Now, the same analyses took only hours.
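
A toy sketch of that kind of impact analysis, assuming the site telemetry and the customer/order data are now queryable from the same environment (table and column names are hypothetical):

```python
# Toy impact analysis: join site telemetry with customer/order data that now
# live in the same data lake. Column names and values are hypothetical.
import pandas as pd

site_data = pd.DataFrame({
    "site_id": ["A1", "B2", "C3"],
    "status": ["disrupted", "ok", "disrupted"],
})

orders = pd.DataFrame({
    "order_id": [101, 102, 103, 104],
    "customer": ["Acme", "Borealis", "Acme", "Cobalt"],
    "site_id": ["A1", "B2", "C3", "A1"],
})

impact = orders.merge(site_data, on="site_id")
affected = impact[impact["status"] == "disrupted"]

# Number of affected orders per customer
print(affected.groupby("customer")["order_id"].count())
```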

There are two takeaways here. The first is that it’s okay if your data flywheel spins slowly at the start—just get it going. Attracting even a few new opportunities or types of data will afford you the chance to draw connections between things that once seemed disparate. That pattern recognition will speed up your flywheel at an exponential rate and encourage an appropriately complex data environment to take shape around it. 

The second takeaway is similar to those of the first two steps: Choose wisely among the opportunities you could pursue. Not every insight that is interesting is useful; pursue the ones that are most valuable and real, the ones people can see, measure, and feel. These will overlap significantly with tedious and banal, recurring organizational activities (like pulling together impact reports). If you can solve these problems, you will prove the viability of data as a force for change in your organization, and a richer data culture will begin to emerge, pushing the flywheel to an intimidating pace. 

Step 4: Build outward from your original problem 

The story of ChampionX that we’ve examined is only one chapter of a much larger tale. As the company has collected more data and its people gleaned new insights, the problems that Alina and her business partners take on have grown in scope and complexity, and ChampionX’s flywheel has reached a speed capable of powering data-first problem-solving across the company’s entire supply chain. 

Yet, most of the problems in some way trace back to the simple question of how the company might spend less on site-checks. ChampionX’s team has not hopped willy-nilly from problems that concern the supply chain to those that concern Marketing, or HR, or Finance; the team is expanding outward in logical progression from their original problem. And because they have, their people, processes, and technologies, in terms of maturity, are only ever a stone’s throw from being able to tackle the next challenge—which is always built on the one before it. 

As your flywheel spins faster, you will have more problems to choose among. Prioritize those that are not only feasible and valuable but also thematically consistent with the problems you’ve already solved. That way, you’ll be able to leverage the momentum you’ve built. Your data environment will already include many of the people and tools you need for the job. You won’t feel as if you’re starting anew or have to argue a from-scratch case to your stakeholders. 

Building a data strategy is like spinning a flywheel. It’s cyclical, iterative, gradual, perpetual. There is no special line that, if crossed, will deem your organization “data-driven.” And likewise, there is no use in thinking of your data strategy as something binary, as if it were a building under construction that will one day be complete. The best thing you can do is focus on using your data to solve problems that are urgent, simple, tangible, and valuable. Assemble the people, processes, and technologies you need to tackle those problems. Then, move onto the next, and then the next, and then the next, allowing the elements of a vibrant data ecosystem to emerge along the way. You cannot will your data strategy into existence; you can only draw it in, by focusing on the flywheel. And when it appears, you, and everyone else, will know it. 


To create a more efficient and streamlined enterprise, businesses often find themselves tempted to bring in brand new systems that promise major improvements over the status quo. This can be a viable strategy in some cases – and it will impress stakeholders that prefer to shake things up. But it comes at a cost. Swapping out the old for the new will require heavy doses of training to get everyone up to speed, and that’s just for starters.

There’s another option – optimize what you currently have. In other words, your current systems may not deliver the best possible experience for your users, but you can change that.

Before deciding whether to acquire entire new systems or software, businesses should take full stock of existing systems and processes. The goal: to effectively understand where efficiencies can be created by just re-tooling what you already have.

This blog will examine how enterprises can take stock of their systems, and offer best practices for IT teams with small budgets looking to re-tool. We’ll also provide examples of rejigging a system to improve performance.

The Importance of Self-Analysis

As noted earlier, the first step involves taking a full, 360-degree view of internal systems to determine hurdles and how best to overcome them. Getting a download on the overall performance and status of systems can paint the picture of where processes are bogging down, and the complex reasons behind bottlenecks that create inefficiency.

This also includes asking questions such as: Is this technology being under- or over-utilized? Is it dormant or active? Is it still commissioned? Is it scheduled to be decommissioned? How much space is it taking up? These questions will help inform next steps: how to either move on, or re-tool for improved efficiency.

After conducting a full assessment of internal systems, teams must talk to their front-line workers and business users about what’s working and what’s not. It’s important to remember that a lot of IT and business leaders aren’t familiar with the day-to-day operations, meaning they are removed from the daily snarls and issues that users face. This can create knowledge gaps where leadership thinks a system is performing fine. But the frontline end user is dealing with a whole host of issues, such as bugs or system failures. This is what makes communication so important. 

Creating a culture of communication between frontline workers and leadership is paramount, as the end users can act as a real-time insight pipeline. One way to achieve this culture is to host regular standup meetings with employees, led by people who are already experts in using the systems. This allows direct access to a subject matter expert who is well-versed in the technical details, and who can offer ideas that may not have been tested before.  

Re-tooling Underway

After the thorough analysis is complete, IT teams can get to work re-tooling their systems to work in a more efficient, streamlined manner. Any processes and systems that directly impact customers and revenue should be prioritized, as they’re often handling the most crucial datasets for a business.

After re-tooling, it’s crucial to test the current performance against the previous performance to establish benchmarks. While in development, there should always be a testing/beta phase where the old and new processes work concurrently; this ensures feedback around efficiency and what is accepted among users. Often, businesses will introduce a new system without testing it alongside an existing process. This creates a situation where there is no benchmark, and the business may run into the same challenges as they had before.
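
One lightweight way to establish such a benchmark while the old and new processes run side by side is simply to time both on the same workload, as in this sketch; the two functions are stand-ins for real processes:

```python
# Sketch: benchmark an existing process against its re-tooled replacement on
# the same workload. The two functions are stand-ins for real processes.
import timeit

def old_process(data):
    # Placeholder for the legacy path: naive de-duplication via list scans.
    out = []
    for x in data:
        if x not in out:
            out.append(x)
    return out

def new_process(data):
    # Placeholder for the re-tooled path: same result, order-preserving dedup.
    return list(dict.fromkeys(data))

workload = list(range(1_000)) * 2

old_t = timeit.timeit(lambda: old_process(workload), number=10)
new_t = timeit.timeit(lambda: new_process(workload), number=10)

print(f"old: {old_t:.3f}s  new: {new_t:.3f}s  speedup: {old_t / new_t:.1f}x")
```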

If the re-tooled process is customer-facing, then the same questions that are asked internally regarding performance should also be asked to the consumer. Feedback from customers is essential, as they could move to a competitor if they’re unhappy with the new process. 

Getting by on Small Budgets

Smaller businesses with tighter budgets are often more likely to undertake re-tooling. Larger organizations typically have more capital to spend on new software and other services, but smaller organizations often must make do with what they have. 

One example: a customer that has decommissioned nodes and is looking to increase storage capacity. The company completed the assessment, realized their nodes are taking up physical space, and determined they need more storage. After the nodes have been decommissioned and the compute power removed, one way to re-tool and re-use that system is to employ a software-based storage solution. This way the company only pays for software licenses that run the storage solution off the decommissioned hosts.

As the existing solution was constrained by space and utilized only as a short-term backup solution, the new storage solution helped store long-term backups to meet backup compliance standards that had previously been postponed. This new approach also enhanced the company’s existing solution by improving throughput and reducing network bandwidth for end users. Creative thinking and self-analysis like this can help companies of any size – particularly smaller ones with smaller budgets.

Taking an honest look at what you have and re-tooling systems is a surefire way for businesses to continue driving innovation without breaking the bank. As businesses push to transform their operations, the time is now to look within at your systems and find ways to drive efficiency by re-tooling.

Learn more about HPE PointNext Tech Care here.

___________________________________

About Kyler Johnson

Kyler Johnson is a Master Technologist in HPE Pointnext, Global Remote Services. He has worked in technology for 10 years with a focus on innovation, automation, and consulting. Kyler strives to ensure his customers are highly satisfied for the present and future. When he is not focused on the customer, he enjoys reading, horseback riding, and gaming.


Enterprises have dramatically increased their use of cloud in the last two years – and cloud has emerged as a positive force for change. But there are still barriers to adoption.

More than 50% of organizations surveyed increased their usage of cloud because of the pandemic, and 92% accelerated faster than anticipated, according to Microsoft’s Hybrid & Multicloud Perceptions Survey.[1] Furthermore, more than 60% of customers plan further increases over the next 12-24 months, the survey found.

And yet Gartner reports that 50% of organizations surveyed will delay migration to the cloud due to insufficient cloud IaaS skills.[2]

The list of hybrid and multi-cloud challenges facing IT and business leaders is a long one:

Managing the existing IT estate and the complexities of a hybrid, multi-cloud landscape
Expanding and changing security, compliance, and resiliency requirements
Integration and data management requirements across the landscape
Governance across the expanded ecosystem

In short, the leading practices of yesterday and the requirements of tomorrow – from emerging expectations around remote work to accelerated technological adoption – are far from aligned. That’s where Kyndryl can help, offering end-to-end services to accelerate the cloud modernization journey. Capabilities include:

Minimize risk and improve success with an integrated building-block approach
Focus on security at the forefront of every engagement
Partnering to extend ecosystems for better outcomes
Tackle pressing business objectives in modern, digital ways via Kyndryl/Microsoft Joint Innovation Lab
Deep industry expertise and thousands of person-years’ experience
Modern automation, operations, management, and governance capabilities
Advanced delivery: intelligent operations with automated and standardized processes across hyperscaler native and traditional IT

Kyndryl follows a three-stage process:

Cloud consulting: Cloud workload assessment, strategy, and architecture services for plan and buildout of cloud solutions. Also offering cloud migration and modernization services.
Hybrid and multi cloud services: Provision, monitor, and manage IaaS, PaaS, and/or Container Platforms on any cloud environment (public, private, hybrid, or multi).
Modern Operations with Kyndryl Cloud Management Platform: Platform capabilities to provide improved operational discipline through financial governance and accountability for IT consumption on hyperscalers or private cloud. Preventive and predictive fault/incident management with AIOps.

Interested? Click here to learn more and download the Kyndryl Global Practice Overview.

[1] https://blogs.microsoft.com/wp-content/uploads/prod/2022/01/Microsoft-Cloud-Survey-Results-Final.pdf

[2] https://www.gartner.com/smarterwithgartner/4-trends-impacting-cloud-adoption-in-2020


With cloud becoming mainstream, customers and cloud providers have been seeking ways to optimize operational efforts and pay only for what they use. This need gave birth to the disruptive paradigm of serverless computing. AWS took the early lead back in 2014-15 with the launch of Lambda functions, creating a euphoria of interest in the IT world. So, what’s the promise, and where does serverless stand today?

While the initial hype has receded, serverless computing has continued to mature. Cloud adoption has surged, but the true potential of serverless extends beyond the cloud.

The promise of serverless

Serverless computing is an entire ecosystem of cloud services and functions, such as message queues, databases, logging, and authentication – all in a service version. It is not just Functions as a Service (FaaS).

The serverless architectural paradigm promises that all computing resource allocation, resource management, high availability, and fault tolerance will be handled by the cloud provider on behalf of the user. The consumer is not required to provision or scale any of the back-end servers, virtual machines, or platform services normally needed to run their code.
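
As a concrete example of the model, here is a minimal function in the AWS Lambda handler style: the code defines only a handler, and the provider handles provisioning, scaling, and fault tolerance. The event shape is an illustrative assumption:

```python
# Minimal serverless (FaaS) sketch in the AWS Lambda handler style: no servers
# to provision; the platform invokes the handler per event and scales it.
# The event shape here is an illustrative assumption.
import json

def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local smoke test (in production the cloud platform calls the handler):
if __name__ == "__main__":
    print(lambda_handler({"name": "TCS"}, None))
```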

There are subtle bottlenecks and also ‘show-stoppers’ for its adoption, however, which must be addressed decisively from a technological perspective.

The current state of serverless

Serverless is not necessarily the right choice for every use case. Using a serverless stack is not always a cost-saving option. Certain workloads require substantial computing resources, which makes the serverless model less cost-effective.

The way we see it, there will be steady growth in serverless adoption along with steady decline in dedicated server computing. However, we believe that both these computing technologies will co-exist for a long period.

Moving to a fully serverless architecture requires an organization to commit to a cloud provider and know it well enough to gain the desired results. Most large enterprises are not yet ready to fully support this transition.

What is TCS doing

At TCS, we are helping businesses seamlessly adopt cloud computing and go serverless. Here are some examples:

Serverless Enterprise Applicability & Sustainability Model (SEASM): Customers today are often uninformed about what can go serverless, how, and when. Within TCS, we have come up with a Serverless Enterprise Applicability & Sustainability Model (SEASM). This assessment model captures the current state, short-term goals, and long-term goals of the IT infrastructure. Business-specific needs are also considered. Further drill-down gives details of what is feasible and areas for prioritization. As technology evolves, the model will also evolve. Using this instrument, customers can carry out a periodic assessment (once or twice a year) and plan a realistic migration to serverless.
Creating serverless variants of prominent software platforms: Of all the prominent software platforms available today, only around 10% have serverless variants. We in TCS have started making a conscious effort to create serverless variants for the remaining 90% of the software platforms available. The objective is to address the open-source versions first, followed by complementary efforts of software vendors and cloud service providers.

The future of serverless

Ultra-agile businesses need serverless applications. Today, each cloud service provider is coming up with their own serverless technologies, which by design are mostly incompatible with each other. Also, a very limited number of software platforms have serverless variants available.

This creates an extraordinary gap, as well as a big opportunity. The initial hype of serverless computing is evolving into a more realistic stage, where customers can understand the risk-reward and cost-benefit angles practically. The dedicated-server era is receding, and now is the time for serverless computing to seize the space.

Author Bio

TCS

E-mail: nandkishor.mardikar@tcs.com

Nandkishor Mardikar is an Enterprise Architect and a proven Innovation Evangelist in the AWS Cloud unit of TCS. Nandkishor and his team help enterprises with full-stack application migration and modernization strategies, and with solutions that ease complex migration and refactoring efforts, saving customers precious time and effort on application migration. His core service areas are the complex technology domains of integration middleware, BI/analytics, big data, NoSQL, and overall data migration with improved cloud nativeness. He holds 7 patent grants in various digital areas across multiple jurisdictions. In his 30+ years of IT experience, Nandkishor has performed various roles, including Head of Technology Excellence, Chief Architect, QA Manager, and Relationship Manager.

To learn more, visit us here.


Among managed services providers (MSPs), comdivision stands out for many reasons, among them the depth of the company’s work with VMware. Not only is comdivision a VMware Principal Partner that has earned the VMware Cloud Verified distinction, but it’s also the only MSP globally to have earned all eight VMware Master Services Competencies.

Designed to recognize partners with deep expertise in specific VMware solutions areas, each Master Services Competency requires MSPs to attain advanced certifications, not just for the company, but for a set number of employees. References from customers are also required to demonstrate high-level service capabilities and performance.

Earning even one VMware Master Services Competency is difficult – comdivision has earned Master Services Competencies in Cloud Management and Automation, Cloud-Native Apps, Data Center Virtualization, Digital Workspace, Network Virtualization, VMware Cloud on AWS, VMware Cloud Foundation, and Software-Defined Wide Area Network (SD-WAN).

We recently caught up with Yves Sandfort, CEO of comdivision, to learn how he defines success as an MSP and how the company approaches its client engagements. We also took the opportunity to learn about Yves’s views on the role of IT and where he sees the greatest opportunities for enterprises to improve their relationships with MSPs and professional services partners.

“In comparison to many MSPs, we’re small,” says Sandfort. “But what we lack in size, we make up for in agility. It’s a real differentiator for us. For example, last year we were approached by the CIO of a German car manufacturer that needed to rapidly deploy a Desktop-as-a-Service Solution. We weren’t the only services provider capable of doing the work, but we were the ones that could do it the fastest and do it well. Speed and agility are the hallmarks of success of very experienced, highly knowledgeable, small teams.”

Notably, comdivision offers a full portfolio of highly customizable IT solutions and services. These include everything from the design of next-generation data centers to the development of cloud-native applications and the migration and management of multi-cloud deployments – all backed up by a full range of training services overseen by experienced and certified experts.

Based in Muenster, Germany, the company has offices in Munich and in Vienna, Austria, as well as in Tampa, Florida, in the U.S. Customers around the world rely on comdivision’s multi-faceted and proven cloud services.

“We are extremely flexible so that every customer decides the level of management they want us to oversee,” adds Sandfort. “We approach the cloud in much the same way. We help our customers determine which cloud makes the most sense for them whether that’s a private, public or multi-cloud approach.”

Sandfort stresses that flexibility is a crucial aspect of any client and MSP relationship.

“A managed service should by its very nature be a customized solution, not one hindered by an unchangeable off-the-shelf product or forced into a three-year contract the customer can’t change,” he says. “The pandemic and our collective experience over the past few years showed that IT must adapt to changes quickly. An MSP should help them do that.”

He adds that it’s crucial that enterprises remember that IT is a means to an end, not a result.

“In the end, it’s all about building a solution that matches and completes the customer’s IT and business requirements. That means identifying tasks that do not directly influence business value that can be managed externally so internal IT teams can focus on what really differentiates their business. Too often organizations fail to focus on IT that differentiates their business and helps it grow. For example, if you’re a furniture retailer you may not do better by creating your own exchange from scratch, but you may benefit from a more efficient point-of-sale system that uses the cloud and changes the way you do business. Business needs and desired outcomes should drive IT, not the other way around. Most lackluster or failed client and MSP relationships reflect a disconnect on this point.”

Not surprisingly, comdivision is growing – it is already a multi-million dollar business that is seeing 100% year-over-year growth – but when asked what he enjoys most about his work, Sandfort says it’s the same thing that first led him to comdivision more than 25 years ago.

“I still love it when a client or prospect approaches us with something they believe isn’t doable,” he says. “That’s when we can really shine as a team, think outside of the box and help them achieve their goals. It’s even sweeter when you go the extra step and deliver an extraordinary customer and user experience.”

Learn more about comdivision and its partnership with VMware here or listen to Kathleen Tandy’s interview with Yves on the VMware Partnership Perspectives podcast.
