By Serge Lucio, Vice President and General Manager, Agile Operations Division

This is a continuation of Broadcom’s blog series: 2023 Tech Trends That Transform IT.  Stay tuned for future blogs that dive into the technology behind these trends from more of Broadcom’s industry-leading experts.

Enterprise networks are undergoing a profound transformation. These changes are being driven by growing SaaS adoption, increasing workload migration to the cloud, and the need to support the expanding number of employees who work from anywhere.

Traditional enterprise wide area networks, or WANs, were designed primarily to connect remote branch offices directly to the data center. They rely on centralized security, enforced by backhauling traffic through the corporate data center, which impairs application performance and makes them expensive and inefficient. More importantly, WANs lack the flexibility and scalability that digital business requires.

Unlike traditional enterprise WANs, software-defined wide area network (SD-WAN) technology meets the complex requirements for fast, reliable access to cloud-based resources. For example, SD-WAN technology makes it possible for an enterprise employee to successfully connect to Microsoft 365 from home. Policy-based routing dynamically determines the best path for optimal performance as traffic traverses multiple internet service providers and systems for network access, SASE-secured connectivity, and cloud network access before reaching the data center server where Microsoft 365 is running.
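
To make the idea concrete, here is a minimal sketch of policy-based path selection in Python. It is purely illustrative and not Broadcom's implementation: the path names, metrics, weights, and loss limit are hypothetical, and a real SD-WAN appliance would draw on live measurements and far richer policies.

```python
from dataclasses import dataclass

@dataclass
class PathMetrics:
    """Most recent measurements for one candidate path (e.g., one ISP link)."""
    name: str
    latency_ms: float
    loss_pct: float
    jitter_ms: float

def score(path: PathMetrics, policy: dict) -> float:
    """Lower is better; weights come from the application's routing policy."""
    return (policy["w_latency"] * path.latency_ms
            + policy["w_loss"] * path.loss_pct
            + policy["w_jitter"] * path.jitter_ms)

def best_path(paths: list[PathMetrics], policy: dict) -> PathMetrics:
    # Drop paths that violate the hard loss limit, then pick the lowest score.
    eligible = [p for p in paths if p.loss_pct <= policy["max_loss_pct"]]
    return min(eligible or paths, key=lambda p: score(p, policy))

# Illustrative policy for a latency-sensitive SaaS application such as Microsoft 365.
saas_policy = {"w_latency": 1.0, "w_loss": 50.0, "w_jitter": 2.0, "max_loss_pct": 1.0}
candidates = [
    PathMetrics("isp-a-broadband", latency_ms=38, loss_pct=0.2, jitter_ms=4),
    PathMetrics("isp-b-lte", latency_ms=55, loss_pct=1.5, jitter_ms=12),
]
print(best_path(candidates, saas_policy).name)  # -> isp-a-broadband
```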

But SD-WAN’s reliance on the Internet can introduce new challenges, and new requirements for network observability and monitoring. Every ISP and system in the complex network path between users and cloud-based resources is a potential point of failure, most of which enterprise network operations teams do not own, manage, or even have visibility into.

On January 25, a minor error in a routine configuration change to a router at Microsoft caused a global network outage. This one minor error resulted in widespread connectivity issues for 90 minutes, leaving customers unable to reach Microsoft Teams, Outlook, SharePoint, and other applications. Situations like this can present a troubleshooting nightmare for enterprise network operations teams, who must address complaints from users but don’t have complete end-to-end visibility into the entire network path from the user to the data center.

Looking Ahead: How I See Network Operations Evolving

I believe that in 2023, SaaS adoption, workload migrations to cloud, and work-from-home initiatives will continue to drive enterprise network transformation. The internet will become an even more integral component of enterprise networks as organizations continue to augment or replace their legacy WANs, using SD-WAN technology to build high-performance chains of connectivity from lower-cost, commercially available internet access.

As enterprises continue to transform and modernize their networks to better meet the needs of digital business, they will need a new approach to network observability, and requirements for in-depth analysis and actionable insights will become increasingly critical.

In 2023 and beyond, effective network operations (NetOps) will demand more extensive coverage of user experience metrics than ever before. Network monitoring needs will expand beyond traditional managed networks to encompass unmanaged third-party networks. Experience-driven NetOps approaches will proliferate and become more tightly aligned with the network. Here’s more on how this transformation will progress.

User Experience Monitoring Will Become Imperative

In response to the increasingly complex connectivity demands of digital business, network architectures continue to evolve, and user experience monitoring has become an essential data source for NetOps. This is not surprising, since customer satisfaction and employee productivity remain among the top three business priorities for many organizations. For the network team, it’s no longer just about the traditional approach to monitoring network health. Teams need real-time insight into the state of the network and how changing network conditions are affecting user experience so they can react quickly and ensure the delivery of consistent, high-quality network services that support digital business success.

Monitoring Will Expand to the Edge and Beyond

As digital transformation goes full throttle, network operations must align with the business more quickly and closely. With nearly half of enterprise workloads projected to be deployed in cloud infrastructure this year, NetOps team responsibilities will extend to both the networks they own and the ones they don’t – including third-party networks like home networks, ISP networks, and cloud environments. This extension will address the visibility and control blind spots that teams confront with cloud and multi-cloud networking.

Experience-Driven Approaches Will Advance

Teams’ increasing need for better visibility and control of both managed and unmanaged networks will drive adoption of Experience-Driven Network Observability and Management solutions and approaches. With this approach, the network team can understand, manage, and optimize the performance of digital services regardless of the network they run on, and gain visibility into every communication path and degradation point across the entire user experience delivery chain.

The adoption of experience-driven approaches will increase as organizations seek to improve their ability to monitor and measure the user experience. Gartner expects that by 2026 at least 60% of I&O leaders will use Digital Experience Monitoring to measure application, service, and endpoint performance from the user’s viewpoint, up from less than 20% in 2021.

Applying active and continuous measurements can help network teams dramatically improve the network operations workflow. With these capabilities, they can effectively reduce false alarms, validate change and compliance, establish reliable visibility, and boost automation.
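
As a simple illustration of what active, continuous measurement can look like, the sketch below periodically probes a SaaS endpoint and flags latency that exceeds a service-level threshold. The target URL, probe interval, and threshold are assumptions made for the example, not recommendations.

```python
import time
import urllib.request

TARGET = "https://outlook.office365.com"  # illustrative SaaS endpoint
THRESHOLD_MS = 400                        # illustrative latency objective

def probe(url: str) -> float:
    """Issue one synthetic request and return the elapsed time in milliseconds."""
    start = time.perf_counter()
    urllib.request.urlopen(url, timeout=5).read(0)
    return (time.perf_counter() - start) * 1000

def measure_continuously(interval_s: int = 60) -> None:
    """Probe the target on a fixed interval and report degraded or failed checks."""
    while True:
        try:
            latency = probe(TARGET)
            status = "DEGRADED" if latency > THRESHOLD_MS else "ok"
            print(f"{time.strftime('%H:%M:%S')} {TARGET} {latency:.0f} ms [{status}]")
        except OSError as err:
            print(f"{time.strftime('%H:%M:%S')} {TARGET} unreachable: {err}")
        time.sleep(interval_s)

if __name__ == "__main__":
    measure_continuously()
```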

Monitoring Will Fuse Better With the Network

Experience-driven network monitoring tools and practices will become a seamless part of the network, translating volumes of disparate data (across network device performance, network and internet paths, alarms/faults/logs/configs, Cloud and SaaS application performance, network traffic flows and user experience metrics) into actionable insights about the current and future state of a network.

Outcomes That Matter

Moving forward, NetOps teams will be expected to deliver more value for the business, so they need to actively monitor and manage the network. By doing so, they’ll gain the detailed intelligence and actionable insights they need to assure network service delivery, and help the business reduce risks, optimize cost and resource efficiency, and boost revenue opportunities.

By delivering the right insights to the right team, they can quickly find and fix issues, improving mean time to resolution (MTTR) or proving the network’s innocence (mean time to innocence, MTTI) while enabling the responsible team to address the issue. NetOps teams will then be able to proactively prevent network problems before they degrade user experience and derail the business.

Your Next Steps

This year, your organization’s success will be increasingly reliant upon the success of transformation initiatives in such areas as cloud, SaaS, and digitization. The question then becomes “How do you know if your network is ready for the emerging demands of the digital business?” NetOps teams play a critical role in helping these initiatives – and the business – succeed in 2023. Unified insight into relevant network and digital experience metrics allows these teams to ensure that modern networks deliver optimized user experiences.

Broadcom can help boost your organization’s ability to manage evolving requirements for modern network technologies and support your current and future transformation initiatives. Visit our Experience-Driven NetOps page to learn more about how we are helping enterprise NetOps teams around the world to break down monitoring data silos, expedite issue remediation, and reduce operational complexities.

To learn more about why Broadcom predicts that in 2023, effective network operations (NetOps) will demand better end-to-end visibility, including more extensive coverage of user experience metrics than ever before, read the report here.

About Serge Lucio:

Broadcom Software

Serge Lucio is Vice President and General Manager of the Agile Operations Division at Broadcom. He is responsible for the company’s software solutions that help organizations to accelerate digital transformation and drive organizational agility.

IT Leadership, Networking

Companies’ core systems, business applications, and hosting environments all depend on the integrity of the file feeds they process — no matter the industry. When enterprises don’t effectively monitor their file feeds, damaged files can go undetected, and serious business consequences can — and do — occur.

This was the case for the U.S. Federal Aviation Administration (FAA) during the crucial winter holiday season of 2022. A corrupt file, also present in the FAA backup system, ultimately led to thousands of flight delays and cancellations across the U.S. — and many unhappy customers.

The glitch could have led to an even more dire outcome: the problem was in the central database, which also maintains Notices to Air Missions that inform pilots of issues along their course and at their destination.

The system meltdown is a stark reminder for organizations that it’s critical to evaluate IT infrastructure to prevent known vulnerabilities like easily corrupted files. Monitoring to make sure files are where they need to be, when they need to be there, must be a key part of the infrastructure that ensures the integrity of feeds — for anything from financial services companies to hospitals, says Rahul Kelkar, Vice President and Chief Product Officer at Digitate.

“The problem of not having the right files available at the right time, in the right format, and the right size is very legitimate, and it’s across most business industries,” Kelkar says. For example, in financial services, not being on top of file monitoring could lead to a delayed loan, a lost customer, a tarnished reputation, and ultimately an impact on profit.

Feeds become more vulnerable as files move through an enterprise. Delays in file feeds, manual errors, or missing data could cause a glitch and impact businesses adversely.

Traditionally, enterprises have approached this issue by putting in place a file-watching tool to monitor file feeds. A missing file raises a red flag, but multiple alerts might sound simultaneously, raising the risk that some go unnoticed. “This is not something that’s very helpful,” adds Kelkar.

“It’s very reactive and it doesn’t close the loop. The pain has already started by the time you get an alarm.”

Instead, companies need a more flexible, reliable, holistic, and streamlined tool that takes business cycles and seasonality into account. They need a complete, scalable solution that identifies an issue and remediates it before it becomes a problem.

“Just detecting is insufficient now,” says Kelkar. The ideal solution, he adds, is a closed-loop autonomous system.
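
To illustrate the closed-loop idea in the simplest possible terms, the sketch below checks whether an expected daily feed file arrived on time and intact, attempts an automated remediation step first, and escalates to a human only if that fails. The file path, deadline, size check, and remediation stub are hypothetical; this is a generic sketch, not how Digitate's product works internally.

```python
from pathlib import Path
from datetime import datetime

# Illustrative expectations for one daily feed.
FEED = {"path": Path("/data/inbound/payments_%Y%m%d.csv"),
        "deadline": "06:00", "min_bytes": 1024}

def feed_file_for_today() -> Path:
    return Path(datetime.now().strftime(str(FEED["path"])))

def feed_is_healthy(f: Path) -> bool:
    """Present, non-trivial size, and delivered before the business deadline."""
    if not f.exists() or f.stat().st_size < FEED["min_bytes"]:
        return False
    delivered = datetime.fromtimestamp(f.stat().st_mtime).strftime("%H:%M")
    return delivered <= FEED["deadline"]

def remediate(f: Path) -> bool:
    """Placeholder for automated remediation, e.g. re-pulling the file from its source."""
    print(f"re-requesting {f.name} from the source system...")
    return False  # pretend the automated retry did not succeed

def check_feed() -> None:
    f = feed_file_for_today()
    if feed_is_healthy(f):
        print(f"{f.name}: ok")
    elif remediate(f) and feed_is_healthy(f):
        print(f"{f.name}: recovered automatically")
    else:
        print(f"{f.name}: escalating to operations")  # only now does a human get involved

if __name__ == "__main__":
    check_feed()
```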

Part of Digitate’s ignio™ suite, the AI-based Business Health Monitoring Solution for File Feed proactively checks the health of file feeds across the organization and automatically diagnoses and resolves issues.

A dashboard provides end-to-end visibility and insight across business functions and helps with early detection of issues based on historical events, while prescribed or preventive actions improve the mean time to detect and resolve.

One Digitate customer in the financial space was struggling to complete key business processes when critical files weren’t delivered on time. With Digitate’s solution, they greatly streamlined how files are sent, checked, and received through various regions.

The dashboard gave the client a holistic picture and automated the process of detecting and remediating missing files. “Customer satisfaction went up, and they realized efficiencies,” Kelkar says.

In today’s rapidly changing IT landscape, file monitoring is vital to smooth business operations. Successful organizations have effective file feed monitoring with a closed-loop autonomous system as an integral part of their unified IT approach.

To learn more about Digitate ignio, visit Digitate.

Data Management

Over the last decade, many organizations have turned to cloud technologies on their journey to become a digital business. The advantages of multi-cloud are well-documented: efficiency, flexibility, speed, agility, and more. Yet without consistent, comprehensive management across all clouds – private, hybrid, public, and even edge – the intended benefits of multi-cloud adoption may backfire. Increasingly, multi-cloud operations have become a priority for organizations seeking to successfully navigate cloud adoption.

Today, not only are competitors more aggressive and margins tighter, but partners, customers, and employees also have higher expectations and greater demands. As organizations continue to adapt and accelerate service delivery, they need to look to modern management solutions to simplify and speed access to the infrastructure and application services teams need, when they need them – without increasing business risk. By deploying solutions that offer comprehensive capabilities with a common control plane, organizations can unify their multi-cloud operations to improve infrastructure and application performance, gain visibility into costs, and reduce configuration and operational risks.

Multi-Cloud Management Challenges

Almost every digital business today faces the issue of complexity when it comes to managing its multi-cloud operations. As organizations have moved away from managing single data centers to managing hybrid and native public clouds, the scope and scale of management have exploded into countless moving parts. Organizations can face complexity due to siloed infrastructure, user access policies, multiple APIs, billing, and lack of a formal operations plan that ensures processes and security remain consistent across multiple clouds.

With so many moving parts, organizations can also face siloed teams that in many cases have to contend with different cloud constructs, including different definitions of the infrastructure services provided by those clouds and different policies for security and compliance. In addition, teams often deploy different tools to manage their various clouds. A fragmented approach to operational priorities for cloud services makes it difficult to get a holistic view of how an organization uses cloud services, shares best practices, and ensures sufficient governance. Not only can disconnected operations impede an organization’s ability to run efficiently, but they can also hamper its ability to troubleshoot problems and recover from outages quickly. Siloed teams and the sprawl of management solutions can also result in a lack of visibility into cloud costs.

As we enter an uncertain market in 2023, organizations face increasing pressure to control their cloud spending. Complexity is the enemy of cost optimization. Because of different pricing structures across cloud provider services and siloed teams, organizations often struggle to predict future spend. In fact, according to a recent VMware study, more than 41% of organizations want to optimize their existing use of cloud to save on costs.
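
One small, illustrative step toward predictable spend is normalizing each provider's billing export into a common schema before aggregating it. The sketch below is a toy example with invented field names and figures; it is not a description of any vendor's cost tooling.

```python
from collections import defaultdict

# Illustrative billing exports; each provider uses its own field names and layout.
aws_rows = [{"service": "EC2", "cost_usd": 1200.0, "tag_team": "web"}]
azure_rows = [{"meter": "Virtual Machines", "billed_cost": 950.0, "tags": {"team": "web"}}]

def normalize(provider: str, row: dict) -> dict:
    """Map provider-specific billing fields onto one common schema."""
    if provider == "aws":
        return {"provider": "aws", "team": row["tag_team"], "cost": row["cost_usd"]}
    if provider == "azure":
        return {"provider": "azure", "team": row["tags"]["team"], "cost": row["billed_cost"]}
    raise ValueError(f"unknown provider: {provider}")

def spend_by_team(records: list[dict]) -> dict:
    """Aggregate normalized records so spend can be tracked and forecast per team."""
    totals = defaultdict(float)
    for r in records:
        totals[r["team"]] += r["cost"]
    return dict(totals)

unified = [normalize("aws", r) for r in aws_rows] + [normalize("azure", r) for r in azure_rows]
print(spend_by_team(unified))  # -> {'web': 2150.0}
```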

The Benefits of Unifying Multi-Cloud Operations

As organizations mature in their multi-cloud management strategies, they increasingly recognize that success depends on having comprehensive visibility. Organizations cannot manage what they can’t see. Although management solutions have been around for a very long time, comprehensive visibility has yet to be fully appreciated or achieved by businesses.

Cloud management powers your cloud operating model by helping you manage, control, and secure your cloud environments. A unified approach to multi-cloud management enables consistent and seamless operations across clouds, simplifying cloud adoption, streamlining app migrations, and accelerating modernization. An effective, efficient cloud operating model today requires a modern management solution such as VMware Aria. By bringing together a comprehensive visual map of multi-cloud environments with actionable management insights, VMware Aria enables organizations to leverage end-to-end intelligent operations across clouds to improve performance, lower costs and reduce risk.

As we enter a year of uncertainty, agility is the key to resilience and growth. No matter where organizations are in their cloud journey, a cloud operating model rolled out with modern, comprehensive, multi-cloud management solutions will bring consistency and efficiency to managing all types of clouds. More importantly, it will allow business leaders to quickly adapt, respond, and innovate on the fly, which will prove critical to staying competitive.

To learn more, visit us here.

Cloud Computing

Customers are increasingly demanding access to real-time data, and freight transportation provider Estes Express Lines is among the rising tide of enterprises overhauling their data operations to deliver it.

To fuel self-service analytics and provide the real-time information customers and internal stakeholders need to meet customers’ shipping requirements, the Richmond, VA-based company, which operates a fleet of more than 8,500 tractors and 34,000 trailers, has embarked on a data transformation journey to improve data integration and data management. Like many large organizations, prior to this effort, data at Estes Express Lines was spread across disparate data sources, which meant that each agile project team had to write its own code to access data from those source systems.

“Besides impacting customer experience, the absence of a seamless data integration and data management strategy was adversely affecting time to market and draining valuable human resources,” says Bob Cournoyer, senior director of data strategy, BI and analytics at Estes Express Lines.

Data woes impact business success

With shipping concerns coming under greater scrutiny, Estes Express Lines customers are increasingly interested in up-to-the-minute details about their shipments, such as expected charges, delivery time, and whether their goods have been damaged. While the company had a data warehouse, it was primarily used for analysis. Because it was batch-updated every 24 hours, it couldn’t deliver data in real time.

“Since the data was living everywhere — in the cloud, on prem, in multiple databases throughout the organization and even on desktops at some point — we were unable to fulfil the needs of our customers. It was frustrating for both the customers and those serving them,” says Cournoyer.

Pulling data from multiple sources and then sharing it in a common way was also taking a toll on the company’s IT department. “Our cloud-based systems are very specific and disparate in nature. For instance, we had Salesforce CRM to manage our customers and Oracle ERP for our back-office functions. A lot of times data from all the different systems needed to be combined into one, which was a tedious process. Users couldn’t self-serve themselves and we had to assign a resource to them to satisfy that need,” says Cournoyer.  

Under the old system, IT would have to write ITIL processes to source the requested data, which would then be moved to another database to be accessible to the business user, as opposed to giving a direct connection to the actual data source. “Every time somebody made a new request for a new piece of information, we had to touch the code and go through the entire testing lifecycle. It was frustrating for the business, to say the least,” Cournoyer says. “At one point, I had 15 people on my data team and seven of them were engaged only in data analysis.”

Those data bottlenecks also led to delayed time to market. “Whenever we needed to deliver a solution that was going to add value to the business, we had to build in all the extra time needed to source data and do data analysis, potentially write code. Depending upon the complexity, this could add six to eight weeks to a project,” he says.

In addition to these challenges that urgently warranted a data management platform, Estes also had a mission to reduce technical debt. As Cournoyer says, “We didn’t want to keep digging the hole deeper. Copying and moving data has its own costs associated with it and we wanted to do away with it.”

Future-proofing Estes Express’ data strategy

Considering these challenges, Cournoyer set about developing a data strategy aimed at making data available to internal business users and IT systems in real-time without creating any technical debt.

“To start with, the entire IT department was reorganized. The data team was decoupled, and all the data analysts were formed into agile teams so that they could support whatever the data needs would be. We then started our exploration for a platform to solve the data problem,” Cournoyer says. 

Estes Express Lines evaluated all the big players, including IBM, before deciding to leverage Denodo’s logical data fabric to access all its enterprise data and have it available in one central location.

“Before deploying the solution, we decided to do a six-week proof of concept. We picked a couple of key areas of our data that were the most requested in the company and virtualized them, which formed about 10% of our entire data universe. We built and delivered some APIs on top of it within the six-week timeframe, and we did it with the internal team that had never seen the system before. That’s how easy it was to learn and use the new solution,” he says.

At the end of the six weeks, Cournoyer and his team “were able to approve two or three key concepts back to the business,” and the proof-of-concept work was rolled over to the next project. “During this time, we were able to map over 50% of all our data and started to use some of the more advanced features of the product. Now, a year and a half later, we’re well versed in it,” he says, adding that the freight transportation provider now has “well over 90% of the data in the organization completely mapped.”

While Estes Express chose an on-prem implementation because it still has a large presence of operational data on premises, the data fabric covers all the company’s internal and cloud-based data sources, delivering real-time data consistency by establishing a single source of truth.

Ramping up CX, slashing time to market

With the logical data fabric in place, powered by data virtualization, Estes Express is now able to manage, integrate, and deliver data to any user, in real-time, regardless of the location and format of the source data.
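
The core idea of data virtualization can be illustrated with a toy example: a single "virtual view" answers a request by querying the live source systems at call time instead of copying their data into a warehouse first. The sketch below is a simplified illustration with invented tables, not Denodo's implementation.

```python
import sqlite3

# Two illustrative "source systems": a CRM store and an ERP store, each with its own data.
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
crm.execute("INSERT INTO customers VALUES (1, 'Acme Freight')")

erp = sqlite3.connect(":memory:")
erp.execute("CREATE TABLE invoices (customer_id INTEGER, amount REAL)")
erp.execute("INSERT INTO invoices VALUES (1, 4200.0)")

def virtual_customer_view(customer_id: int) -> dict:
    """Federate one request across both live sources; nothing is copied or batch-loaded."""
    name = crm.execute("SELECT name FROM customers WHERE id=?", (customer_id,)).fetchone()[0]
    amounts = [row[0] for row in
               erp.execute("SELECT amount FROM invoices WHERE customer_id=?", (customer_id,))]
    return {"customer": name, "open_invoices": len(amounts), "total_due": sum(amounts)}

print(virtual_customer_view(1))
# -> {'customer': 'Acme Freight', 'open_invoices': 1, 'total_due': 4200.0}
```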

“Our customer care representatives now have information at their fingertips and no longer fumble or search for it. This ability to deliver value back to our customers and to our internal customers as well has been huge. Unprecedented insight into where shipments are and how they are moving through systems provides an optimal customer experience,” Cournoyer says.

“We measure the sentiments of our customers through a third-party company. They have come back and told us our numbers have gone up. Besides, we can analyze customer scores and perform sentiment analysis to adjust offerings to better the customer experience,” he says.

The new data strategy has also reduced the time to market. “It used to take us weeks, and months in some cases, to deliver solutions. We can now do it in days and even in hours. Reduction in time to market helped us deliver data faster to applications and business users and has also reduced our labor cost by 10%,” he says. By providing centralized, consistent data to all projects, post-deployment issues have also come down, saving the company time and resources.

The IT department no longer needs to move and store data, which has reduced the company’s technical debt by cutting down the number of SQL databases, lowering license and storage costs.

The new strategy has also helped Estes Express bring API development back in-house. “We were paying a third-party company to build APIs for us. It used to take us six to eight weeks to get an API but if the requirements changed in the middle of that cycle, they had to go back and reset. With this new data platform, we built a couple of APIs in two hours. I don’t know how to put a number on that but our reliance on third parties to build APIs has gone way down, which has been a huge cost savings for us,” Cournoyer says, adding that the data fabric–based strategy has also laid the foundation for the company’s new data governance program.

Data Governance, Data Management

Technologies like the Internet of Things (IoT), artificial intelligence (AI), and advanced analytics provide tremendous opportunities to increase efficiency, safety, and sustainability. However, for businesses with operations in remote locations, the lack of public infrastructure, including cloud connectivity, often places these digital innovations out of reach.

Until recently, this has been the predicament of oil and gas companies operating oil wells, pipelines, and offshore rigs in remote, hard-to-reach locales. But the arrival of private 5G for oil and gas has changed this. Here’s how private 5G is transforming oil and gas operations in the field.

Secure bandwidth & real-time monitoring in remote locales

5G is a hardened telco network environment that provides one of the most secure networks in the world. Using this same technology, private 5G delivers an ultra-secure, restricted-access mobile network that gives businesses reliable connectivity and bandwidth to support their data transmission needs.

Private 5G enables a transportable “network-in-a-box” solution that can be relocated to provide connectivity and bandwidth in remote locations. This self-contained network offers the low-latency connectivity needed to configure, provision, and monitor a network. Furthermore, private 5G is also incredibly reliable, especially compared to traditional Wi-Fi, enabling superior communications and bandwidth-intensive, edge-to-cloud data transmission.

Increased productivity and efficiency

This highly reliable network solution is transforming oil and gas companies, which rely on heavy equipment with lots of moving parts, often running 24×7. By implementing intelligent IoT solutions that track vibrations, odors, and other conditions, oil and gas companies can monitor distributed, remote sites and equipment from a central location.

This is a game changer from an efficiency and productivity standpoint. For example, private 5G accelerates time to production for remote locations by eliminating the cost and time associated with coordinating with telco to build infrastructure. Additionally, private 5G helps oil and gas companies keep sites running smoothly, combining IoT solutions with AI and machine learning to enable predictive maintenance. This reduces costly equipment breakdowns and repairs, minimizes operational disruptions, and extends the life of hardware.
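
As a simplified illustration of how such predictive maintenance logic might work, the sketch below learns a "normal" band from healthy vibration readings and flags equipment that drifts outside it. The sensor values, units, pump names, and three-sigma rule are assumptions made for the example.

```python
import statistics

def learn_baseline(history: list[float]) -> tuple[float, float]:
    """Mean and standard deviation of vibration readings from a known-healthy period."""
    return statistics.mean(history), statistics.pstdev(history)

def needs_maintenance(reading: float, baseline: tuple[float, float], k: float = 3.0) -> bool:
    """Simple k-sigma drift rule: flag readings far outside the healthy band."""
    mean, std = baseline
    return abs(reading - mean) > k * std

healthy_history = [0.42, 0.45, 0.43, 0.44, 0.41, 0.46]   # illustrative vibration data (mm/s)
baseline = learn_baseline(healthy_history)

for pump_id, latest in {"pump-07": 0.44, "pump-12": 0.71}.items():
    if needs_maintenance(latest, baseline):
        print(f"{pump_id}: schedule inspection (reading {latest} mm/s outside normal band)")
    else:
        print(f"{pump_id}: normal")
```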

Furthermore, private 5G enables operators to remotely diagnose issues, upgrade firmware and machinery, and perform maintenance. This decreases the need for travel and the number of crews in the field and reduces equipment downtime.

Private 5G enables improved safety and sustainability

Private 5G supports advanced solutions that boost workplace safety. Oil and gas companies can apply intelligent edge solutions to monitor for security breaches and safety hazards. IoT sensors can detect gas and equipment leaks, temperature fluctuations, and vibrations to avoid catastrophic events and keep employees safe.

From a sustainability standpoint, private 5G enables solutions that help prevent oil and gas leaks, reducing environmental impacts. Furthermore, oil and gas companies can implement smart solutions that minimize energy and resource usage and reduce emissions in the field.

Unlock the potential of private 5G

Private 5G is transforming oil and gas operations as well as businesses in other industries with remote, hard-to-reach operations. As an award-winning, international IT solutions provider and network integrator, GDT can help your organization design and implement an HPE private 5G solution to meet your specific needs.

HPE brings together cellular and Wi-Fi for private networking across multiple edge-to-cloud use cases. HPE’s private 5G solution is based on HPE 5G Core Stack, an open, cloud-native, container-based 5G core network solution.

To discover how private 5G can transform your business, contact the experts at GDT for a free consultation.

5G

The air travel industry has dealt with significant change and uncertainty in the wake of the COVID-19 pandemic. In 2020, JetBlue Airways decided its competitive advantage depended on IT — in particular, on transforming its data stack to consolidate data operations, operationalize customer feedback, reduce downstream effects of weather and delays, and ensure aircraft safety.

“Back in 2020, the data team at JetBlue began a multi-year transformation of the company’s data stack,” says Ashley Van Name, general manager of data engineering at JetBlue. “The goal was to enable access to more data in near real-time, ensure that data from all critical systems was integrated in one place, and to remove any compute and storage limitations that prevented crewmembers from building advanced analytical products in the past.”

Prior to this effort, JetBlue’s data operations were centered on an on-premises data warehouse that stored information for a handful of key systems. The data was updated on a daily or hourly basis depending on the data set, but that still caused data latency issues.

“This was severely limiting,” Van Name says. “It meant that crewmembers could not build self-service reporting products using real-time data. All operational reporting needed to be built on top of the operational data storage layer, which was highly protected and limited in the amount of compute that could be allocated for reporting purposes.”

Data availability and query performance were also issues. The on-premises data warehouse was a physical system with a pre-provisioned amount of storage and compute, meaning that queries were constantly competing with data storage for resources.

“Given that we couldn’t stop analysts from querying the data they needed, we weren’t able to integrate as many additional data sets as we may have wanted in the warehouse — effectively, in our case, the ‘compute’ requirement won out over storage,” Van Name says.

The system was also limited to running 32 concurrent queries at any one time, which created a queue of queries on a daily basis, contributing to longer query run-times.

The answer? The Long Island City, N.Y.-based airline decided to look to the cloud.

Near real-time data engine

JetBlue partnered with data cloud specialist Snowflake to transform its data stack, first by moving the company’s data from its legacy on-premises system to the Snowflake data cloud, which Van Name says greatly alleviated many of the company’s most immediate issues.

JetBlue’s data team then focused on integrating critical data sets that analysts had not previously been able to access in the on-premises system. The team made more than 50 feeds of near real-time data available to analysts, spanning the airline’s flight movement system, crew tracking system, reservations systems, notification managers, check-in systems, and more. Data from those feeds is available in Snowflake within a minute of being received from source systems.

“We effectively grew our data offerings in Snowflake to greater than 500% of what was available in the on-premise warehouse,” Van Name says.

JetBlue’s data transformation journey is just beginning. Van Name says moving the data into the cloud is just one piece of the puzzle: The next challenge is ensuring that analysts have an easy way to interact with the data available in the platform.

“So far, we have done a lot of work to clean, organize, and standardize our data offerings, but there is still progress to be made,” she says. “We firmly believe that once data is integrated and cleaned, the data team’s focus needs to shift to data curation.”

Data curation is critical to ensuring analysts of all levels can interact with the company’s data, Van Name says, adding that building single, easy-to-use “fact” tables that can answer common questions about a data set will remove the barrier to entry that JetBlue has traditionally seen when new analysts start interacting with data.
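
As a small illustration of what a curated "fact" table can look like (assuming pandas is available; the flight numbers and schemas below are invented, not JetBlue's), the sketch pivots raw movement events into one easy-to-query row per flight with a precomputed delay column.

```python
import pandas as pd

# Illustrative raw events from a movement feed (invented schema and values).
movements = pd.DataFrame({
    "flight": ["B6100", "B6100", "B6200"],
    "event":  ["scheduled_dep", "actual_dep", "scheduled_dep"],
    "ts":     pd.to_datetime(["2023-03-01 09:00", "2023-03-01 09:25", "2023-03-01 10:00"]),
})

# Pivot raw events into one easy-to-query "fact" row per flight, with delay precomputed.
fact_departures = (
    movements.pivot(index="flight", columns="event", values="ts")
             .assign(delay_min=lambda df:
                     (df["actual_dep"] - df["scheduled_dep"]).dt.total_seconds() / 60)
             .reset_index()
)
print(fact_departures[["flight", "delay_min"]])  # B6200 has no actual departure yet -> NaN
```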

In addition to near real-time reporting, the data is also serving as input for machine learning models.

“In addition to data curation, we have begun to accelerate our internal data science initiatives,” says Sai Pradhan Ravuru, general manager of data science and analytics at JetBlue. “Over the past year and a half, a new data science team has been stood up and has been working with the data in Snowflake to build machine learning algorithms that provide predictions about the state of our operations, and also enable us to learn more about our customers and their preferences.”

Ravuru says the data science team is currently working on a large-scale AI product to orchestrate efficiencies at JetBlue.

“The product is powered by second-degree curated data models built in close collaboration between the data engineering and data science teams to refresh the feature stores used in ML products,” Ravuru says. “Several offshoot ecosystems of ML products form the basis of a long-term strategy to fuel each team at JetBlue with predictive insights.”

Navigating change

JetBlue shifted to Snowflake nearly two years ago. Van Name says that over the past year, internal adoption of the platform has increased by almost 75%, as measured by monthly active users. There has also been a greater than 20% increase in the number of self-service reports developed by users.

Ravuru says his team has deployed two machine learning models to production, relating to dynamic pricing and customer personalization. Rapid prototyping and iteration are giving the team the ability to operationalize data models and ML products faster with each deployment.

“In addition, curated data models built agnostic of query latencies (i.e., queries per second) offer a flexible online feature store solution for the ML APIs developed by data scientists and AI and ML engineers,” Ravuru says. “Depending on the needs, the data is therefore served up in milliseconds or batches to strategically utilize the real-time streaming pipelines.”

While every company has its own unique challenges, Van Name believes adopting a data-focused mindset is a primary building block for supporting larger-scale change. It is especially important to ensure that leadership understands the current challenges and the technology options in the marketplace that can help alleviate those challenges, she says.

“Sometimes, it is challenging to have insight to all of the data problems that exist within a large organization,” Van Name says. “At JetBlue, we survey our data users on a yearly basis to get their feedback on an official forum. We use those responses to shape our strategy, and to get a better understanding of where we’re doing well and where we have opportunities for improvement. Receiving feedback is easy; putting it to action is where real change can be made.”

Van Name also notes that direct partnership with data-focused leaders throughout the organization is essential.

“Your data stack is only as good as the value that it brings to users,” she says. “As a technical data leader, you can take time to curate the best, most complete, and accurate set of information for your organization, but if no one is using it to make decisions or stay informed, it’s practically worthless. Building relationships with leaders of teams who can make use of the data will help to realize its full value.”

Analytics, Cloud Computing, Data Management

In 2016, Major League Baseball’s Texas Rangers announced it would build a brand-new state-of-the-art stadium in Arlington, Texas. It wasn’t just a new venue for the team, it was an opportunity to reimagine business operations.

The old stadium, which opened in 1992, provided the business operations team with data, but that data came from disparate sources, many of which were not consistently updated. The new Globe Life Field not only boasts a retractable roof, but it produces data in categories that didn’t even exist in 1992. With the new stadium on the horizon, the team needed to update existing IT systems and manual business and IT processes to handle the massive volumes of new data that would soon be at their fingertips.

“In the old stadium, we just didn’t have the ability to get the data that we needed,” says Machelle Noel, manager of analytic systems at the Texas Rangers Baseball Club. “Some of our systems were old. We just didn’t have the ability that we now have in this new, state-of-the-art facility.”

The new stadium, which opened in 2020, was a chance to develop a robust and scalable data and analytics environment that could provide a foundation for growth with scalable systems, real-time access to data, and a single source of truth, all while automating time-consuming manual processes.

“We knew we were going to have tons of new data sources,” Noel says. “Now what are we going to do with those? How are we going to get them? Where are we going to store them? How are we going to link them together? Moving into this new building really catapulted us into a whole new world.”

Driving better fan experiences with data

Noel had already established a relationship with consulting firm Resultant through a smaller data visualization project. She decided to bring Resultant in to assist, starting with the firm’s strategic data assessment (SDA) framework, which evaluates a client’s data challenges in terms of people and processes, data models and structures, data architecture and platforms, visual analytics and reporting, and advanced analytics. Resultant then provided the business operations team with a set of recommendations for going forward, which the Rangers implemented with the consulting firm’s help.

Noel notes that her team is small, so the consultancy helped by providing specific expertise in certain areas, like Alteryx, which is the platform the team uses for ETL.

Resultant recommended a new, on-prem data infrastructure, complete with data lakes, to provide stakeholders with a better way to manage data reliability, accuracy, and timeliness. The process included co-developing a comprehensive roadmap, project plan, and budget with the business operations team.

“At the old stadium, you’d pull up at the park and you’d give somebody your $20 to park and they would put that $20 in their fanny pack,” says Brian Vinson, client success leader and principal consultant at Resultant. “Then you’d get to the gate and show them your paper ticket. They would let you in and then you would go to your seat, then maybe you’d go buy some concessions. You’d scan your credit card to get your concessions or your hat, or pay cash, and the team wouldn’t see that report until the next day or the next week.”

In those days, when a game ended, it was time for business operations to get to work pulling data and preparing reports, which often took hours. 

Resultant helped the Rangers automate that task, automatically generating that report within an hour of a game’s completion. The new environment also generates near real-time updates that can be shared with executives during a game. This allows the operations team to determine which stadium entrances are the busiest at any given time so they can better distribute staff, promotional items, and concession resources. Departments can see what the top-selling shirts (and sizes) are at any given time, how many paper towels are left in any given restroom, even how many hot dogs are sold per minute.
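
A toy version of that kind of real-time entrance analysis might look like the sketch below, which counts digital-ticket scans per gate over a trailing window so staff can be redeployed while the game is still under way. The gate names, timestamps, and window size are invented for illustration.

```python
from collections import Counter
from datetime import datetime, timedelta

# Illustrative digital-ticket scan events: (gate, scan time).
scans = [
    ("North Gate", datetime(2023, 4, 1, 18, 40)),
    ("North Gate", datetime(2023, 4, 1, 18, 52)),
    ("East Gate",  datetime(2023, 4, 1, 18, 55)),
    ("North Gate", datetime(2023, 4, 1, 19, 5)),
]

def busiest_gates(scans, now, window_minutes=30):
    """Count entries per gate over the trailing window so staffing can shift in time."""
    cutoff = now - timedelta(minutes=window_minutes)
    recent = Counter(gate for gate, ts in scans if ts >= cutoff)
    return recent.most_common()

print(busiest_gates(scans, now=datetime(2023, 4, 1, 19, 10)))
# -> [('North Gate', 3), ('East Gate', 1)]
```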

“With digital ticketing and digital parking passes, we know who those people are, and we can follow the lifecycle of someone from when they come into the lot and which gate they came in,” Noel says. “We can see how full different sections get at what point in time.”

The team can also use the data to enhance the fan experience. A system the Rangers call ‘24/7’ logs all incidents that occur during an event — everything from spill clean-up and emptying the trash to replacing a lightbulb or providing medical assistance. This system helped the operations team notice that there was a problem with broken seats in the stadium and approach their vendor with the data.

“We were able to take the data from that system and determine that we actually had a quality control problem with a lot of our new seats,” Noel says.  “We were able to proactively replace all the seats that were potentially in that batch. That enhances the fan experience because they’re not coming into a broken seat.”

Lessons learned

Noel and Vinson agree that one of the biggest lessons learned from the process is that it’s important to share successes and educate stakeholders about the art of the possible.

“The idea that ‘if you build it, they will come,’ does not always work, because you can build stuff and people don’t know about it,” Vinson says. “In the strategic data assessment, when people were like, ‘Oh, you can show us the ice cream sales?’ Yeah. I think you have to toot your own horn that, yes, we have this information available.”

When the business operations team first presented the new end-of-game report in an executive meeting, the owners asked to be included. Now, Noel says, they want it for every game, every event, every concert.

“Now, when we do a rodeo and it doesn’t come out when they expect it, they’re like, ‘Okay, where are my numbers?’ They want that information,” she says.

Analytics, Data Management

The education sector in the UK is seeing incredible transformation with the expansion of multi-academy trusts (MATs) and the government’s requirement to have all schools in MATs by 2030. This brings unprecedented challenges, but also an enormous opportunity for positive education reform.

Core to this challenge for MATs is the management of financial operations, budgets, and funding across large numbers of schools. Their ability to grow has been impeded by legacy accounting solutions, making it an expensive and lengthy process to set up, onboard, and report on new schools as they are brought into the trust.

Sage, an Amazon Web Services (AWS) partner, is a world leader in financial technology. Sage Intacct is next-generation accounting software that enables the transformation and scaling of financial operations that MATs will need to perform.

Trusts need to consider four key topics when transforming their complex accounting and reporting operations: scale and expansion, automation, integration, and reporting.

Trusts need to grow, scale, and expand. Having a system that can support the simple and fast addition of new schools, or other entities, is critical to successful expansion. Modern systems like Sage Intacct allow this to be done in minutes, removing expensive setup costs and the wait for consultants to deliver.

Next up is automation, probably the greatest tool in your arsenal to mitigate the time and cost of financial operations. Leveraging automation, Sage Intacct can help reduce the day-to-day admin of your finance team, alleviating manual jobs and using technology such as optical character recognition (OCR) to accurately read and import financial documents.

The arrival of cloud accounting opened the gate to integrated systems and harmonising of processes and data. External applications such as forecasting tools, accounts payable (AP) processing and approval management all allow for huge savings in time and offer improved technology solutions. It’s possible to integrate bank accounts and have daily transaction feeds, saving your team yet another job of importing and matching bank data.

Finally, powerful, fast and accurate reporting is the pièce de résistance of your new accounting platform. Out-of-the-box Education and Skills Funding Agency and Department for Education reports ensure you have all the right data required for government reporting. And multi-entity consolidation allows you to have complete oversight of the trust’s financials.

Sage Intacct is built to support growing Trusts who need to minimise complexity and maximise impact.

Trusts that want to reduce the cost of financial operations and unlock the challenges of scale and growth need to start by reviewing their accounting infrastructure with expert help. Sage works closely with its education partner, ION, to help deliver the smartest and most intelligent financial solutions to schools. ION has delivered Sage Intacct for education to multiple MATs, implementing the game-changing software, training staff on how to use it, and providing world-class support. This partnership sets the trust up for success, allowing the focus to be on education and growth, not on losing time managing finances.

To find out more about the benefits of cloud accounting software for multi-academy trusts, click here.

Education Industry, Financial Services Industry

The end of the Great Resignation — the latest buzzword referring to a record number of people quitting their jobs since the pandemic — seems to be nowhere in sight.

“New employee expectations, and the availability of hybrid arrangements, will continue to fuel the rise in attrition. An individual organization with a turnover rate of 20% before the pandemic could face a turnover rate as high as 24% in 2022 and the years to come,” says Piers Hudson, senior director in the Gartner HR practice.

The Global Workforce Hopes and Fears Survey, conducted by PwC, predicts that one in five workers worldwide may quit their jobs in 2022, with 71% of respondents citing salary as the major driver for changing jobs.

The challenge for IT leaders is clear: With employees quitting faster than they can be replaced, the rush to hire the right talent is on — so too is the need to retain existing IT talent.

But for Kapil Mehrotra, group CTO at National Collateral Management Services (NCMS), high turnover presented an opportunity to cut costs of the IT department, streamline its operations, and find a long-term solution to the perpetual skills scarcity problem.

Here’s how Mehrotra transformed the Great Resignation into a new approach for staffing and skilling up the commodity-based service provider’s IT department.

Losing 40% of domain expertise in one month

From an IT infrastructure standpoint, NCMS is 100% on the cloud. The company’s IT department comprised 27 employees, with one person each handling business analytics and cybersecurity, and the rest of the team split between handling infrastructure and applications. The applications had been transformed into SaaS and PaaS environments.

With a scarcity of experienced and skilled resources in the market and companies willing to poach developers to fulfill their needs, it was just a matter of time before NCMS, too, saw churn in its IT department.

“In March, 10 of the 27 employees from the IT department resigned when they received job offers with substantial hikes. At that time, application migration was under way, and our supply chain software was also getting a major upgrade. The sudden and substantial drop of 40% in the department’s strength made a significant impact on several such high-priority projects,” says Mehrotra.

“Those who left included an Android expert and specialists in the fields of .Net and IT infrastructure. As the company had legacy systems, it became tough to hire resources that could manage them. Nobody wanted to deal with legacy solutions. The potential candidates would convey their inability to work on such systems by showing their certifications on newer versions of the solutions,” he says.

Besides, the few skilled resources available for hire were expecting exorbitant salaries. “This would have not only impacted our budget but would have also created an imbalance in the IT department. HR wanted to maintain the equilibrium that would have otherwise got disturbed had we hired someone at a very high salary compared to existing team members who had been in the company for years,” says Mehrotra.

Nurturing fresh talent in-house

So, while most technology leaders were scouting for experienced and skilled resources, Mehrotra decided to hire fresh talent straight from nearby universities. Immediately after the employees quit, he went to engineering colleges in Gurgaon and shortlisted 20 to 25 CVs. Mehrotra eventually hired four candidates, taking the depleted IT department’s head count to 21.

But Mehrotra now had two challenges at hand: He had to train the freshers and kickstart the pending high-priority projects as soon as possible.

“I told the business that we wouldn’t be able to take any new requirements from them for the next three months. This gave us the time to groom the freshers. We then got into a task-based contract with the outgoing team members. As per the contract, the team members who had exited were to complete the high-priority projects over the next months at a fixed monthly payout. If the project spilled over to the next month, there would be no additional payout,” Mehrotra says.

“Adopting this approach not only enabled completion of the projects hanging in limbo, but also provided the freshers with practical and hands-on training. The ex-employees acted as mentors for the freshers, who were asked to write code and do research. All this helped the new employees in getting a grip on the company’s infrastructure,” he says.

In addition, Mehrotra also got the freshers certified. “One got certified on .Net while another on Azure DevOps,” says Mehrotra.

New recruits help slash costs, streamline operations

The strategy of bringing first-time IT workers onboard has helped Mehrotra in slashing salary costs by 30%. “The new hires have come at a lower salary and have helped us in streamlining the operations. We are getting 21 people to do the work that was earlier done by 27 people. The old employees used to work in a leisurely manner. They used to enter office late, open their laptops at 11 a.m., and take regular breaks during working hours. The commitment levels of freshers are higher, and they stay in a company for an average of three years,” says Mehrotra.

After three months of working with the mentors, the freshers came up to speed. “We started taking requirements from business. The only difference working with freshers is that as an IT leader, I have stepped up and taken more responsibility. I make sure that I participate even in normal meetings to avoid any conflicts. Earlier what got completed in one day is currently taking seven days to complete. Therefore, we take timelines accordingly. We are currently working at 70% of our productivity and expect to return to 100% in the next three months,” says Mehrotra.

Sharing his learnings with other IT leaders, he says, “There will always be a skills scarcity in the market, but the time has come to break this chain. Hiring resources at ever-increasing salaries is not a sustainable solution. The answer lies in leveraging freshers. Just like big software companies, CIOs also must hire, train, and retain freshers. We must nurture good resources in-house to bridge the skills gap.” Mehrotra is now back to hiring and has approached recruitment consultants with a mandate to fill 11 positions, which are open to all, including candidates with as little as six months to a year’s experience.

IT Skills

Good cyber hygiene helps the security team reduce risk. So it’s not surprising that the line between IT operations and security is increasingly blurred. Let’s take a closer look.

One of the core principles in IT operations is “you can’t manage what you don’t know you have.” By extension, you also can’t secure what you don’t know you have. That’s why visibility is important to IT operations and security. Another important aspect is dependency mapping. Dependency mapping is part of visibility, showing the relationships between your servers and the applications or services they host.

There are many security use cases where dependency mapping comes into play. For example, if there’s a breach, dependency mapping offers visibility into what’s affected. If a server is compromised, what is it talking to? If it must be taken offline, what applications will break?
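
A minimal sketch of a dependency map might represent those relationships as a simple graph and answer both questions directly. The application and server names below are invented for illustration; a real map would be discovered automatically and be far larger.

```python
# Illustrative dependency map: which applications rely on which servers.
depends_on = {
    "billing-app":      ["db-01", "cache-02"],
    "customer-portal":  ["db-01", "web-03"],
    "reporting":        ["db-02"],
}

def impacted_apps(server: str) -> list[str]:
    """If this server is compromised or taken offline, which applications break?"""
    return [app for app, servers in depends_on.items() if server in servers]

def peers_of(server: str) -> set[str]:
    """What else does a compromised server 'talk to' through shared applications?"""
    peers = set()
    for app in impacted_apps(server):
        peers.update(depends_on[app])
    peers.discard(server)
    return peers

print(impacted_apps("db-01"))  # -> ['billing-app', 'customer-portal']
print(peers_of("db-01"))       # -> {'cache-02', 'web-03'}
```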

To further erase the line between IT operations and security, many operations tools have a security dimension as well.

What is good cyber hygiene?

Good cyber hygiene is knowing what you have and controlling it. Do you have the licenses you need for your software? Are you out of compliance and at risk for penalties? Are you paying for licenses you’re not using? Are your endpoints configured properly? Is there software on an endpoint that shouldn’t be there? These questions are all issues of hygiene, and they can only be answered with visibility and control. 

To assess your cyber hygiene, ask yourself:

What do you have?
Is it managed?
Do managed endpoints meet the criteria set for a healthy endpoint?

Think of endpoints in three categories: managed, unmanaged and unmanageable. Not all endpoints are computers or servers. That’s why good cyber hygiene requires tools that can identify and manage devices like cell phones, printers and machines on a factory floor.

There is no single tool that can identify and manage every type of endpoint. But the more visibility you have, the better your cyber hygiene. And the better your risk posture.

Work-from-home (WFH) made visibility much harder. If endpoints aren’t always on the network, how do you measure them? Many network tools weren’t built for that. But once you know what devices you have, where they are and what’s on them, you can enforce policies that ensure these devices behave as they should.

You also want the ability to patch and update software quickly. When Patch Tuesday comes around, can you get critical patches on all your devices in a reasonable time frame? Will you know in real time what’s been patched and what wasn’t? It’s about visibility.

That way, when security comes to operations and asks, “There’s a zero-day flaw in Microsoft Word. How many of your endpoints have this version?” operations can answer that question. They can say, “We know about that, and we’ve already patched it.” That’s the power of visibility and cyber hygiene.
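
With a fresh software inventory per endpoint, that question becomes a one-line query. The sketch below is illustrative only: the hostnames, products, and version numbers are made up, and in practice the inventory would come from an endpoint management platform rather than a hard-coded dictionary.

```python
# Illustrative, freshly collected inventory: endpoint -> installed software versions.
inventory = {
    "laptop-0141": {"Microsoft Word": "16.0.15601", "Chrome": "118.0"},
    "laptop-0217": {"Microsoft Word": "16.0.16130", "Chrome": "118.0"},
    "vm-db-009":   {"Chrome": "117.0"},
}

def endpoints_running(product: str, vulnerable_versions: set[str]) -> list[str]:
    """Which endpoints still run a version affected by the reported flaw?"""
    return [host for host, software in inventory.items()
            if software.get(product) in vulnerable_versions]

affected = endpoints_running("Microsoft Word", {"16.0.15601"})
print(f"{len(affected)} endpoint(s) still need the patch: {affected}")
```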

Good hygiene delivers fresh data for IT analytics

Good hygiene is critical for fresh, accurate data. But in terms of executive hierarchy, where does the push for good cyber hygiene start? Outside of IT and security, most executives probably don’t think about cyber hygiene. They think about getting answers to questions that rely on good IT hygiene.

For example, if CFOs have a financial or legal issue around license compliance, they probably assume the IT ops team can quickly provide answers. Those executives aren’t thinking about hygiene. They’re thinking about getting reliable answers quickly.

What C-level executives need are executive dashboards that can tell them whether their top 10 business services are healthy. The data the dashboards display will vary depending on the executive and business the organization is in.

CIOs may want to know how many Windows 10 licenses they’re paying for. The CFO wants to know if the customer billing service is operating. The CMO needs to know if its customer website is running properly. The CISO wants to know about patch levels. This diverse group of performance issues all depends on fresh data for accuracy.

Fresh data can bring the most critical issues to the dashboard, so management doesn’t have to constantly pepper IT with questions. All this starts with good cyber hygiene.

Analytics supports alerting and baselining

When an issue arises, such as a critical machine’s CPU use going off the charts, an automated alert takes the burden off IT of continuously searching for problems. This capability is important for anyone managing an environment at scale; don’t make IT search for issues.

Baselining goes hand-in-hand with alerting because alerts must have set thresholds. Organizations often need guidance on how to set thresholds. There are several ways to do it and no right way.

One approach is automatic baselining. If an organization thinks its environment is relatively healthy, the current state is the baseline. So it sets up alerts to notify IT when something varies from that.

Analytics can play an important role here by helping organizations determine whether normal is the same as healthy. Your tools should tell you what a healthy endpoint looks like and that’s the baseline. Alerts tell you when something happens that changes that baseline state.
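
A minimal sketch of automatic baselining, with invented CPU readings and a simple deviation rule, might look like this: recent normal readings define the baseline, and a reading that deviates sharply from that baseline raises an alert instead of being folded into it.

```python
import statistics
from collections import deque

class Baseliner:
    """Automatic baselining: recent 'normal' readings define the alert threshold."""
    def __init__(self, window: int = 60, tolerance: float = 3.0):
        self.history = deque(maxlen=window)   # rolling window of healthy readings
        self.tolerance = tolerance            # how many std-devs counts as abnormal

    def observe(self, cpu_pct: float) -> bool:
        """Return True if this reading should raise an alert."""
        if len(self.history) >= 10:           # need some history before judging
            mean = statistics.mean(self.history)
            spread = statistics.pstdev(self.history) or 1.0
            if abs(cpu_pct - mean) > self.tolerance * spread:
                return True                   # abnormal: alert, don't fold into baseline
        self.history.append(cpu_pct)          # normal: becomes part of the baseline
        return False

monitor = Baseliner()
readings = [22, 25, 24, 23, 26, 25, 24, 22, 23, 25, 24, 97]   # illustrative CPU %
for value in readings:
    if monitor.observe(value):
        print(f"ALERT: CPU at {value}% deviates from the learned baseline")
```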

Analytics helps operations and security master the basics

Visibility and control are the basics of cyber hygiene. Start with those. Know what’s in your environment and what’s running on those assets—not a month ago—right now. If your tools can’t provide that information, you need tools that can. You may have great hygiene on 50 percent of the machines you know about, but that won’t get the job done. Fresh data from every endpoint in the environment: that’s what delivers visibility and control.

Need help with cyber hygiene? Here’s a complete guide to get you started.

Analytics