IT teams are exhausted. The tech talent shortage has led to severe understaffing even as cybercriminals ramp up their attacks. The ongoing shift toward hybrid work has only compounded the problem, with IT teams struggling to deploy patches and other fixes across an expanded attack surface that extends beyond the corporate firewall. Nearly three-quarters (74%) of CIOs say remote and hybrid work have increased the stress on their IT staff.[1]

The numbers tell the tale: 

Attackers typically begin exploiting a new vulnerability less than 15 days after discovery.[2] Organizations take 60 days, on average, to remediate critical vulnerabilities.[3] Six out of 10 breaches occur because a patch was available for a known vulnerability but not applied.[4] 

Organizations are essentially providing cybercriminals with open access to their network for two months. IT teams simply cannot afford to leave known vulnerabilities unpatched for so long, but how can they address the situation without hiring new talent? Simply put: They must make the best use of available resources by accurately identifying, assessing, and addressing their vulnerabilities.  

A well-designed vulnerability remediation platform – the equivalent of fundamental security hygiene – can significantly reduce IT stress while strengthening an organization’s security posture. First, these platforms can enable IT security and operations teams to rapidly reconcile vulnerability detection with remediation actions, so no one is ever confused about the proper course of action. Then, they can rank vulnerabilities by severity and automatically create prioritized remediation workflows.  
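As a rough illustration of the kind of severity-based ranking such a platform automates, the sketch below sorts a set of hypothetical vulnerability records by CVSS score and affected-asset count. The data model and field names are invented for the example; this is not BigFix's workflow engine or API.

```python
# A minimal sketch of severity-based remediation prioritization.
# The Vulnerability record and its fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss_score: float      # 0.0-10.0 base severity score
    asset_count: int       # number of affected endpoints
    patch_available: bool

def prioritize(vulns):
    """Rank patchable vulnerabilities by severity, then by blast radius."""
    actionable = [v for v in vulns if v.patch_available]
    return sorted(actionable,
                  key=lambda v: (v.cvss_score, v.asset_count),
                  reverse=True)

findings = [
    Vulnerability("CVE-2023-0001", 9.8, 1200, True),
    Vulnerability("CVE-2023-0002", 5.4, 4500, True),
    Vulnerability("CVE-2023-0003", 7.5, 300, False),   # no patch yet
]

for v in prioritize(findings):
    print(f"Remediate {v.cve_id}: CVSS {v.cvss_score}, {v.asset_count} assets")
```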

Advanced patch analytics can be embedded into these workflows, reducing the need for specialized expertise, easing pressure on IT teams, and minimizing errors and costs. Finally, a strong platform will offer a broad set of out-of-the-box, certified remediations across multiple operating systems.

HCL BigFix is a robust vulnerability remediation solution that enables IT teams to efficiently find and deploy the right patch for each vulnerability for maximum protection against advanced persistent threats. It closes the communications gap between security and operations while eliminating much of the manual work and spreadsheet complexity that causes so many delays in remediation.  

As a result, IT can reduce patch times from days or weeks to hours or minutes. BigFix automatically correlates discovered vulnerabilities with the right patches and configurations across a broad range of OS platforms, with certified remediations that can be applied on demand.

To learn more about how BigFix can reduce the pressure on your IT teams and substantially mitigate the risk of unpatched vulnerabilities, visit https://www.hcltechsw.com/bigfix/ 

[1] IDG Communications. 2022 State of the CIO: Rebalancing Act: CIOs Operationalize Pandemic-Era Innovation. 2022. https://f.hubspotusercontent40.net/hubfs/1624046/IDGEXSumm2022_Final.pdf. Retrieved 14 February 2023. 

[2] CISA. Remediate Vulnerabilities for Internet-Accessible Systems. January 2019. https://www.cisa.gov/sites/default/files/publications/CISAInsights-Cyber-RemediateVulnerabilitiesforInternetAccessibleSystems_S508C.pdf. Retrieved 14 February 2023. 

[3] Edgescan. Organizations Take an Average of 60 Days to Patch Critical Risk Vulnerabilities. 7 March 2022. https://www.prnewswire.com/news-releases/organizations-take-an-average-of-60-days-to-patch-critical-risk-vulnerabilities-301496256.html. Retrieved 14 February 2023. 

[4] O’Driscoll, Aimee. Cyber security vulnerability statistics and facts of 2022. Comparitech. 13 December 2022. https://www.comparitech.com/blog/information-security/cybersecurity-vulnerability-statistics/. Retrieved 14 February 2023. 

Data and Information Security

Have you ever experienced that all-encompassing, consuming passion when you discover something you’re good at? Call it your “sweet spot,” “firing on all cylinders,” or being “in the zone” – when you discover your passion, hours pass as minutes.

Victor Goossens first felt it – that passion – playing video games in high school. “I played day and night, I wanted to do nothing else, until I was one of the better players in the world.” 

Goossens soon launched his esports career and founded Team Liquid.

Professional esports players on Team Liquid rely on SAP HANA Cloud to instantly sift through enormous amounts of game-generated data and deliver personalized analytics during fast-paced tournaments.

As in all team sports, building the right roster is paramount to winning trophies and keeping fans happy. Esports players must possess the traits of elite athletes: the speed of an Olympic sprinter, the endurance of a 100-mile ultrarunner, the team precision of synchronized swimmers. Just like their more traditional counterparts, esports athletes are also coached extensively, hone their fine motor skills constantly, and even eat special diets.

Yet these days, Goossens questions whether passion alone is enough. “At what point has the magic run out? How long can top players stay at the top, both mentally and physically? If our players are performing at 97%, they will not win their match. And that is what top sports and top competition is all about. It’s about the 1% to 2%.”

In an attempt to find those 1% to 2% improvements, and possibly achieve GOAT status, Team Liquid turned to SAP technology and built a parallel team of passionate, tech-savvy experts. “Together with our coaches,” explains Goossens, “we built infrastructure that includes data analysts and managers who are using SAP software built especially for multi-player games.”

Assistant Coach and Data Analyst Mathis “Jabbz” Friesel was among those new hires. When he joined Team Liquid a couple of years ago, he first learned how esports athletes think, how they react, how they incorporate their passion into their game play. “Players usually like to play based on how they feel – what feels best.”   

Next, Friesel worked like a detective to pinpoint that extra 1% to 2% of player improvement. He identified the specific type of data gamers need and, especially, how to make it relevant in the moment. Instead of studying replays for hours, Friesel says, he can rely on analytics technology to quickly sift through enormous amounts of game-generated data. That time savings is especially valuable during fast-paced tournaments.

“Our analytics tool is based on SAP HANA Cloud,” says Friesel. “It’s very customized for our roster. With just one button, suddenly everything’s there for me. We personalize the statistics and info from the game in a way where the players can actually digest it very easily. They get a lot of information in a really short amount of time, and you’re immediately ready for the next match.”

The technology also helps Team Liquid players improve by getting inside their opponents’ heads. “Thanks to the analytics tool, I do not have to choose which games are going to be important because every single official tournament match will be put into the database. I can then just filter out whichever games I need, or whichever teams I need, or even players.” Friesel credits technology for helping him see ways to exploit opponents’ weaknesses by identifying things like enemies’ patterns and mapping during gameplay.
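As a rough sketch of that kind of filtering, the example below queries a match database through SAP’s hdbcli Python driver for HANA. The MATCH_STATS table, its columns, and the connection details are hypothetical placeholders, not Team Liquid’s actual schema.

```python
# Hypothetical match-filtering query against SAP HANA Cloud via hdbcli.
from hdbcli import dbapi

conn = dbapi.connect(
    address="hana-cloud-host",   # placeholder connection details
    port=443,
    user="ANALYST",
    password="********",
)

def matches_for(cursor, team=None, player=None):
    """Pull tournament matches filtered by team and/or player."""
    sql = ("SELECT MATCH_ID, TOURNAMENT, TEAM, PLAYER, RESULT "
           "FROM MATCH_STATS WHERE 1=1")
    params = []
    if team:
        sql += " AND TEAM = ?"
        params.append(team)
    if player:
        sql += " AND PLAYER = ?"
        params.append(player)
    cursor.execute(sql, params)
    return cursor.fetchall()

cur = conn.cursor()
for row in matches_for(cur, team="Team Liquid"):
    print(row)
cur.close()
conn.close()
```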

Analytics technology is a game changer, says Friesel, because it makes sifting through enormous amounts of gaming data simpler. “Our ambition is always going to be winning,” he says. “I can definitely say, thanks to SAP, it will be a lot easier.”

Team Liquid is a 2022 SAP Innovation Awards winner. You can read their awards pitch deck here.

Data Management

For enterprises looking to wrest the most value from their data, especially in real-time, the “data lakehouse” concept is starting to catch on.

The idea behind the data lakehouse is to merge the best of what data lakes and data warehouses have to offer, says Gartner analyst Adam Ronthal.

Data warehouses, for their part, enable companies to store large amounts of structured data with well-defined schemas. They are designed to support a large number of simultaneous queries and to deliver the results quickly to many simultaneous users.

Data lakes, on the other hand, enable companies to collect raw, unstructured data in many formats for data analysts to hunt through. These vast pools of data have grown in prominence of late thanks to the flexibility they give enterprises to store streams of data without first having to define the purpose of doing so.

The market for these two types of big data repositories is “converging in the middle, at the lakehouse concept,” Ronthal says, with established data warehouse vendors adding the ability to manage unstructured data, and data lake vendors adding structure to their offerings.

For example, on AWS, enterprises can now pair Amazon Redshift, a data warehouse, with Amazon Redshift Spectrum, which enables Redshift to reach into unstructured data stored in Amazon S3 data lakes. Meanwhile, Snowflake, traditionally a data warehouse, can now support unstructured data with external tables, Ronthal says.
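A minimal sketch of the external-table idea, using the snowflake-connector-python driver: the stage, table name, and file layout are assumed for illustration, but the point is that the warehouse queries Parquet files that stay in the lake rather than loading them first.

```python
# Hypothetical example: expose lake files to the warehouse as an external table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholder credentials
    user="ANALYST",
    password="********",
    warehouse="ANALYTICS_WH",
    database="LAKEHOUSE",
    schema="RAW",
)
cur = conn.cursor()

# Make Parquet files already sitting in an external stage (e.g., S3)
# queryable without copying them into the warehouse.
cur.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS RAW_EVENTS
    WITH LOCATION = @EVENTS_STAGE/
    FILE_FORMAT = (TYPE = PARQUET)
""")

# Each row of an external table carries its raw record in a VARIANT column
# named VALUE, which can be queried like any other column.
cur.execute("SELECT value:event_type::string, COUNT(*) FROM RAW_EVENTS GROUP BY 1")
for event_type, n in cur.fetchall():
    print(event_type, n)

cur.close()
conn.close()
```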

When companies have separate lakes and warehouses, and data needs to move from one to the other, it introduces latency and costs time and money, Ronthal adds. Combining the two in one platform reduces effort and data movement, thereby accelerating the pace of uncovering data insights.

And, depending on the platform, a data lakehouse can also offer other features, such as support for data streaming, machine learning, and collaboration, giving enterprises additional tools for making the most of their data.

Here is a look at the benefits of data lakehouses and how several leading organizations are making good on their promise as part of their analytics strategies.

Enhancing the video game experience

Sega Europe’s use of data repositories in support of its video games has evolved considerably in the past several years.

In 2016, the company began using the Amazon Redshift data warehouse to collect event data from its Football Manager video game. At first this event data consisted simply of players opening and closing games. The company had two staff members looking into this data, which streamed into Redshift at a rate of ten events per second.

“But there was so much more data we could be collecting,” says Felix Baker, the company’s head of data services. “Like what teams people were managing, or how much money they were spending.”

By 2017, Sega Europe was collecting 800 events a second, with five staff working on the platform. By 2020, the company’s system was capturing 7,000 events per second from a portfolio of 30 Sega games, with 25 staff involved.

At that point, the system was starting to hit its limits, Baker says. Because of the rigid data structures the warehouse required, data arrived in batches and took half an hour to an hour to analyze, he says.

“We wanted to analyze the data in real-time,” he adds, but this functionality wasn’t available in Redshift at the time.

After performing proofs of concept with three platforms — Redshift, Snowflake, and Databricks — Sega Europe settled on using Databricks, one of the pioneers of the data lakehouse industry.

“Databricks offered an out-of-the-box managed services solution that did what we needed without us having to develop anything,” he says. That included not just real-time streaming but machine learning and collaborative workspaces.

In addition, the data lakehouse architecture enabled Sega Europe to ingest unstructured data, such as social media feeds, as well.

“With Redshift, we had to concentrate on schema design,” Baker says. “Every table had to have a set structure before we could start ingesting data. That made it clunky in many ways. With the data lakehouse, it’s been easier.”

Sega Europe’s Databricks platform went into production in the summer of 2020. Two or three consultants from Databricks worked alongside six or seven people from Sega Europe to get the streaming solution up and running, matching what the company previously had in place with Redshift. The new lakehouse is built in three layers; the base layer is one large table into which everything gets dumped.

“If developers create new events, they don’t have to tell us to expect new fields — they can literally send us everything,” Baker says. “And we can then build jobs on top of that layer and stream out the data we acquired.”
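A minimal PySpark Structured Streaming sketch of that layered pattern follows, assuming a Kafka event source, Delta tables, and invented table names; it illustrates the approach Baker describes rather than Sega Europe’s actual pipeline.

```python
# Sketch of a "land everything, then stream out what you need" lakehouse layout.
# Assumes Delta Lake and the spark-sql-kafka connector are available (as on Databricks).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lakehouse-layers").getOrCreate()

# Base layer: land every event in one wide Delta table, keeping the raw JSON
# payload so developers can add new fields without schema changes up front.
raw = (
    spark.readStream
    .format("kafka")                                   # assumed event source
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "game-events")
    .load()
)

bronze = raw.select(
    F.col("timestamp").alias("ingested_at"),
    F.col("value").cast("string").alias("payload"),    # raw event, parsed later
)

(bronze.writeStream
    .format("delta")
    .option("checkpointLocation", "/checkpoints/bronze_events")
    .toTable("bronze_events"))

# Downstream jobs then read the base table as a stream and pull out only the
# fields a given consumer needs.
parsed = (
    spark.readStream.table("bronze_events")
    .select(
        "ingested_at",
        F.get_json_object("payload", "$.event_type").alias("event_type"),
        F.get_json_object("payload", "$.game_id").alias("game_id"),
    )
)

(parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/checkpoints/curated_events")
    .toTable("curated_game_events"))

spark.streams.awaitAnyTermination()
```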

The transition to Databricks, which is built on top of Apache Spark, was smooth for Sega Europe, thanks to prior experience with the open-source engine for large-scale data processing.

“Within our team, we had quite a bit of expertise already with Apache Spark,” Baker says. “That meant that we could set up streams very quickly based on the skills we already had.”

Today, the company processes 25,000 events per second, with more than 30 data staffers and 100 game titles in the system. Instead of taking 30 minutes to an hour to process, the data is ready within a minute.

“The volume of data collected has grown exponentially,” Baker says. In fact, after the pandemic hit, usage of some games doubled.

The new platform has also opened up new possibilities. For example, Sega Europe’s partnership with Twitch, a streaming platform where people watch other people play video games, has been enhanced to include a data stream for its Humankind game, so that viewers can get a player’s history, including the levels they completed, the battles they won, and the civilizations they conquered.

“The overlay on Twitch is updating as they play the game,” Baker says. “That is a use case that we wouldn’t have been able to achieve before Databricks.”

The company has also begun leveraging the lakehouse’s machine learning capabilities. For example, Sega Europe data scientists have designed models to figure out why players stop playing games and to make suggestions for how to increase retention.

“The speed at which these models can be built has been amazing, really,” Baker says. “They’re just cranking out these models, it seems, every couple of weeks.”
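For a sense of what a retention model of that sort might look like, here is a short Spark MLlib sketch; the feature columns, table name, and choice of classifier are assumptions for illustration, not Sega Europe’s actual models.

```python
# Hypothetical player-churn model sketch using Spark MLlib.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import GBTClassifier

spark = SparkSession.builder.appName("churn-sketch").getOrCreate()

# One row per player: aggregated play behaviour plus a churn label
# (1 = stopped playing within 30 days, 0 = still active). Table name is assumed.
players = spark.table("player_activity_features")

assembler = VectorAssembler(
    inputCols=["sessions_last_30d", "avg_session_minutes",
               "days_since_last_session", "purchases_last_30d"],
    outputCol="features",
)
pipeline = Pipeline(stages=[
    assembler,
    GBTClassifier(labelCol="churned", featuresCol="features"),
])

train, test = players.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)

# Score held-out players; the prediction column flags likely churners.
model.transform(test).select("player_id", "prediction", "probability").show(10)
```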

The business benefits of data lakehouses

The flexibility and catch-all nature of data lakehouses are fast proving attractive to organizations looking to capitalize on their data assets, especially as part of digital initiatives that hinge on quick access to a wide array of data.

“The primary value driver is the cost efficiencies enabled by providing a source for all of an organization’s structured and unstructured data,” says Steven Karan, vice president and head of insights and data at consulting company Capgemini Canada, which has helped implement data lakehouses at leading organizations in financial services, telecom, and retail.

Moreover, data lakehouses store data in such a way that it is readily available to a wide array of technologies, from traditional business intelligence and reporting systems to machine learning and artificial intelligence, Karan adds. “Other benefits include reduced data redundancy, simplified IT operations, a simpler data schema to manage, and easier-to-enable data governance.”

One particularly valuable use case for data lakehouses is helping companies get value from data previously trapped in legacy or siloed systems. For example, one Capgemini enterprise customer, which had grown through acquisitions over a decade, couldn’t access valuable data related to resellers of its products.

“By migrating the siloed data from legacy data warehouses into a centralized data lakehouse, the client was able to understand at an enterprise level which of their reseller partners were most effective, and how changes such as referral programs and structures drove revenue,” he says.

Putting data into a single data lakehouse makes it easier to manage, says Meera Viswanathan, senior product manager at Fivetran, a data pipeline company. Companies that have traditionally used both data lakes and data warehouses often have separate teams to manage them, making it confusing for the business units that needed to consume the data, she says.

In addition to Databricks, Amazon Redshift Spectrum, and Snowflake, other vendors in the data lakehouse space include Microsoft, with its lakehouse platform Azure Synapse, and Google, with its BigLake on Google Cloud Platform, as well as data lakehouse platform Starburst.

Accelerating data processing for better health outcomes

One company capitalizing on these and other benefits of data lakehouses is life sciences analytics and services company IQVIA.

Before the pandemic, pharmaceutical companies running drug trials used to send employees to hospitals and other sites to collect data about things such as adverse effects, says Wendy Morahan, senior director of clinical data analytics at IQVIA. “That is how they make sure the patient is safe.”

Once the pandemic hit and sites were locked down, however, pharmaceutical companies had to scramble to figure out how to get the data they needed — and to get it in a way that was compliant with regulations and fast enough to enable them to spot potential problems as quickly as possible.

Moreover, with the rise of wearable devices in healthcare, “you’re now collecting hundreds of thousands of data points,” Morahan adds.

IQVIA has been building technology to do just that for the past 20 years, says her colleague Suhas Joshi, also a senior director of clinical data analytics at the company. About four years ago, the company began using data lakehouses for this purpose, including Databricks and the data lakehouse functionality now available with Snowflake.

“With Snowflake and Databricks you have the ability to store the raw data, in any format,” Joshi says. “We get a lot of images and audio. We get all this data and use it for monitoring. In the past, it would have involved manual steps, going to different systems. It would have taken time and effort. Today, we’re able to do it all in one single platform.”

The data collection process is also faster, he says. In the past, the company would have to write code to acquire data. Now, the data can even be analyzed without having to be processed first to fit a database format.

Take the example of a patient in a drug trial who gets a lab result that shows she’s pregnant, but the pregnancy form wasn’t filled out properly, and the drug is harmful during pregnancy. Or a patient who has an adverse event and needs blood pressure medication, but the medication was not prescribed. Not catching these problems quickly can have drastic consequences. “You might be risking a patient’s safety,” says Joshi.
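The sketch below illustrates that kind of cross-check as a PySpark job over two hypothetical raw feeds (lab results and study forms); the paths, field names, and rule are invented for the example and do not represent IQVIA’s monitoring logic.

```python
# Hypothetical safety cross-check: positive pregnancy lab result but no
# completed pregnancy form on file.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("trial-checks").getOrCreate()

# Raw feeds landed as JSON files; schemas are assumed for illustration.
labs = spark.read.json("/raw/lab_results")       # {"patient_id": ..., "test": "hcg", "result": "positive"}
forms = spark.read.json("/raw/pregnancy_forms")  # {"patient_id": ..., "status": "completed"}

positive = labs.filter((F.col("test") == "hcg") & (F.col("result") == "positive"))
completed = forms.filter(F.col("status") == "completed").select("patient_id")

# Patients with a positive result and no completed form, flagged for a monitor.
flagged = positive.join(completed, on="patient_id", how="left_anti")
flagged.select("patient_id", "test", "result").show()
```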

Analytics, Data Architecture, Data Management