Analytics have evolved dramatically over the past several years as organizations strive to unleash the power of data to benefit the business. While many organizations still struggle to get started, the most innovative organizations are using modern analytics to improve business outcomes, deliver personalized experiences, monetize data as an asset, and prepare for the unexpected.

Modern analytics is about scaling analytics capabilities with the aid of machine learning to take advantage of the mountains of data fueling today’s businesses, and delivering real-time information and insights to the people across the organization who need them. To meet the challenges and opportunities of the changing analytics landscape, technology leaders need a data strategy that addresses four critical needs:

- Deliver advanced analytics and machine learning that can scale and adapt to whatever evolutions in applications and data science the future may hold.
- Break down internal data silos to create boundaryless innovation while enabling greater collaboration with partners outside of their own organization.
- Embrace the democratization of data with low-code/no-code technologies that offer the insight and power of analytics to anyone in the organization.
- Embed analytics into business processes to create more compelling, relevant customer experiences and insights in real time.

Building a foundation for flexible and scalable analytics

Migrating analytics from on-premises systems to the cloud opens up a new realm of applications and capabilities, and has allowed organizations, with the proper controls in place, to gradually shed the restraints of legacy architecture.

“The migration of advanced analytics to the cloud has been an iterative, evolving process,” said Deirdre Toner, Go-To-Market leader for AWS’s analytics portfolio of services. AWS doesn’t recommend that organizations try to completely re-create their on-premises environments in the cloud. “Migration works best by considering the guardrails and processes needed to collect data, store it with the appropriate security and governance models, and then accelerate innovation,” Toner said. “Don’t just lift and shift with the old design principles that caused today’s bottlenecks. This is an opportunity to modernize and break down old architectural patterns that no longer serve the business.”

The goal is a data platform that can evolve and can scale almost infinitely, using an iterative approach to maintain flexibility, with guardrails in place. “IT leaders want to avoid having to re-do the architecture every couple of years to keep pace with changing market requirements,” said Toner. “As use cases change, or if unforeseen changes in market conditions suddenly emerge – and they surely did during the pandemic – organizations need to be able to respond quickly. Being locked into a data architecture that can’t evolve isn’t acceptable.”

Aurora – a company transforming the future of transportation by building self-driving technology for trucks and other vehicles – took advantage of the scalability of cloud-based analytics in the development of its autonomous driver technology. Aurora built a cloud testing environment on AWS to better understand the safety of its technology by seeing how it would react to scenarios too dangerous or rare to test in the real world. With AWS, Aurora can run 5 million simulations per day, the virtual equivalent of 6 billion miles of road testing. Aurora combined its proprietary technology with many AWS database, analytics, and machine learning solutions, including Amazon EMR, Amazon DynamoDB, AWS Glue, and Amazon SageMaker. These solutions helped Aurora reach levels of scale not possible in a real-world testing environment, accelerating its pace of innovation.

Moving beyond silos to “borderless” data

Integrating internal and external data and achieving a “borderless” state for sharing information is a persistent problem for many companies that want to make better use of all the data they’re collecting or could access in shared environments. Toner emphasized the importance of breaking down data silos to become truly data driven.

Organizations also need to explore new ways to harness third-party data from partners or customers, which increases the need for comprehensive governance policies to protect that data. Solutions such as data clean rooms are becoming more popular as a way to leverage data from outside providers, or monetize proprietary data sets, in a compliant and secure way.

AWS Data Exchange makes it easy for customers to find, subscribe to, and use third-party data from a wide range of sources, Toner said. For example, one financial services customer needed a better way to quickly find, procure, ingest, and process data provided by hundreds of vendors. But its existing data integration and analysis process took too long and used too many resources, putting at risk the bank’s reputation for providing expert insights to investors in fast-changing markets.

The company used AWS Data Exchange to streamline its consumption of third-party data, enabling teams across the company to build applications and analyze data more efficiently. AWS Data Exchange helped the firm eliminate the undifferentiated heavy lifting of ingesting and getting third-party data ready, freeing developers to dedicate more time toward generating insights for their clients.
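As a rough sketch of what this streamlined consumption can look like in practice, the Python snippet below uses the AWS SDK (boto3) to list the data sets an account is entitled to through AWS Data Exchange and export one revision’s assets to Amazon S3. The data set ID, revision ID, and bucket name are hypothetical placeholders, not details from the customer story above.

```python
import boto3

# Minimal sketch: discover entitled AWS Data Exchange data sets and
# export one revision's assets to S3. All IDs below are placeholders.
dx = boto3.client("dataexchange", region_name="us-east-1")

# List the data sets this account is entitled to through its subscriptions.
for data_set in dx.list_data_sets(Origin="ENTITLED")["DataSets"]:
    print(data_set["Id"], data_set["Name"])

# Kick off an asynchronous job that copies a revision's assets into S3,
# where downstream analytics services can pick them up.
job = dx.create_job(
    Type="EXPORT_REVISIONS_TO_S3",
    Details={
        "ExportRevisionsToS3": {
            "DataSetId": "example-data-set-id",  # placeholder
            "RevisionDestinations": [
                {
                    "RevisionId": "example-revision-id",  # placeholder
                    "Bucket": "example-ingest-bucket",    # placeholder
                }
            ],
        }
    },
)
dx.start_job(JobId=job["Id"])
```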

Making analytics accessible to the masses

The consumerization of data and the broad applicability of machine learning have led to the emergence of low-code/no-code tools that make advanced analytics accessible to non-technical users.

“The simplification of tools is a crucial aspect of changing how a user prepares their data, picks the best model, and performs predictions without writing a single line of code,” said Toner. Amazon SageMaker Canvas and Amazon QuickSight are two examples of the low-code/no-code movement in machine learning and analytics, respectively.

SageMaker Canvas has a simple point-and-click user interface that allows a non-technical person to create an entire machine learning workflow without writing a single line of code. QuickSight Q, powered by machine learning, makes it easy for any user to ask natural language questions and get answers in real time.

Embedding insights and experiences

Toner emphasized the importance of understanding that the types of people who need access to data across the business are expanding. “You can’t just build an analytics environment that serves a handful of developers and data scientists,” she said. “You need to make sure that the people who need data for decision making can find it, access it, and interpret that data in the moment it is important to them and the business.”

A cloud-based data strategy makes it possible to embed the power of data directly into customer experiences and workflows by making relevant data available as it’s needed. Toner used the example of Best Western, the hotel and hospitality brand that uses real-time analytics to give its revenue management team the capability to set room rates at any given moment. The result: improved revenue and the ability to be more responsive to customers.

“Best Western used to rely on static reports and limited data sets to set room rates,” Toner said. “Now, with QuickSight, they can access a much broader set of data in real time to get the insights they need to make better decisions and improve the efficiency of every team member.”
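To make the idea of embedding concrete, here is a minimal sketch, assuming a registered QuickSight user and an existing dashboard, of how an application could request an embeddable dashboard URL with boto3. The account ID, user ARN, and dashboard ID are placeholders; this illustrates the general technique, not Best Western’s actual setup.

```python
import boto3

# Minimal sketch: generate a short-lived QuickSight dashboard embed URL
# that an application can render in an iframe. Identifiers are placeholders.
qs = boto3.client("quicksight", region_name="us-east-1")

response = qs.generate_embed_url_for_registered_user(
    AwsAccountId="111122223333",  # placeholder account ID
    SessionLifetimeInMinutes=60,
    UserArn="arn:aws:quicksight:us-east-1:111122223333:user/default/analyst",
    ExperienceConfiguration={
        "Dashboard": {"InitialDashboardId": "example-dashboard-id"}
    },
)

# The URL is scoped to this user's permissions and expires quickly,
# so it should be generated on demand by the embedding application.
print(response["EmbedUrl"])
```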

Addressing these four core components of modern analytics will help CIOs, CDOs, and their teams develop and deploy a data strategy that delivers value across the business today, while being flexible enough to adapt to whatever may happen tomorrow. 

Learn more about ways to put your data to work on the most scalable, trusted, and secure cloud.


Companies typically face three big problems in managing their skills base:

- Normal learning approaches require too much time to scale up relevant knowledge.
- Hiring for new skills is expensive and too slow.
- Skills from new hires are rarely properly shared.

Businesses of all types have fought to solve these problems. Some conduct ever more advanced offsite or onsite seminars and training – but these are costly, take time, and don’t adapt fast enough to the incoming needs of the business and its teams. Online training is often perceived as a hassle, and participants can become disengaged. Other companies try to jump-start knowledge by bringing in consultants, but this risks only temporarily plugging the gaps.

The reality is that most of these efforts involve throwing money at only the immediate problem. Few budgets can meet the continuous need for up-to-the-minute learning and training, particularly in fast-evolving tech areas such as programming languages, software development, containerization, and cloud computing.

A fresh approach is needed

A handful of companies have found a solution. They’re adding community-driven learning to their existing training approaches. They recognize the wealth of knowledge held by individuals in their teams, and create an agile, natural process to share this knowledge via hands-on workshops. This is a logical progression from existing efforts to connect staff for social bonding and business collaboration.

In practice, what these companies do is create an open, well-managed community of trainers and trainees from within their staff base. Trainees (any employee) feed into a wish list of the specific skills and areas they want to learn. Trainers (staff members with regular, non-training roles) offer lessons on skills or knowledge they excel in. The system is open to everyone, with managers, who understand the incoming strategic requirements of the business, helping to prioritize topics and identify potential trainers.

To succeed in this approach, businesses need good leadership and appropriate time allocation. It starts with Chief Technology or Chief Information Officers, who must signal the importance the company places on tech innovation by actively enabling employees to spend 10% to 20% of their time learning or training others. Once a learning initiative has begun and is nurtured and adapted, it often grows quickly as staff see others taking part.

The results we’re seeing from community learning at GfK

There have been some impressive results for companies running community-driven learning. At GfK, we provide consumer, market, and brand intelligence, with powerful predictive analytics. Since we began our own community-driven learning initiatives three years ago, we’ve witnessed compelling improvements. Our teams can initiate targeted, in-house training whenever necessary, with zero red tape. This has delivered significant growth in innovation. We’re attracting and retaining top talent, and we’ve seen marked improvements in how quickly we adapt.

For example: We swapped initial hackathons for two-day learning events, run five times a year, called “we.innovate”. Our tech teams have full access to these staff-delivered interactive lessons and workshops. The skills covered are shaped by a combination of staff requests and the specific strategic needs of the business. Among the 40 or so topics on the list, we’ve already covered Kubernetes, basic and advanced usage of Git software to track code changes, domain-driven design approaches to software development, cloud computing, cyber security, test-driven development, and much else besides.

Hundreds of our staff have participated in our community learning, and we constantly encourage people to step up as trainers to keep things fresh and relevant. We measure progress by monitoring engagement levels and the average level of expertise per individual.

As we have experienced, this is a self-accelerating process. The scale of participation grows fast, meaning the results quickly become transformative at company level. Innovation is the currency of the future, and we are growing ours by drawing out our employees’ substantial individual expertise and distributing it as widely as possible.

To find out more about our innovation, visit gfk.com/careers
