By George Trujillo, principal data strategist, DataStax; and Ara Bederjikian, president, Titanium Intelligent Solutions

Internet of Things (IoT) data can pour in from almost anywhere, be it from sensors that monitor air quality in a building, intelligent devices in a smart city, or mobile apps with an augmented reality overlay that enhances a live sporting event. IoT is embedded in our everyday lives through fitness trackers, Uber Eats, Lyft, delivery tracking, security cameras, and smart thermostats. The exponential growth of IoT gives other industries a preview of how quickly real-time data ecosystems can grow.

Gathering and extracting value in real time from the diverse data generated by a very wide range of devices and other hardware poses a unique array of challenges, in both interoperability and scalability. IoT sensors, actuators, networks, and data and analytics platforms bring together the physical and digital worlds—and that isn’t easy.

New York-based Titanium Intelligent Solutions set out to build a global SaaS IoT platform that could handle these challenges. Doing so required a foundational, modern data technology stack that was flexible enough to support a wide range of IoT use cases in a scalable way, across geographies and across clouds. Here we’ll walk through the vision that drove Titanium’s success and an example of how Titanium put it to work.

Today’s real-time data stack

The demand for analytics and AI insights from high-growth applications, IoT devices, B2B transactions, multi-access edge computing, mobile devices, smart buildings and cities, and augmented or virtual reality is accelerating changes in data and infrastructure strategies. Organizations across industries are racing to leverage new ways to monetize the information that moves through streams of real-time data produced by all these devices and use cases. 

In a recent report, analyst firm McKinsey found that by 2030, the IoT industry could enable $5.5 trillion to $12.6 trillion in value globally, including the value captured by consumers and customers of IoT products and services.

The need to handle this wide variety of fast-moving, high-volume data has made operational resiliency, rapid growth across geographic regions, and an elevated customer experience table stakes.

Industry challenges for a real-time data stack

The gap between data-driven organizations and those striving to be data-driven is widening. A key element of success for the former is tight alignment between the business and IT. But this isn’t easy, and very few organizations achieve true alignment between technology leaders and business units. The VPs of software engineering, data warehousing, data science, data engineering, and databases often have their own preferences, technical debt, and favored technologies. Add the cloud strategy to the application, data, and analytics strategies, and organizations end up with ecosystems that have grown into a wide variety of siloed technologies that all speak different languages. The complexity of these disparate ecosystems negatively impacts security, governance, analytics, and the value of data. This lack of alignment on a vision and an enterprise-wide execution strategy is why many organizations struggle to become data-driven in ways that increase both business growth and revenue.

A vision for a real-time data ecosystem

Titanium created a vision for a real-time data ecosystem that incorporated leading-edge principles of a data stack for solving IoT data challenges. Titanium’s SaaS IoT platform delivers low latency, bi-directional communication, security at every link in the data chain, real-time data, historical data, and scalability. With a real-time data ecosystem, Titanium provides data necessary for ESG (environmental, social, and governance) reporting, analytics, operational management, artificial intelligence, and automation.

The company turned to the open-source NoSQL database Apache Cassandra®, which is known for its rock-solid performance, scalability, and reliability. For enhanced security and scalability, Titanium worked with DataStax to use its managed database service, Astra DB, built on Cassandra. Astra DB’s multi-model, multi-cloud, and multi-use-case capabilities let Titanium focus on delivering customer value rather than supporting a complex data ecosystem.
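To make that concrete, here is a minimal sketch of what writing IoT telemetry to a Cassandra table on Astra DB can look like with the DataStax Python driver. The keyspace, table, and credentials are hypothetical, not Titanium’s actual schema.

```python
# Hypothetical sketch of writing IoT telemetry to Astra DB with the
# DataStax Python driver (pip install cassandra-driver). The keyspace,
# table, and credentials are illustrative, not Titanium's schema.
from datetime import datetime, timezone

from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

cluster = Cluster(
    cloud={"secure_connect_bundle": "/path/to/secure-connect-db.zip"},
    auth_provider=PlainTextAuthProvider("client_id", "client_secret"),
)
session = cluster.connect("iot")  # hypothetical keyspace

# Partition by device and day so partitions stay bounded; cluster by
# timestamp descending so the newest readings are the cheapest to read.
session.execute("""
    CREATE TABLE IF NOT EXISTS sensor_readings (
        device_id text,
        day       date,
        ts        timestamp,
        metric    text,
        value     double,
        PRIMARY KEY ((device_id, day), ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)
""")

now = datetime.now(timezone.utc)
session.execute(
    "INSERT INTO sensor_readings (device_id, day, ts, metric, value) "
    "VALUES (%s, %s, %s, %s, %s)",
    ("hvac-042", now.date(), now, "co2_ppm", 845.0),
)
```

The partition-by-device-and-day pattern is a common Cassandra idiom for time-series data; it keeps writes spread across the cluster while making "latest readings for this device" a single-partition query.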

Data interoperability requires seamless collaboration for data integration and correlation across business units, so Titanium built a unified IoT and IT network offering that increased efficiency and control—for itself and for its customers. Because the data is in the cloud, it’s accessible for a variety of uses while meeting security and privacy requirements.

Titanium also provides information that isn’t typically found in building automation systems, including heating degree days, cooling degree days, climate zones, and more. These metrics can be relevant to ESG reporting, and they further enhance cross-departmental use. The ESG real-time dashboard is used by building managers to monitor and analyze building performance, while corporate ESG teams can use the data to meet sustainability goals. This increases the value of data across customer business units and regions.
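Degree days have a standard definition, so a short worked example helps. The sketch below computes heating and cooling degree days from daily mean temperatures, assuming the common 18°C (65°F) base; the base temperature varies by convention and climate zone.

```python
# Heating/cooling degree days from daily mean temperatures (°C),
# assuming the common 18 °C base; the base varies by convention.
BASE_C = 18.0

def degree_days(daily_means_c):
    """Return (heating, cooling) degree days for a run of daily means."""
    hdd = sum(max(0.0, BASE_C - t) for t in daily_means_c)
    cdd = sum(max(0.0, t - BASE_C) for t in daily_means_c)
    return hdd, cdd

# A mixed week: cold days accumulate heating demand, hot days cooling.
print(degree_days([-5.0, -2.0, 0.0, 3.0, 10.0, 20.0, 25.0]))  # (84.0, 9.0)
```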

Case study: Scaling a climate control system nationwide

In general, IoT companies focus on delivering functionality for building services with hardware solutions. Hardware solutions are often closed-loop systems that require the end user to use proprietary hardware, locking the customer in for the life of the product; this can significantly slow integration with other devices and reduce its business benefits. As a result, industry IoT platforms often have fixed, limited functionality, and they often lack the interoperability and scalability needed to expand seamlessly across many locations and regions.

Titanium sought to build a scalable, interoperable cloud-based data stack through a partnership with DataStax. The company’s SaaS platform required the flexibility to support customer operating models across different geographical regions. Standardizing on a streamlined, multi-model, multi-purpose data ecosystem was important to reduce data integration complexity and change management time to deliver faster business value from real-time data. The ability to supply customers with analytics and AI capabilities was a critical part of the data ecosystem design.

Titanium’s global cloud IoT platform required a high-speed database to support future growth in data volume and velocity across geographical regions. Low latency for real-time data was also essential for automation; time delays could result in automated decisions based on outdated information. Latency makes automation very challenging, if not impossible. People are used to manually flipping a switch to turn lights off, with no delay; a delayed response would be a roadblock to adopting a cloud-based platform.

Low latency is also essential when operating across multiple locations. If changes made in various locations by different people, or simultaneous automated actions and communications, are delayed, the result can be frustrating and can even lead to incorrect actions. A global real-time data platform requires multiple locations to work as one seamless data ecosystem, with low latency as a priority.

Using AI, Titanium can route data based on the strongest signal strength to ensure uninterrupted communication. IoT data feeds AI models, and AI can identify hardware devices and commission them in the platform. This can be done remotely, eliminating the need for a person to be physically on site to commission devices. Predictive maintenance is also used to identify devices that are perpetually running or have not been active, either of which can indicate a performance issue.
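As a rough sketch of the rule just described, the snippet below flags devices that appear to run continuously or have gone silent. The thresholds and device names are illustrative assumptions, not Titanium’s actual values.

```python
# Sketch of the maintenance rule described above: flag devices that
# appear to run continuously or have gone silent. Thresholds are
# illustrative assumptions, not Titanium's actual values.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

MAX_DUTY_CYCLE = 0.95              # assumed "perpetually running" cutoff
MAX_SILENCE = timedelta(hours=24)  # assumed "inactive" cutoff

@dataclass
class DeviceStats:
    device_id: str
    duty_cycle: float   # fraction of the window the device was running
    last_seen: datetime

def flag_for_maintenance(fleet, now):
    flags = []
    for d in fleet:
        if d.duty_cycle >= MAX_DUTY_CYCLE:
            flags.append((d.device_id, "perpetually running"))
        if now - d.last_seen > MAX_SILENCE:
            flags.append((d.device_id, "inactive"))
    return flags

now = datetime.now(timezone.utc)
fleet = [
    DeviceStats("ahu-7", 0.98, now - timedelta(minutes=5)),
    DeviceStats("vav-12", 0.40, now - timedelta(days=3)),
]
print(flag_for_maintenance(fleet, now))
# [('ahu-7', 'perpetually running'), ('vav-12', 'inactive')]
```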

Titanium also offers a sophisticated ESG dashboard that provides user-friendly advanced analytics. For example, it enables the comparison of multiple metrics to identify drivers such as CO2 levels, which can indicate insufficient ventilation.

A nationwide distribution company approached Titanium with the need to design a scalable climate control platform with a wide range of operating capabilities. Its distribution centers range from 500,000 to a million square feet and are located in more than 40 U.S. states. The company had no centralized, remote way to access building functions across those distribution centers.

The customer was looking for a centralized climate control system that could be measured, controlled, and monitored with real-time data and analytics, plus ESG reporting and predictive, AI-based maintenance. In addition, the lack of visibility into its assets was preventing the company from applying corporate governance to save energy and reduce its carbon footprint by measuring real-time energy consumption and reporting the data in one dashboard.

Titanium offered the company an interoperable platform with remote, single-user access to all of its climate control operations. A design focus on data integration made it simple to manage all climate control systems in 50 distribution centers from one real-time dashboard. Titanium’s solution scaled easily across the customer’s locations, saving the customer at least 15% in energy costs. The cloud-based platform also helped eliminate siloed data, enabling greater cross-departmental use and strengthening corporate governance.

An aligned vision

The IoT industry is continually evolving to support a wide range of use cases and operating models. Having a vision that aligns business and IT leaders on an execution strategy is key to building a data operating model that drives business revenue and growth. Building a streamlined, trusted, and reliable data ecosystem is the foundation for delivering analytics and AI results at the speed Titanium customers need for increasing business growth and revenue.


On July 8, 2022, a botched maintenance update on the Rogers ISP network in Canada crashed internet access across the country for at least 12 hours, with some customers experiencing problems for days afterward.

The impact was profound. The nationwide outage affected phone and internet service for about 12.2 million customers – roughly 25% of Canada’s internet capacity – halting point-of-sale debit payments on the Interac network, preventing Rogers mobile phone users from reaching 9-1-1 services, disrupting transit services dependent on online payment, and even wreaking havoc on traffic signals in Toronto that depend on cellular GSM for timing changes.

Adding insult to injury, the outage even forced Canadian musician The Weeknd to postpone the first stop on his world tour at Toronto’s Rogers Centre.                              


The cause? As was subsequently revealed in Rogers’ submission to regulator Canadian Radio-television and Telecommunications Commission, the update “deleted a routing filter and allowed for all possible routes to the Internet to pass through the routers. … Certain network routing equipment became flooded, exceeded their capacity levels, and was then unable to route traffic, causing the common core network to stop processing traffic.”

Although Rogers – one of Canada’s major internet, broadcasting, and mobile wireless companies – restored service to most customers within a day, the catastrophic loss of service startled Canadian businesses. Some, like the approximately 100 outlets operated by farm and agriculture supply retailer Peavey Mart, had redundant access to other internet providers already in place.

As a result, “only two stores were directly impacted where they had no internet connectivity,” says Shaun Guthrie, the company’s Senior VP of Information Technology and VP of the CIO Association of Canada.

“However, we rely on Interac services for our customers to transact, which relies solely on Rogers, so we lost the ability to do debit card payments.”

Not just a domestic issue

“Some of the non-profits that I serve lost the ability to record meeting the needs of vulnerable people for a day or two,” says Helen Knight, Virtual CIO and Strategic Technology Consultant for Canadian non-profits. “Personally, my children and I had no way to communicate. My 13-year-old daughter was out until 10 p.m. and I was worried she had no way to get home.”  

Others were not so fortunate. “As a global company producing waterslides and water park attractions, the Rogers network outage did affect us more than we originally thought,” says Chris Palsenbarg, Manager of IT Operations and Help Desk Support with WhiteWater West Industries. “Staff travelling overseas couldn’t even use their phones.”

Sapper Labs Group is a Canadian cybersecurity/cyberintelligence firm. “Although our company was not affected by the Rogers outage, many of our partners, clients, and competitors were,” says Dave McMahon, Sapper Labs’ Chief Intelligence Officer. “Some organizations have yet to fully recover. This has had a ripple effect through the market.”

In the wake of the Rogers outage, Canadian CIOs and IT executives and experts are reviewing their readiness to cope with such failures in the future. Their conclusions are worth noting by CIOs everywhere, all of whom are at risk of encountering similar service outages in their own countries, whether from system issues, intrusions, or power failure due to environmental or other causes.

Build redundancy


The Rogers outage underlined the value of having redundant ISP access, even though doing so costs more than relying on a single provider. Although some corporations balk at the extra expense, Peavey Mart accepts the value of paying for redundant internet access wherever possible. The company was rewarded for its foresight on July 8, 2022.

The failure of the Rogers ISP network didn’t blindside the company either, because “we proactively monitor the state of our data communications,” Guthrie says. “As a result, once the stores were impacted by the outage, they automatically failed over to their secondary ISPs through our SD-WAN enabled infrastructure.”
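The monitor-and-fail-over pattern Guthrie describes happens at the network layer inside an SD-WAN appliance, but the logic can be sketched in a few lines. The probe hostnames below are hypothetical, not Peavey Mart’s.

```python
# Conceptual sketch of monitor-and-fail-over: probe each link's health
# endpoint and use the first that answers. Real SD-WAN gear does this
# at the network layer; these hostnames are hypothetical.
import socket

LINKS = [
    ("primary-isp", "probe.primary.example.com"),
    ("secondary-isp", "probe.secondary.example.com"),
]

def link_is_up(host, port=443, timeout=2.0):
    """Treat a successful TCP handshake to the probe endpoint as 'up'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def select_active_link():
    # Links are listed in priority order; fall through to the backup.
    for name, probe_host in LINKS:
        if link_is_up(probe_host):
            return name
    raise RuntimeError("all links down")

print(select_active_link())
```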

Non-profit organizations such as Canada’s Salvation Army can’t afford the kind of infrastructure used by Peavey Mart. But their CIOs are determined experts accustomed to “accomplishing amazing feats using free software and donated hardware,” says Knight. “They are accustomed to their aged IT infrastructure failing, so they usually have a manual process to fall back on,” she says.

As a result, Canadian non-profit CIOs can cope with ISP failures, at least while they are occurring. “The lost data from the outage will impact them later, when they don’t have accurate records of how many people they served to show their donors, potentially impacting future grants,” Knight says.

This being the case, Knight believes the Rogers outage could change non-profit attitudes to redundant ISP access for the better. “After all, it has been common practice for years to have a redundant connection for all critical business components, so the silver lining is that now non-profits understand a new risk area they may not have considered,” she says.

“So if this is the incident that allows non-profits to recognize the need to have a senior technology leader at the decision-making table, aligning their strategic plans to their technical roadmap, then this might well be the cheapest and easiest way to learn that lesson. It is much better than facing a cyber breach!”

Check your suppliers’ backup plans

For Sapper Labs, “the Rogers outage reinforced our confidence in our own architecture and mode of operation,” McMahon says. But that confidence came with a reminder that a company’s IT infrastructure doesn’t exist in isolation. Instead, it is one link in a chain of ISPs, cloud platforms, and other parties that connect to the enterprise via the internet.

Thus, “the takeaway from the Rogers outage is to ensure that one’s supply chain, partners and clients are equally prepared and that there are contingencies in place to assist them in maintaining business operations,” he says. “What was enlightening was that the outage immediately revealed who was a Rogers customer, whether they have alternate means of communications, their level of cybersecurity maturity, and critical interdependencies across the ecosystem.”


Peavey Mart is equally diligent about checking for vulnerabilities in its data supply chain. “We ask all our cloud providers: do they have redundancy?” says Guthrie. “Do their systems have failovers to backup systems built in, and do they have things like business continuity plans in place so that when a failure occurs, their people know what to do? And we ask those questions up front.”

Unfortunately, retailers like Peavey Mart don’t have the clout to demand such answers from Canadian interbank megacorps like Interac. “As a result, we have no choice but to assume that Interac has such backup measures in place, which they clearly did not,” he says.

Expect more ISP failures

The resolution of the Rogers outage in Canada was followed by government investigations, negative media reports, and lots of predictable public outrage. But none of these reactions will be able to change a very simple fact: ISP networks are complex and vast systems made of many parts whose response to maintenance upgrades cannot be completely modeled in simulations.

As a result, even after all the improvements Rogers has promised to make and that other Canadian ISPs might copy out of a sense of prudence, “I have no doubt that we’ll probably see additional failures,” says Guthrie. “I don’t know who it will be, but I think we will likely see an additional failure within a year.”

This being the case, CIOs whose companies rely on ISP access need to take steps now to protect their enterprises against such outages. According to Dave McMahon, the path forward is clear: “Dual providers and redundant independent systems are best practices in industry,” he says.

“It is the very definition of a high-availability system. This is why all Sapper Labs employees already have multiple means of secure communications and abilities to collaborate online. We are currently assessing how best we can extend similar secure high-assurance solutions to our clients and partners.”

At the same time, CIOs need to remain humble and not overestimate their ability to plan for such events beforehand.

“Technology is so ubiquitous and so complex, with every person and every organization experiencing new and complex technical challenges over the last couple years, that although it is possible to protect companies against Rogers-style outages it isn’t possible or cost-effective to protect against all risk,” says Knight. “Instead, it is a matter of quantifying the impact and urgency of each risk and prioritizing organizational continuity plans for the most critical operational areas.”

The bottom line: A Rogers-style ISP outage is a crisis that can and likely will confront CIOs in companies around the world in the years to come. This is why boosting redundant systems and preparing contingency plans now is a must, to minimize and mitigate the inevitable impact of these communication failures on the enterprise.


Following increasing numbers of cyberattacks on Western energy companies, Hydro-Québec decided to conduct a world-first “electrical containment” exercise this summer.

For four hours, the state-owned company managed to completely isolate itself from the internet without sustaining any service failure. The exercise was reassuring for its customers in Québec, and also for those in New Brunswick, Ontario, New England, and New York State – where it supplies up to 15% of power needs.

Where and when did this experiment take place?

“We have to keep that a secret, but I can tell you it made me very nervous,” says Jean-François Morin, VP – Information and Communications Technologies, who oversaw the exercise from start to finish. “What kept me awake was forgetting a machine somewhere or cutting off some customers by mistake.”

This secret exercise is more than just a cybersecurity milestone: it’s part of Hydro-Québec’s plan to push its digital shift – a key component of its 2022-2026 strategic plan.

“Our meters generate one billion bits of data per day, and this number will go up very soon. We’re developing huge models to manage all this information and make it talk. But it won’t work unless our cybersecurity is foolproof.”

Ethan Cohen, Gartner VP and analyst for Utility Transformation and Innovation, doesn’t conceal his admiration for the government-owned corporation.

“There is an element of show in such an exercise, but it demonstrates a level of competence that most utilities would like to have. It shows that Hydro-Québec is not merely sustainable, but resilient,” Cohen says.

“A lot of utilities formulate grand strategies, but the issue is executing them. What matters is that the CIO actually does it.”

Transformation, phase two

For Hydro-Québec, its current digital transformation is the second phase of an energy transition plan that began in the 1970s, when the utility was able to use its large production of green hydropower to satisfy almost all residential demand, including heating. But today, the company needs to work on the government’s new requirement – electrifying all transportation by 2040.


“To reach this goal, we’ll have to build an IT ecosystem that allows us to better predict and control power demand, but also to use the production capacities of residential, commercial and industrial customers,” Morin says. “There’s going to be AI everywhere – and our operators are going to become computer specialists.”

Hydro-Québec produces, moves, and distributes about 180 terawatt-hours per year – more than almost any other utility in North America. The digital transition will be on a similar scale.

“Of all viable solutions,” Morin says, “the most profitable will be the use of data. Analytics will help optimize maintenance and consumption but also automate production and decision-making, including the analysis of new infrastructure projects.”

Morin, a computer science and finance graduate of the Université du Québec and the Université de Sherbrooke, gives the example of maintenance, which costs billions every year – and currently follows “blind” procedures.

“We’re now changing parts that we don’t really have to change. By putting sensors everywhere, we’ll be able to control what’s going on, to collect histories, and intervene where it’s really needed.”

Most of the utility’s 20,000 employees will see their jobs affected in some way by the digital shift. With the installation of smart meters, the job of meter reader has already disappeared, although Hydro-Québec hired more computer technicians. Among other things, this new technology made it possible to better detect energy theft by analyzing clients’ consumption in real time and comparing it to that of their neighbours.
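One simple form of that neighbour comparison can be sketched in a few lines: flag meters reporting far below similar homes nearby, one possible sign of a tampered meter. The threshold and readings below are illustrative, not Hydro-Québec’s actual method.

```python
# Illustrative sketch of the neighbour comparison: flag meters that
# report far below the neighbourhood median, one possible sign of a
# tampered meter. Threshold and readings are made up for the example.
from statistics import median

def flag_suspect_meters(readings_kwh, ratio=0.4):
    """Flag meters reporting under `ratio` of the neighbourhood median."""
    typical = median(readings_kwh.values())
    return [m for m, kwh in readings_kwh.items() if kwh < ratio * typical]

neighbourhood = {"meter-1": 38.0, "meter-2": 41.5, "meter-3": 6.2, "meter-4": 36.9}
print(flag_suspect_meters(neighbourhood))  # ['meter-3']
```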

“That was just the beginning,” Morin says. “One of my roles is to identify the jobs of the future, what we’ll need in terms of AI and IT specialists.”

“Hydro-Québec has a tradition of innovation and R&D,” Cohen says. “It’s a very entrepreneurial organization at a level you don’t see elsewhere. And they’re willing to shake up the way they do things to achieve real breakthroughs.”

Eliminating waste

The digital shift will be key for solving Hydro-Québec’s biggest problem: waste. In a way, the company is the victim of its own success: because it offers North America’s cheapest and greenest energy in massive quantities, it has created a class of ultra-dependent, hyper-hungry consumers who are devouring energy that could be put to better use – to electrify transportation and industrial processes, for example.

“There are people warming their driveway to melt the snow and heating their outdoor Jacuzzis all week long in the winter,” says Morin, who will play a key role in planning how to get customers to use power more efficiently, especially during peak hours – which are very costly.

Hydro’s residential customers currently receive no warning about the actual cost of their unbridled consumption. “We need to develop ways to better inform them of their use. Somewhat like Tesla, which is very good at telling its clients how much they saved in their journey. My dream would be for all customers to get notifications at 4 p.m. about what they’re paying for and how much they’d save by turning off their pool or water heater for a few hours.”

Such goals are not limited to data management; infrared drones, for example, could produce a stunning view of the consumption profile of the most energy-hungry customers. Pricing would also be a powerful awareness-raising tool if changes in habits are matched by real savings, says the company’s VP, who has worked his way up through the ranks since starting as an IT project manager in 1999.

The transition won’t apply only to IT processes: Morin’s office is now involved in all the company’s fundamental policy decisions. Hydro-Québec will have to meet major power needs in the next 50 years, and its management has promised to consider all possible avenues to delay the building of new hydroelectric megaprojects.

“We’re juggling a lot of new ideas, like scaling existing dams, putting in more efficient turbines, but also solar roofs, which look good, or small residential windmills that would allow customers to produce their own electricity and even power the grid at certain times.”

“Hydro-Québec may be further along in the energy transition than other utilities, but they still have to address the current need for self-production,” Cohen says. “There are many new opportunities that have to be analyzed.”

A radical, careful transition

Morin believes Hydro-Québec could move much faster in its digital shift, but he’s holding back on purpose because of cybersecurity and privacy issues. This cautious approach, according to Cohen, is not a bad thing: “There would be big advantages to moving fast but for utilities, the regulatory environment is an inescapable reality.”

A company that sells electrons is particularly susceptible to “malicious” electrons. The more sensors or home automation services it provides, the more exposed it becomes to hackers.

“We need to think ahead,” Morin says. “We could monitor consumption in every home. We could hire bigwigs in Paris and New York to work remotely. But our participation in the North American energy market requires us to comply with strict reliability rules.”

Now that he has successfully achieved this ultimate electrical containment, Morin believes cybersecurity will be one of the first AI applications in grid management and control.

He also must deal with Québec’s new data privacy regulations, the most advanced in North America. Like any company that manages information about Québec’s residents, Hydro has to guarantee that these data are protected. Neglecting to do so can be very costly – up to 4% of a company’s worldwide revenue.

Legal and Regulatory Affairs must therefore validate Morin’s decisions. “Do I have the right to use such and such data for such and such application?” he asks – explaining that he had to deliberately slow down the expansion of the Hilo smart-home subsidiary because of these issues.

“We need very robust data governance to make sure we comply with the law but also to determine what data is actually useful. Even with AI, the old law of computing applies: ‘garbage in, garbage out’.”

Translation by Daniel Pérusse
