Over the last 12 months, a substantial shift in the enterprise storage industry has changed the dialogue about storage. In past years, the first conversations with enterprise storage buyers were about cost efficiency and performance. Today, however, the two topics that come up first are cybersecurity and delivery time. This radical change is redefining how enterprises plan for and purchase storage solutions.

A growing number of enterprise decision-makers are recognizing that storage is now part of a bigger conversation. Customers are waking up to a new reality – a new normal – in which storage needs to be a core component of an enterprise’s corporate cybersecurity strategy, and delivery lead times are longer or, at a minimum, vary widely by vendor.

One vendor may deliver products in weeks, while another may need many months to deliver the complementary products an end-to-end solution requires. Because of this, enterprise buyers and the IT solution providers who serve them need to think differently.

In the past, customers and prospective customers who were interested in buying storage solutions were quick to talk about capacity, speed, IOPS, workloads, and application profiles. Storage cybersecurity would not even be discussed until the eighth conversation or later. Yet, in 2022, the first three conversations are laser-focused on cybersecurity and how storage is a critical element of an overall corporate cybersecurity strategy.

The realization that primary and secondary storage are integral to a strong enterprise cybersecurity posture – including immutable snapshots, fast recovery, fenced-in forensic environments, and more – speaks directly to the one thing that keeps C-level executives and IT leaders up at night: cyber resilience, or rather the lack of it.

If an enterprise does not have the proper level of cyber resilience built into its storage and data infrastructure, there is a huge gap in its defenses. This is one reason why, on average, it takes an organization nearly 300 days to discover that it has been infiltrated by a cybercriminal.

In the work that Infinidat has done to help large enterprises increase their cyber resilience, we have learned what it takes to bring storage and cybersecurity together for an end-to-end approach.

Of course, consolidation and its dramatic impact on capital and operational expense structures are still part of these conversations in the storage market, too. As enterprises upgrade to improve their cybersecurity, they are also using the opportunity to consolidate from a high number of arrays to Infinidat’s petabyte-scale arrays.

Instead of running 50 arrays that have accumulated over time, they can consolidate onto a few Infinidat arrays while gaining greater capacity, better availability, unmatched real-world application performance, and stronger storage cybersecurity. Consolidation is also a major factor in advancing green IT efforts, reducing power, cooling, floor space, and resource use. A rough sketch of the math appears below.
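As a back-of-the-envelope illustration, the savings from consolidation can be estimated with a few lines of Python. Every per-array figure here is a hypothetical assumption for the sake of the example, not an Infinidat specification:

```python
# Hypothetical consolidation estimate: 50 legacy arrays -> 3 petabyte-scale arrays.
# All per-array figures below are illustrative assumptions, not vendor specs.
legacy = {"count": 50, "kw_each": 4.0, "rack_units_each": 12}
consolidated = {"count": 3, "kw_each": 8.0, "rack_units_each": 42}

def totals(cfg):
    """Return (total power in kW, total rack units) for a fleet config."""
    return cfg["count"] * cfg["kw_each"], cfg["count"] * cfg["rack_units_each"]

legacy_kw, legacy_ru = totals(legacy)
new_kw, new_ru = totals(consolidated)

print(f"Power: {legacy_kw:.0f} kW -> {new_kw:.0f} kW "
      f"({1 - new_kw / legacy_kw:.0%} reduction)")
print(f"Floor space: {legacy_ru} RU -> {new_ru} RU "
      f"({1 - new_ru / legacy_ru:.0%} reduction)")
```

Even with conservative assumptions, the power and floor-space reductions compound across cooling and operational overhead.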

Partners need to talk about storage cyber resilience and consolidation with customers hand-in-hand. But they also need to tackle the other big conversation-starter staring all of us in the face – namely, the supply chain challenge that is affecting delivery times.

Customers and partners must embrace the mindset that strategic planning needs to happen earlier and decisions need to be made more quickly. My message to customers and partners – for their own benefit – is this: talk to your suppliers earlier than you have in the past.

Infinidat customers have been benefiting from this. Infinidat has done a superb job of managing the supply chain, delivering storage solutions faster than suppliers of other types of IT products, such as servers or switches.

But since the supply chain crunch has its ups and downs for all companies – no vendor is totally immune to its vicissitudes – it is smart to talk to us and your other suppliers early, so you will not get hit head-on by a supply chain issue.

While Infinidat can deliver in a matter of weeks, a server vendor may quote nine months before new servers arrive. The storage platforms cannot be utilized until the servers are installed. This is where a partner can step up and find practical solutions – for example, sourcing servers from another supplier in a third of the time.

Customers should be working closely with their partners and suppliers to be creative about how to speed up delivery timelines. It may sound like very hard work, but it will actually help prevent bigger problems down the road. There are customers ordering products now, but those products won’t arrive until Q4. They are thinking ahead. They are accelerating decisions as they map out and fulfill their strategic plans.

The functioning of their business depends on these technical and business decisions. You don’t want to face an irate CEO who wants to know why you can’t get the IT products needed to support the next phase of the company’s digital transformation initiative, elevate DevOps, or thwart malware and ransomware threats.

You don’t want to have to explain to the Board of Directors why the data infrastructure could not scale. You don’t want to have to face fines from a government for failure to ensure cyber resilience, leading to the exposure of sensitive data.

Don’t get caught digitally flat-footed.

To learn more, visit Infinidat.


In a bid to help enterprises and institutions in the European Union navigate data privacy, residency, and other regulatory guidelines, Oracle plans to launch two sovereign cloud regions for the European Union this year.

Unlike a generic cloud region, a sovereign cloud region is designed to offer secure data access to both private and public entities while meeting the stringent regulatory guidelines of a particular region.

Oracle’s sovereign cloud, which is a subset of its Oracle Cloud Infrastructure (OCI) portfolio, will not move customer content from the regions the customers select for their workloads and will restrict operations and customer support responsibilities to EU residents, said Scott Twaddle, vice-president of OCI product at Oracle.

“These sovereign cloud regions are also designed to further enable customers to demonstrate alignment with relevant EU regulations and guidance,” Twaddle wrote in a blog post.

The sovereign cloud regions will be logically and physically separate from the existing public OCI Regions in the EU, Oracle said.

OCI currently operates six public regions in the EU: Amsterdam, Frankfurt, Paris, Marseille, Milan, and Stockholm.

The company is planning to migrate customers using Oracle Fusion Cloud applications within the existing EU Restricted Access cloud service to the new OCI sovereign cloud regions.

Oracle, which has said that it will continue investing in its cloud business, has planned its first two EU sovereign regions in Germany and Spain, with both operational by the end of this year.

The company has other sovereign regions in the UK, US, and Australia along with separate cloud regions for the UK and US defense departments. Oracle, which also runs a classified US national security cloud region, competes with the likes of AWS, Azure, IBM, and VMware in the sovereign cloud space. Last month, the company announced that it was reducing the price of its OCI dedicated region in a bid to expand its customer base.


By Aaron Ploetz, Developer Advocate

There are many statistics that link business success to application speed and responsiveness. Google tells us that a one-second delay in mobile load times can impact mobile conversions by up to 20%. And a 0.1 second improvement in load times improved retail customer engagement by 5.2%, according to a study by Deloitte.

It’s not only the whims and expectations of consumers that drive the need for real-time or near real-time responsiveness. Think of a bank’s requirement to detect and flag suspicious activity in the fleeting moments before real financial damage can happen. Or an e-tailer providing locally relevant product promotions to drive sales in a store. Real-time data is what makes all of this possible.

Let’s face it – latency is a buzz kill. The time it takes for a database to receive a request, process the transaction, and return a response to an app can be a real detriment to an application’s success. Keeping latency at acceptable levels requires an underlying data architecture that can handle the demands of globally deployed real-time applications. The open source NoSQL database Apache Cassandra® has two defining characteristics that make it perfectly suited to meet these needs: it’s geographically distributed, and it can respond to spikes in traffic without compromising its throughput and low latency.

Let’s explore what both of these mean to real-time applications and the businesses that build them.

Real-time data around the world

Even as the world has gotten smaller, exactly where your data lives still makes a difference in terms of speed and latency. When users reside in disparate geographies, supporting responsive, fast applications for all of them can be a challenge.

Say your data center is in Ireland, and you have data workloads and end users in India. Your data might pass through several routers to get to the database, and this can introduce significant latency between the moment an application or user makes a request and the moment the response comes back.

To reduce latency and deliver the best user experience, the data needs to be as close to the end user as possible. If your users are global, this means replicating data in the geographies where they reside.

Cassandra, originally developed at Facebook in 2007, is designed as a distributed system for deploying large numbers of nodes across multiple data centers. Key features of Cassandra’s architecture are specifically tailored for multi-data-center deployment. These features are robust and flexible enough that you can configure clusters (collections of Cassandra nodes, visualized as a ring) for optimal geographical distribution, for redundancy, for failover and disaster recovery, or even for a dedicated analytics center that’s replicated from your main data storage centers.
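To make this concrete, here is a minimal sketch using the DataStax Python driver (cassandra-driver) to create a keyspace replicated across two data centers. The node address, keyspace name, and data-center names (`eu_west`, `ap_south`) are assumptions for the example; in a real cluster the names must match those reported by your snitch configuration:

```python
from cassandra.cluster import Cluster

# Connect to any reachable node in the cluster (address is hypothetical).
cluster = Cluster(["10.0.1.10"])
session = cluster.connect()

# NetworkTopologyStrategy sets a replication factor per data center,
# keeping full copies of the data close to users in each geography.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS app_data
    WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'eu_west': 3,
        'ap_south': 3
    }
""")
```

With this configuration, a write accepted in one data center is replicated to the other automatically, so users in both geographies read from nearby replicas.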

But even if your data is geographically distributed, you still need a database that’s designed for speed at scale.

The power of a fast, transactional database

NoSQL databases evolved over the last decade primarily as an alternative to single-instance relational database management systems (RDBMS), which had trouble keeping up with the throughput demands and sheer volume of web-scale internet traffic.

They solve scalability problems through a process known as horizontal scaling, where multiple server instances of the database are linked to each other to form a cluster.

Some NoSQL database products were also engineered with data center awareness, meaning the database is configured to logically group together certain instances to optimize the distribution of user data and workloads. Cassandra is both horizontally scalable and data-center aware. 
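Data-center awareness extends to the client side as well. As a sketch (the contact point and data-center name are assumptions), the Python driver can pin an application’s queries to its local data center while replication still propagates data everywhere:

```python
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.policies import DCAwareRoundRobinPolicy

# Route queries to nodes in the application's local data center first;
# remote data centers still receive the data via replication.
profile = ExecutionProfile(
    load_balancing_policy=DCAwareRoundRobinPolicy(local_dc="eu_west")
)

cluster = Cluster(
    contact_points=["10.0.1.10"],  # hypothetical seed node
    execution_profiles={EXEC_PROFILE_DEFAULT: profile},
)
session = cluster.connect("app_data")
```

An instance of the same application deployed in India would simply set `local_dc="ap_south"`, so each deployment talks to its nearest replicas.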

Cassandra’s seamless and consistent ability to scale to hundreds of terabytes, along with its exceptional performance under heavy loads, has made it a key part of the data infrastructures of companies that operate real-time applications – the kind that are expected to be extremely responsive, regardless of the scale at which they’re operating. Think of the modern applications and workloads that have to be reliable, like online banking services, or those that operate at huge, distributed scale, such as airline booking systems or popular retail apps.

Logate, an enterprise software solution provider, chose Cassandra as the data store for the applications it builds for clients, including user authentication, authorization, and accounting platforms for the telecom industry.

“From a performance point of view, with Cassandra we can now achieve tens of thousands of transactions per second with a geo-redundant set-up, which was just not possible with our previous application technology stack,” said Logate CEO and CTO Predrag Biskupovic.

Or what about Netflix? When it launched its streaming service in 2007, it used an Oracle database in a single data center. As the number of users, devices, and data grew rapidly, the limitations on scalability and the potential for failures became a serious threat to Netflix’s success. Cassandra, with its distributed architecture, was a natural choice, and by 2013 most of Netflix’s data was housed there. Netflix still uses Cassandra today, and not only for its scalability and rock-solid reliability. Its performance is key for the streaming media company: Cassandra runs 30 million operations per second on its most active single cluster, and 98% of the company’s streaming data is stored on Cassandra.

Cassandra has been shown to perform exceptionally well under heavy load. It can sustain high write throughput even on basic commodity hardware, and its desirable properties are maintained as more servers are added, without sacrificing performance.
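As a rough illustration of the write path (an unscientific sketch, not a formal benchmark – the table schema and node address are assumptions), the Python driver’s asynchronous API lets a single client keep many writes in flight at once:

```python
import time
from cassandra.cluster import Cluster

cluster = Cluster(["10.0.1.10"])  # hypothetical node
session = cluster.connect("app_data")

session.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id int PRIMARY KEY,
        payload text
    )
""")

# Prepared statements are parsed once and reused, the idiomatic way
# to issue many similar writes efficiently.
insert = session.prepare("INSERT INTO events (id, payload) VALUES (?, ?)")

start = time.time()
futures = [session.execute_async(insert, (i, f"event-{i}"))
           for i in range(10_000)]
for f in futures:
    f.result()  # block until each write completes
elapsed = time.time() - start
print(f"{10_000 / elapsed:,.0f} writes/second (single client, unscientific)")
```

A production load test would bound concurrency and run many clients, but even this naive loop shows how cheap concurrent writes are from the application’s point of view.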

Business decisions that need to be made in real time require high-performing data storage, wherever the principal users may be. Cassandra enables enterprises to ingest and act on that data in real time, at scale, around the world. If acting quickly on business data is where an organization needs to be, then Cassandra can help you get there.

Learn more about DataStax here.

About Aaron Ploetz, DataStax:

Aaron has been a professional software developer since 1997 and has several years of experience working on and leading DevOps teams for startups and Fortune 50 enterprises.
