To meet the rapidly growing demand for its cloud services, Oracle has announced plans to open a third public cloud region in Saudi Arabia. Located in Riyadh, the new cloud region will be part of a planned US$1.5 billion investment by Oracle to expand its cloud infrastructure capabilities in the Kingdom.

The new region in Riyadh will join Oracle’s existing cloud region in Jeddah and a planned Oracle cloud region in the new city of NEOM. Oracle will also expand the capacity of the Jeddah region, which opened in 2020.

In an interview with CIO Middle East last year, Leopoldo Boado Lama, senior vice president of business applications for Eastern and Central Europe, Middle East and Africa, made clear that the UAE and the wider Middle East are priority regions for Oracle. He noted that the company was investing heavily to enhance its infrastructure, physical presence, partner network, human resources, and other support capabilities in the region.

To support the growth of Abu Dhabi and the UAE’s digital economy, the company is bringing its annual flagship event, Oracle CloudWorld Tour, to Abu Dhabi on 3rd May.

“Oracle is fully committed to helping the UAE achieve its development objectives with the latest cloud technologies, and we are delighted to bring the tech industry’s most definitive event to Abu Dhabi. Oracle CloudWorld Tour Abu Dhabi will provide an inside look at how organisations are solving their most complex business challenges with the latest innovations in cloud infrastructure, databases, and applications across diverse industry sectors,” says Nick Redshaw, Senior Vice President, Technology Cloud, Middle East and Africa, and UAE Country Leader, Oracle.

Oracle Cloud has seen stellar growth in the Middle East over the past few years, with several hundred new cloud services and features rolled out. Organisations from across the region continue to turn to Oracle Cloud to run their most mission-critical workloads.

The company’s SVP for Technology Software, Middle East, Africa, Turkey and Levant, Cherian Varghese, says the investment in the third data centre is driven by ongoing demand and prepares for the region’s future growth.

“Oracle has been an early adopter of cloud in the Kingdom. We set up our first data centre in Jeddah, and then [the company invested in] NEOM, so we already have two data centres in the Kingdom,” says Varghese. “Now we have invested 1.5 billion dollars in our third data centre in Riyadh. We are also going to expand our Jeddah region, because Saudi Arabia is moving to digital transformation in a big way and cloud uptake is really good – no matter what capacity we put in, we are seeing more demand for future growth as well.”

“KSA has always been a big market for Oracle’s business. We have been in this country for 30 years and we have good, loyal customers in the Kingdom – from telcos and banking to the public sector. The good news is that there is a big push for digital transformation coming directly from the Government which means the industry has become more agile,” he adds.

Businesses are feeling growing pressure to act on climate change from all angles. However, despite data centres and transmission networks being responsible for nearly 1 per cent of energy-related greenhouse gas emissions, a new Deloitte study reports that little over half (54 per cent) of businesses have converted to energy-efficient technologies.

This number is concerning given that emerging digital technologies such as blockchain, IoT, artificial intelligence, and machine learning are further increasing demand for data centre services, as workloads are no longer confined to the core data centre and can run anywhere, including the edge. Australian businesses need to transition to sustainable IT solutions to support these emerging technologies while staying in line with Australia’s new commitment to a 43 per cent emissions reduction target by 2030 and net zero emissions by 2050.

New servers form the foundation of sustainable infrastructure, offering greater performance while taking up less space and consuming less energy – driving sustainability goals while enabling industry innovation.

Sustainable IT infrastructure is no longer just a nice-to-have

In the past, businesses sought IT systems that delivered the most ROI or the highest efficiency – however, with new local and global emissions reduction targets in place, this is no longer enough. IT infrastructure must run with the smallest possible carbon footprint and minimal environmental impact to meet Environmental, Social and Governance (ESG) goals and comply with government demands for sustainable innovation.

It’s not just the public sector pushing companies to change. A Google Trends search reveals Australians and New Zealanders are the third and fourth most interested in sustainability worldwide, with eight out of ten Australian consumers now expecting businesses to operate sustainably. Four in ten say they’ll stop purchasing from brands that don’t. Consumers want more from companies than they have in the past – and the right IT infrastructure is essential to meeting these expectations. Recent research commissioned by Dell Technologies, focused on Gen Z adults aged 18 to 26, confirms this sentiment: nearly two-thirds of Gen Z adults in Australia believe technology will play an important role in overcoming the biggest societal challenges, such as the climate crisis.

Transitioning to newer servers can form the basis of a modern, sustainable IT set-up, satisfying customers and keeping pace with government legislation. For example, Dell’s edge servers can operate at temperatures of up to 55 degrees Celsius. There’s no need to cool the room down to keep the servers operational, as there was with older server models. The result is advanced power management control and reduced power consumption – no longer just a nice-to-have, but essential.

Enabling emerging tech at the edge

The infrastructure must also support emerging technologies. This is critical in Australia to meet the continuing growth in demand for data and connectivity from industries like agriculture and healthcare that are relying on new tech to operate efficiently over vast swaths of land in remote locations. These industries are embracing emerging technologies, with data processed at the edge, to overcome ongoing supply chain issues in the unique and often harsh Australian climate and landscape.

In rural locations, latency matters, and technology must be brought closer to improve efficiency. However, the most significant opportunity for edge computing in Australia is its ability to support AI and automation, which will support and grow these industries.

For example, TPG Telecom trialled AI-enabled image processing, computer vision and edge computing technologies to enable multiple high-quality 4K video streams to count sheep at a regional livestock exchange, automating the process and removing human error.

In Australian healthcare, individuals seeking services can travel hours to receive critical care. Reports say that in deeply remote locations it can take up to 14 hours to reach a fully equipped hospital. Edge computing, together with emerging tech, enables rural access to digital health services and improves operations in major regional hospitals.

Townsville University Hospital in North Queensland is leading by example, harnessing low-latency, high-IOPS (input/output operations per second) storage at the edge to deliver better regional care. The new servers support emerging technologies, including AI, to improve ward management and patient flow reporting systems in a location cut off from the cloud computing services available in metropolitan cities. Staff can now report in near real time, improving efficiency and access to current information, and delivering better outcomes in the remote and Indigenous communities the hospital serves.

Innovative solutions like these are only possible with efficient servers that can handle high bandwidth and low latency workloads close to the data source. Next-generation technology architectures must support and accelerate modern workloads and serve the industries our economy relies on, whether on-premises in data centres or at the edge in remote locations – and they need to do it while being sustainable.

Supporting sustainable innovation

Dell Technologies’ latest generation of PowerEdge servers supports sustainable innovation, providing the foundation for an energy-efficient IT system while enabling emerging tech.

Designed with a focus on environmental sustainability, they provide customers with triple the performance of the previous generation of servers – more powerful and efficient technology in less floor space. They’re built with the Dell Smart Cooling suite, which increases airflow and reduces fan power by up to 52 per cent compared with previous generations, delivering performance with less power needed to cool the server.

To further reduce the carbon footprint, the servers use up to 35 per cent recycled plastic and are designed so components can be repaired, replaced, or easily recycled. Customers can also monitor carbon emissions and better manage their sustainability targets using the Dell OpenManage Enterprise Power Manager software.

The new PowerEdge servers are built to excel in demanding tasks, from AI and analytics to massive databases, supporting modern workloads and industry innovation – even in remote Australian locations. The servers can also be consumed as a subscription via Dell APEX, letting customers take a flexible approach and avoid paying for more computing resources than they need, which benefits increasingly tight budgets and sustainability efforts by reducing unnecessary energy consumption.

With new tech, we can have our cake and eat it too  

It seems like a lot to ask: powerful infrastructure that can enable the latest advancements in tech, improve efficiency, and support Australian industries operating in remote locations across large geographic areas. We’re asking tech to deliver all this while meeting ESG goals and aligning with Australia’s new carbon emissions targets. But the new reality is that IT infrastructure must be sustainable while maintaining high performance.

It’s not just a wish list; the tech is available. Adopting next-generation servers that can handle it all will enable Australia to meet its carbon goals while driving the innovation our industries need to thrive.


A lawsuit has been filed against 13 current and former IBM executives, including CEO and Chairman Arvind Krishna and former CEO Ginni Rometty, accusing the company of securities fraud — bundling mainframe sales together with those of poorly performing products in order to make them appear more profitable than they actually were.

The lawsuit was filed on January 13 in the U.S. District Court for the Southern District of New York, and seeks class action status for anyone who purchased IBM shares between April 4, 2017, and October 20, 2021.

The complaint alleges that the company and some of its executives “knowingly or recklessly engaged in a device, scheme, or artifice to defraud, engaged in acts, practices, and courses of business conduct designed to deceive investors.”

Essentially, it’s alleged that IBM promoted its less popular cloud, analytics, mobile, social, and security (CAMSS) products as “growth opportunities,” allowing investors to think they were much in demand when, in fact, they were being tacked onto three- to five-year mainframe Enterprise License Agreements (ELAs) that were popular with large banking, healthcare, and insurance company customers.

“Defendants misled the market, engaging in a fraudulent scheme to report billions of dollars in mainframe segment and other non-strategic revenues as Strategic Imperatives and CAMSS [“Cloud,” “Analytics,” “Mobile,” “Security,” and “Social,”] revenues, enabling Defendants to report publicly materially distorted segment information,” the lawsuit states. “Defendants portrayed Strategic Imperatives and CAMSS as growing materially beyond actual growth, materially misrepresenting IBM’s shift away from its stagnant legacy mainframe segment.”

According to IBM, “strategic imperatives” are products and initiatives that provide “differentiation driven growth and value.”

IBM is also alleged to have reallocated revenue from its non-strategic Global Business Services (GBS) segment to the company’s Watson-branded AI products — a strategic imperative included in the CAMSS product portfolio — in an attempt to convince investors that the company was successfully expanding beyond its legacy business. As a result, “IBM securities traded at artificially inflated prices,” financially damaging those who purchased company shares during the period covered by the suit, according to the complaint.

In response to a request for comment, IBM emailed a statement that said, “IBM’s long-standing commitment to trust, integrity and responsibility extends across all aspects of our business operations. A similar complaint has already been voluntarily dismissed.” 

In fact, the same complainant who filed the lawsuit last week — the June E. Adams Irrevocable Trust, for the benefit of Edward Robert Adams and others who may join the lawsuit — filed a similar lawsuit last April, then filed a notice in September moving for voluntary dismissal of the case “without prejudice,” reserving the ability to refile the suit.

That case was abandoned because of a disagreement with the lead law firm at the time about how to handle it, according to The Register, which first reported on the new filing. The law firm submitting the new lawsuit, The Rosen Law Firm, declined to comment.

The case filed last April alleged that IBM had bolstered its stock price and deceived shareholders by moving revenue from its non-strategic mainframe business to its strategic business segments. It further alleged that misrepresenting the true nature of CAMSS revenue allowed IBM executives to take home larger bonuses than they would otherwise have received.

While this new lawsuit once again alleges that IBM strategically shifted revenue, it omits the accusation related to executive bonuses.

According to the PACER electronic records system, the new case has been referred to District Judge Vincent L. Briccetti, who will have to decide whether to certify class-action status for the lawsuit.

Briccetti is currently adjudicating another ongoing lawsuit filed against IBM. In that case, filed in March last year, Gerald Hayden, an ex-IBM employee, accuses IBM of theft of trade secrets and intellectual property.  Hayden alleges that, while he worked for IBM, the company unlawfully used his proprietary business method — A2E — that he had developed to streamline enterprise sales.

Hayden’s lawsuit alleges that IBM, after promising it would protect his trade secrets, used A2E on projects that he was not working on, moving some of his clients to new projects in areas of the company including cloud and Watson — essentially transferring clients that he had attracted via the A2E methodology from GBS to newer strategic projects.

“IBM thus used A2E’s value proposition to drive IBM’s claimed reinvention of itself as a leader in the hybrid cloud computing industry and as an invaluable consultant to the financial services,” according to the lawsuit. “To add insult to injury, after stealing Plaintiff Hayden’s proprietary A2E business methodology and stripping him of his client base, IBM shortly thereafter terminated Plaintiff for ‘lack of work.’”

(Additional reporting by Marc Ferranti.)


A recent spate of high-profile security breaches at some of the largest enterprises in Australia has reminded everyone of the importance of security. Cyber crime is estimated to cost the Australian economy around $42 billion per year, and that number is only increasing.

The biggest challenge when it comes to cyber crime is that there are so many different security risks to manage. Three of the biggest risks moving into 2023 and beyond are:

Ransomware – a malicious program infects a computer, locking access to all files until a ransom is paid for an unlock key. Most ransomware programs, once they’ve infected one computer, will proliferate across the network and lock down the entire organisation’s IT environment. Even if the ransom is paid and the key received, there’s no guarantee that other malicious code won’t remain on the devices to continue gathering data for the criminals. Ransomware often starts from something as humble as someone in the organisation downloading the wrong file from an email.

Misconfigurations and unpatched systems – cyber criminals can purchase tools from dark web marketplaces that will scan IT networks and devices for poor configurations and unpatched systems they can exploit (a minimal defensive sketch of this kind of scan follows this list). This has become a particular concern with more people working remotely (and therefore away from the IT support team) during and post-pandemic. In many cases, the management of patching for remote devices has been less robust than it should be.

Social engineering – the cyber criminal “tricks” a victim into releasing confidential information, such as passwords and other logins. They achieve this via several means, but one of the most common is phishing, which typically involves convincing someone to download a piece of malware from a legitimate-looking email. That malware then gathers login data and other sensitive information that can give the criminals access to much more within the organisation.
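
To make the scanning risk concrete, here is a minimal defensive sketch in Python of the kind of sweep those tools perform: it checks a list of hosts for unexpectedly open service ports. The host addresses and port list are placeholders, and it should only ever be run against systems you are authorised to scan.

    import socket

    HOSTS = ["192.0.2.10", "192.0.2.11"]   # placeholder addresses (RFC 5737 test range)
    PORTS = [22, 80, 443, 3389]            # common services worth auditing

    def is_open(host, port, timeout=0.5):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host in HOSTS:
        exposed = [p for p in PORTS if is_open(host, p)]
        if exposed:
            print(f"{host}: open ports {exposed} - verify these are intended")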

In all three cases, the cyber criminals gain access via endpoint devices. While firewalls and other “perimeter” security defences remain critical for protecting an organisation and its assets, renewed focus has been placed on endpoint defences, because individual endpoints are too often the easiest thing to exploit.

Endpoint security needs a multifaceted approach

“Endpoint security” means more than antivirus software installed on the computer. A truly robust endpoint solution provides protection at every level of the device, from the core BIOS through the hardware and firmware to the application layer.

This is what Intel has aimed to deliver with the Intel vPro® platform. The vPro® platform encompasses performance, manageability, and security, and on the security front it aims to cover endpoint devices at every layer – below the OS, above the OS, and at the application layer.

It starts below the OS: vPro® provides attestable security status, using static and dynamic root-of-trust measurements in the Intel Trusted Platform Module to confirm below-the-OS integrity and detect abnormalities.

On the hardware layer, Intel boosts the security of devices with total component traceability that starts on the factory floor. Meanwhile, the secure boot-up tool in vPro® means that only untampered firmware and trusted OSes will load, preventing compromised devices from connecting to the network in the first instance.
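
Intel implements this in hardware, but the measurement idea behind a static root of trust is easy to illustrate: hash the firmware image and refuse to trust it unless the digest matches a known-good value. The sketch below is a concept illustration only, not how vPro® or the TPM actually works, and the file name and reference digest are placeholders.

    import hashlib

    # Placeholder known-good measurement recorded when the firmware shipped.
    KNOWN_GOOD = "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26"

    def measure(path):
        """Return the SHA-256 digest of the file at 'path'."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # "firmware.bin" is a placeholder image to measure before trusting it.
    if measure("firmware.bin") == KNOWN_GOOD:
        print("Measurement matches known-good value: image is untampered")
    else:
        print("Measurement mismatch: do not trust this image")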

vPro® also boosts security for virtualised environments. Organisations can run virtual machines for security-based isolation with application compatibility, across different operating systems. In addition, virtualised security software such as Windows Defender Credential Guard and Application Guard is boosted through Intel’s own virtualisation capabilities. This delivers superior protection against everything from kernel-level malware to browser-based attacks.

At the application layer, vPro® features a hardware-isolated Key Locker to enable password-less sign-ins (useful for mitigating the risk of social engineering tricking an employee into giving away their password). vPro® also features total memory encryption, designed to mitigate the risk of cold-boot attacks and to isolate compromised applications.

Finally, AI-driven CPU threat monitoring is designed to detect malware that has slipped past antivirus software. Intel has also integrated its Threat Detection Technology with the major mobile device management platforms, extending these capabilities to all technology that might interact with the network.

Building a holistic endpoint security practice

While the Intel vPro® solution has been designed to be a powerful and robust baseline security for endpoint devices, the reality is that security at the end point needs to be a proactive and ongoing effort by organisations. This is particularly true with so many devices connecting to company networks remotely.

vPro® will be most effective when backed by several best practice policies, including:

A zero-trust approach to user privileges. Administrators should maintain tight control over the access users have to sensitive data and parts of the network. This means a robust approach to access rights by device and user, with administrator permissions reserved for specialised users (a minimal sketch of the deny-by-default pattern follows this list).

Remote deployment of patches and updates. Tools are available that let IT teams remotely access PCs and deploy patches. The goal is to make patching as seamless as possible for the end user, not to rely on their input.

Ongoing training of employees. Ultimately, the best defence of all is to train employees so they know the security red flags to watch for. Research from Stanford University found that around 88 per cent of all data breaches occur because of human error. Solutions such as vPro® can help mitigate this risk, but an ongoing training regimen across the organisation is equally critical.
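
To illustrate the zero-trust point above, the essential pattern is deny-by-default: access is granted only when an explicit rule allows that user, on a compliant device, to reach that resource. This is a minimal sketch with invented roles and resources, not a description of any particular product.

    # Deny-by-default access check: nothing is reachable unless an explicit
    # (role, resource) grant exists AND the device passes a compliance check.
    # The roles and resources below are invented for illustration.
    ALLOWED = {
        ("finance-analyst", "payroll-db"),
        ("it-admin", "patch-server"),
    }

    def can_access(role, resource, device_compliant):
        """Grant access only on an explicit allow and a compliant device."""
        return device_compliant and (role, resource) in ALLOWED

    print(can_access("finance-analyst", "payroll-db", True))    # True
    print(can_access("finance-analyst", "patch-server", True))  # False: no grant
    print(can_access("it-admin", "patch-server", False))        # False: device fails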

Following the recent wave of data breaches, the Australian government has committed to increasing the penalties for organisations that suffer breaches as a result of poor security practices. These penalties are now stiff enough to pose an existential risk to many organisations. Investing in security solutions that address the gateways to the organisation’s data, as endpoint solutions do, and combining that with a renewed approach to security policy and training, will be critical for businesses protecting themselves into 2023 and beyond.

For more information on the security features of vPro®, click here.


Most organizations understand the profound impact that data is having on modern business. In Foundry’s 2022 Data & Analytics Study, 88% of IT decision-makers agree that data collection and analysis have the potential to fundamentally change their business models over the next three years.

The ability to pivot quickly to address rapidly changing customer or market demands is driving the need for real-time data. But poor data quality, siloed data, entrenched processes, and cultural resistance often present roadblocks to using data to speed up decision making and innovation.

We asked the CIO Experts Network, a community of IT professionals, industry analysts, and other influencers, why real-time data is so important for today’s business and how data helps organizations make better, faster decisions. Based on their responses, here are four recommendations for improving your ability to make data-driven decisions. 

Use real-time data for business agility, efficient operations, and more

Business and IT leaders must keep pace with customer demands while dealing with ever-shifting market forces. Gathering and processing data quickly enables organizations to assess options and take action faster, leading to a variety of benefits, said Elitsa Krumova (@Eli_Krumova), a digital consultant, thought leader and technology influencer.

“The enormous potential of real-time data not only gives businesses agility, increased productivity, optimized decision-making, and valuable insights, but also provides beneficial forecasts, customer insights, potential risks, and opportunities,” said Krumova.

Other experts agree that access to real-time data provides a variety of benefits, including competitive advantage, improved customer experiences, more efficient operations, and confidence amid uncertain market forces:

“Business operations must be able to make adjustments and corrections in near real time to stay ahead of the competition. Few companies have the luxury of waiting days or weeks to analyze data before reacting. Customers have too many options. And in some industries — like healthcare, financial services, and manufacturing — not having real-time data to make rapid critical adjustments can lead to catastrophic outcomes.” — Jack Gold (@jckgld), President and Principal Analyst at J. Gold Associates LLC.

“When insights from the marketplace are not transmitted in real time, the ability to make critical business decisions disappears. We’ve all experienced the pain of what continues to happen with the disconnect between customer usage metrics and gaps in supply chain data.” — Frank Cutitta (@fcutitta), CEO and Founder, HealthTech Decisions Lab

“Operationally, think of logistics. Real-time data provides the most current intelligence to manage the fleet and delivery, for example. Strategically, with meaningful real-time data, systemic issues are easier to identify, portfolio decisions faster to make, and performance easier to evaluate. At the end of the day, it drives better results in safety, customer satisfaction, the bottom line, and ESG [environmental, social, and governance].” — Helen Yu (@YuHelenYu), Founder and CEO, Tigon Advisory Corp.

“Businesses are facing a rapidly evolving set of threats from supply chain constraints, rising fuel costs, and shipping delays. Taking too much time to make a decision based on stale data can increase overall costs due to changes in fuel prices, availability of inventory, and logistics impacting the shipping and delivery of products. Organizations utilizing real-time data are the best positioned to deal with volatile markets.” — Jason James (@itlinchpin), CIO at Net Health

Build a foundation for continuous improvement

The experts offered several practical examples of how real-time data can help deliver continuous improvement in a variety of areas across the business, with the help of automation, which is a key capability for making data actionable.

“In the process of digital transformation, businesses are moving from human-dependent to digital business processes,” said Nikolay Ganyushkin (@nikolaygan), CEO and Co-founder of Acure. “This means that all changes, all transitions, are instantaneous. The control of key parameters and business indicators should also be based on real-time data, otherwise such control will not keep up with the processes.”

Real-time data and automated processes present a powerful combination for improving cybersecurity and resiliency.

“When I was coming up in InfoSec, we could only do vulnerability scanning between midnight and 6 am. We never got good results because systems were either off, or there was just nothing going on at those hours,” said George Gerchow (@georgegerchow), CSO and SVP of IT, Sumo Logic. “Today, we do them at the height of business traffic and can clearly see trends of potential service outages or security incidents.”

Will Kelly (@willkelly), an analyst and writer focused on the cloud and DevOps, said that harnessing real-time data is critical “in a world where delaying business and security decisions can prove even more costly than just a couple of years ago. Tapping into real-time data provides decision-makers with immediate access to actionable intelligence, whether a security alert on an attack in-progress or data on a supply chain issue as it happens.”

Real-time data facilitates timely, relevant, and insightful decisions down to the business unit level, said Gene De Libero (@GeneDeLibero), Chief Strategy Officer at GeekHive.com. Those decisions can have a direct impact on customers. “Companies can uncover and respond to changes in consumer behavior to promote faster and more efficient personalization and customization of customer experiences,” he said.

Deploy an end-to-end approach to storing, accessing, and analyzing data

To access data in real time — and ensure that it provides actionable insights for all stakeholders — organizations should invest in the foundational components that enable more efficient, scalable, and secure data collection, processing, and analysis. These components, which include cloud-based databases, data lakes and data warehouses, artificial intelligence and machine learning (AI/ML) tools, analytics, and internet of things capabilities, must be part of a holistic, end-to-end strategy across the enterprise:

“Real-time data means removing the friction and latency from sourcing data, processing it, and enabling more people to develop smarter insights. Better decisions come from people trusting that the data reflects evolving customer needs and captures an accurate state of operations.” — Isaac Sacolick (@nyike), StarCIO Leader and Author of Digital Trailblazer

“Organizations must use a system that draws information across integrated applications. This is often made simpler if the number of platforms is kept to a minimum. This is the only way to enable a real-time, 360-degree view of everything that is happening across an organization — from customer journeys to the state of finances.” — Sridhar Iyengar (@iSridhar), Managing Director, Zoho Europe

“Streaming processing platforms allow applications to respond to new data events instantaneously. Whether you’re distributing news events, moving just-in-time inventory, or processing clinical test results, the ability to process that data instantly is the power of real-time data.” — Peter B. Nichol (@PeterBNichol), Chief Technology Officer at OROCA Innovations
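
What responding to events “instantaneously” looks like in code can be shown with a toy stream processor: each event is handled the moment it arrives rather than accumulated for a later batch job. The inventory events below are invented; a production system would read from a streaming platform of the kind Nichol describes.

    # Toy stream processor: each event is acted on as it arrives, instead of
    # being accumulated for an end-of-day batch job. Events are invented.
    def event_stream():
        """Stand-in for a real feed (streaming topic, webhook, sensor, etc.)."""
        yield {"sku": "A1", "stock": 40}
        yield {"sku": "B2", "stock": 3}     # low stock: should trigger action
        yield {"sku": "C3", "stock": 120}

    def handle(event):
        """React immediately to the event that just arrived."""
        if event["stock"] < 5:
            print(f"Reorder {event['sku']} now (stock={event['stock']})")

    for event in event_stream():   # processing happens per event, in real time
        handle(event)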

As your data increases, expand your data-driven capabilities

The volume and types of data organizations collect will continue to increase. Forward-thinking leadership teams will continue to expand their ability to leverage that data in new and different ways to improve business outcomes.

“The power of real-time data is amplified when your organization can enrich data with additional intelligence gathered from the organization,” said Nichol. “Advanced analytics can enhance events with scoring models, expanded business rules, or even new data.”

Nichol offered the example of combining a customer’s call — using an interactive voice response system — with their prior account history to enrich the interaction. “By joining events, we can build intelligent experiences for our customers, all in real time,” he said.
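
A rough sketch of that enrichment pattern: the live call event is joined with stored account history, and the routing decision is made on the combined record rather than on the raw event alone. The customer data and routing rule are invented for illustration.

    # Enrich a live IVR call event with prior account history before routing.
    # The account data and routing rule are invented for illustration.
    ACCOUNT_HISTORY = {
        "cust-42": {"open_tickets": 2, "tier": "premium"},
        "cust-77": {"open_tickets": 0, "tier": "standard"},
    }

    def enrich_and_route(call_event):
        """Join the call with stored history, then route on the combined record."""
        history = ACCOUNT_HISTORY.get(call_event["customer_id"], {})
        enriched = {**call_event, **history}
        if enriched.get("tier") == "premium" or enriched.get("open_tickets", 0) > 1:
            return "priority-queue"
        return "standard-queue"

    print(enrich_and_route({"customer_id": "cust-42", "intent": "billing"}))  # priority-queue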

It’s one of the many ways that new technologies are increasing the opportunities to use real-time data to fundamentally change how businesses operate, now and in the future.

“As businesses become increasingly digitalized, the amount of data they have available is only going to increase,” said Iyengar. “We can expect real-time data to have a more significant impact on decision-making processes within leading, forward-thinking organizations as we head deeper into our data-centric future.”

Learn more about ways to put your data to work on the most scalable, trusted, and secure cloud.


You may think that the size of your business makes you less vulnerable to fraud attacks, but the opposite can often be the case. Sophisticated fraudsters have a good idea of which businesses have less protection or no dedicated fraud manager. In particular, they may target what they regard as relatively undefended businesses with card testing attacks. We’ve written about card testing before, but here’s a quick refresher on what it is and how it can affect your business.

What is card testing?

Fraudsters use card testing to determine the validity of stolen or fraudulently obtained card details. They attempt multiple purchases on an e-commerce website like yours (often using a botnet for speed and scale). If a transaction is approved, they know they can use the card. If, on the other hand, a card has already been canceled by its owner, authorization will be declined, and the fraudster will move on to testing the next card.

What is the impact of a card testing attack?

Our risk analysts have found that a card testing attack can negatively affect an unprepared business for several months, causing financial and other losses. Here’s a typical timeline of what you could experience:

Day 1 (attack day)

The fraudster submits potentially thousands of orders, many of which could be approved. Approved orders for physical goods could start to ship, resulting in lost product. Once card issuers become aware of what’s happening, they may ask your acquirer to shut down your ability to process transactions. You’ll need to provide proof of a mitigation strategy before you can restart transaction processing.

Day 2-30

Because the fraudster submitted so many transactions, you may have to pay significant authorization processing fees to your acquirer and payment gateway. For example, your authorization fees could jump from an average of $40 a month to $15,000 a month. To add insult to injury, you won’t earn any revenue on these transactions, either.
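
The arithmetic behind a jump like that is straightforward: assuming, for illustration, a fee in the region of 10 cents per authorization attempt, roughly 150,000 bot-driven attempts in a month would generate about $15,000 in fees by themselves. Actual per-attempt fees vary by acquirer and gateway.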

Day 31-120

Chargebacks and their associated fees start to roll in because transactions weren’t reversed during the initial attack.

Ongoing

Your business could experience brand and reputational damage and loss of customer trust.

How can I protect my business from card testing?

Unfortunately, once a card testing attack is in progress, there’s little you can do. Your future self will thank you if, instead of reacting to an attack, you take a proactive approach to preventing card testing (and other types of fraud).

No single solution can completely stop fraud, which is why we recommend a multi-layered strategy. Consider combining best practices like risk reviews, minimum payment thresholds, and early identification of anomalies (which we wrote about here) with a range of capable tools.

How Cybersource can help

In addition to following best practices, a fraud management tool is another layer of defense against card testing and other types of fraud.

If you already use Cybersource’s payment platform, consider integrating Fraud Management Essentials to help prevent fraudulent transactions (including card testing) before they get as far as authorization.

Fraud Management Essentials is ready to use and easy to configure. Developed with Cybersource’s expertise and built on Visa’s scale, its powerful features include:

Velocity rules that can track, count, and reject repeated transaction attempts that share common data elements or exceed transaction volume limits (a simplified sketch of this technique follows this list)

Amount thresholds that can limit transactions to those that are appropriate for your business
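
To give a feel for what a velocity rule does under the hood, here is a simplified sliding-window counter: transaction attempts that share a key, such as a card number or originating IP, are counted over a short window and rejected once they exceed a threshold. This illustrates the general technique only; it is not Cybersource’s implementation, and the limits are invented.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60   # look-back window
    MAX_ATTEMPTS = 5      # attempts allowed per key within the window
    attempts = defaultdict(deque)

    def allow_transaction(key, now=None):
        """Return False once 'key' exceeds the allowed rate; True otherwise."""
        now = time.time() if now is None else now
        window = attempts[key]
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()           # discard attempts older than the window
        if len(window) >= MAX_ATTEMPTS:
            return False               # velocity limit hit: reject this attempt
        window.append(now)
        return True

    # Six rapid attempts sharing one card fingerprint: the sixth is rejected.
    for i in range(6):
        print(i + 1, allow_transaction("card-fingerprint-123", now=1000.0 + i))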

Not an expert at managing e-commerce fraud? Don’t worry. To help you get started, Fraud Management Essentials comes with online training modules that you can access anytime.

By combining best practices with fraud and risk tools, you can better protect your business against card testing and other types of fraud and avoid the associated costs and negative impact on your brand.

Learn more about Fraud Management Essentials and Cybersource’s other fraud and risk solutions.


Artificial intelligence (AI) is one, if not the, key technology of our decade. Technological advances in this field are not only fundamentally changing our economies, industries, and markets, but are also exerting enormous influence on traditional business practices, many of which will disappear, while others will be transformed or completely reinvented.