In recent months, artificial intelligence has been everyone’s favorite buzzword. Silicon Valley startups and Fortune 500 companies alike are watching AI revolutionize their industries as it steadily picks up pace. But excitement, progress, and red flags such as AI washing are developing in equal measure. Some businesses, desperate to get on the gravy train and cash in on the hype, overstate their AI capabilities even though, in reality, the AI they employ is minimal or nonexistent.

This questionable marketing strategy can help them secure larger seed, Series A, and Series B funding rounds than non-AI startups. Last year alone, AI startups raised more than $50 billion in venture capital funding, according to GlobalData, and the numbers are expected to grow this year given the frenzy surrounding ChatGPT and others.

Given the capital poured into these startups, the AI washing phenomenon will only grow in intensity. The US Federal Trade Commission is fully aware of the danger and warns vendors to be transparent and honest when advertising their AI capabilities.

“Some products with AI claims might not even work as advertised in the first place,” Michael Atleson, an attorney in the FTC’s Division of Advertising Practices, wrote in a blog post. “In some cases, this lack of efficacy may exist regardless of what other harm the products might cause. Marketers should know that — for FTC enforcement purposes — false or unsubstantiated claims about a product’s efficacy are our bread and butter.”

In this complex landscape, it can be difficult to distinguish between legitimate AI solutions and marketing gimmicks.

“Companies need to apply a healthy dose of skepticism when faced with vendor claims about their AI products,” says Beena Ammanath, executive director of the Deloitte Global AI Institute. “As with anything, if it sounds too good to be true, it very likely is.”

If CIOs and their companies don’t find the correct answers, the consequences can include failed or late projects, financial losses, legal cases, reputational damage, and, ultimately, dismissal, says Donald Welch, CIO at New York University. “I’ve seen executives fired, and I can’t say it was the wrong decision.”

Fortunately, there are several strategies they can use to avoid mistakes.

AI-powered businesses need skilled employees

Vetting businesses that claim to use AI can be a time-consuming process. However, simple steps, such as a LinkedIn search, can uncover valuable insights into an organization’s profile.

“Examine the level of AI experience and education that the vendors’ employees have,” says Ammanath. “Companies that are developing AI solutions should have the talent to do so, meaning they have data scientists and data engineers with deep experience in AI, machine learning, algorithm development, and more.”

In addition to examining employees, CIOs could also look for evidence of collaboration with external AI experts and research institutions. This category includes partnerships with universities, participation in industry conferences and events, and contributions to open-source AI initiatives.

It’s also a good sign if a vendor has experience with similar projects or applications, since that shows it can deliver quality results.

“Carefully check the history of the supplier,” says Vira Tkachenko, chief technology and innovation officer at Ukrainian-American startup MacPaw. “If a company is an AI expert, it most likely has a history of research papers in this field or other AI products.”

Look for a well-crafted data strategy

Companies that truly integrate AI into their products also need a well-thought-out data strategy, because AI algorithms are only as good as the data behind them. They need to work with high-quality data, and the more abundant and relevant that data is, the better the results will be.

“AI systems are fueled by very large amounts of data, so these companies should also have a well-constructed data strategy and be able to explain how much data is being collected and from which sources,” Ammanath says.

Another thing to look at is whether these companies put enough effort into complying with regulatory requirements and maintaining high data privacy and security standards. With the rise of data privacy regulations such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), organizations have to be transparent about their data practices and provide individuals with control over their personal data. If this doesn’t happen, it should be a red flag.

Request evidence to back the claims

While buzzwords can be seductive, it helps to ask for evidence. “Asking the right questions and demanding proof of product claims is critically important to peel away the marketing and sales-speak to determine if a product is truly powered by AI,” Ammanath says.

CIOs who evaluate a specific product or service that appears to be AI-powered can ask how the model was trained, what algorithms were used, and how the AI system will adapt to new data.

“You should ask the vendor what libraries or AI models they use,” says Tkachenko. “They may have just built everything on a simple OpenAI API call.”
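
To make that point concrete, here is a minimal, hypothetical sketch of what such a product can look like under the hood: the entire “proprietary AI engine” is a single call to a third-party model. The function name and model choice are illustrative, not drawn from any real vendor.

```python
# Hypothetical illustration of AI washing at the code level: the vendor's
# entire "proprietary AI engine" is one call to a third-party model.
# Assumes the openai Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def proprietary_ai_engine(customer_text: str) -> str:
    """Marketed as in-house AI; in reality, a single vendor API call."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": customer_text}],
    )
    return response.choices[0].message.content
```

A product built this way may still be useful, but it is not the bespoke model development its marketing might imply, which is exactly what questions about libraries, training data, and algorithms are designed to surface.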

Matthias Roeser, partner and global leader of technology at management and technology consulting firm BearingPoint, agrees. He adds that the components and frameworks should be thoroughly understood, and that the assessment should include “ethics, biases, feasibility, intellectual property, and sustainability.”

This line of inquiry can help CIOs understand the true capabilities and limitations of a product, and decide whether or not to purchase it.

Pay attention to startups

Startups position themselves at the forefront of innovation. However, while many of them push the boundaries of what’s possible in the field of AI, some may simply exaggerate their capabilities to gain attention and money.

“As a CTO of a machine learning company myself, I often encounter cases of AI washing, especially in the startup community,” says Vlad Pranskevičius, co-founder and CTO of Ukrainian-American startup Claid.ai by Let’s Enhance. He has noticed the situation becoming more acute recently, adding that the phenomenon is especially dangerous during hype cycles like the current one, when AI is perceived as a new gold rush.

Pranskevičius believes, though, that AI washing will be kept in check in the near future as regulations around AI become more stringent.

Build the tech team’s reputation

It’s not uncommon for a company to acquire dubious AI solutions, and in such situations, the CIO may not necessarily be at fault. It could be “a symptom of poor company leadership,” says Welch. “The business falls for marketing hype and overrules the IT team, which is left to pick up the pieces.”

To prevent moments like these, organizations need to foster a collaborative culture in which the opinions of tech professionals are valued and their arguments are listened to carefully.

At the same time, CIOs and tech teams should build their reputation within the company so their opinion is more easily incorporated into decision-making processes. To achieve that, they should demonstrate expertise, professionalism, and soft skills.

“I don’t feel there’s a problem with detecting AI washing for the CIO,” says Max Kovtun, chief innovation officer at Sigma Software Group. “The bigger problem might be the push from business stakeholders or entrepreneurs to use AI in any form because they want to look innovative and cutting edge. So the right question would be how not to become an AI washer under the pressure of entrepreneurship.”

Go beyond the buzzwords

When comparing products and services, it’s essential to evaluate them with an open mind, looking at their attributes thoroughly. 

“If the only advantage a product or service has for you is AI, you should think carefully before subscribing,” Tkachenko says. “It’s better to study its value proposition and features and only start cooperation when you understand the program’s benefits beyond AI.”

Welch agrees: “Am I going to buy a system because they wrote it in C, C++, or Java?” he asks. “I might want to understand that as part of my due diligence on whether they’re going to be able to maintain the code, company viability, and things like that.”

Doing a thorough evaluation may help organizations determine whether the product or service they plan on purchasing aligns with their objectives and has the potential to provide the expected results. 

“The more complex the technology, the harder it is for non-specialists to understand it to the extent it enables you to verify that the application of that technology is correct and makes sense,” Kovtun says. “If you’ve decided to utilize AI tech for your company, you better onboard knowledgeable specialists with experience in the AI domain. Otherwise, your efforts might not result in the benefits you expect to receive.”

Follow AI-related news

Being up to date on AI-related products and the issues surrounding them can also help CIOs make informed decisions. This way, they can spot potential pitfalls in advance and, at the same time, leverage new ideas and technologies.

“I don’t think there’s enough education yet,” says Art Thompson, CIO at the City of Detroit. 

He recommends CIOs do enough research to avoid falling into a trap with new or experimental technology that promises more than it can deliver. If that happens, “the amount of time to rebid and sort out replacing a product can really keep staff from being able to get behind any change,” he says. “Not to mention the difficulty in people investing time to learn new technologies.”

In addition, staying informed on the latest AI-related matters can help CIOs anticipate regulatory changes and emerging industry standards, so they remain compliant and maintain a competitive edge.

And it’s more than just the CIO who needs to stay up to date. “Educate your team or hire experts to add the relevant capabilities to your portfolio,” says BearingPoint’s Roeser.

Expect additional regulatory action around AI

New regulations on the way could simplify the task of CIOs seeking to determine whether a product or service employs real AI technology. The White House recently issued its Blueprint for an AI Bill of Rights, with guidelines for designing AI systems responsibly. And more regulations might be issued in the coming years.

“The premise behind these actions is to protect consumer rights and humans from potential harm from technology,” Ammanath says. “We need to anticipate the potential negative impacts of technology in order to mitigate risks.”

Ethics shouldn’t be an afterthought

Corporations tend to influence the discourse on new technology, highlighting the potential benefits while often downplaying the potential negative consequences.

“When a technology becomes a buzzword, we tend to lose focus on the potentially harmful impacts it can have in society,” says Philip Di Salvo, a post-doctoral researcher at the University of St. Gallen in Switzerland. “Research shows that corporations are driving the discourse around AI, and that techno-deterministic arguments are still dominant.”

This belief that tech is the main driving force behind social and cultural change can obscure discussions around ethical and political implications in favor of more marketing-oriented arguments. As Di Salvo puts it, this creates “a form of argumentative fog that makes these technologies and their producers even more obscure and non-accountable.”

To address this, he says, the crucial challenge is communicating to the public what AI actually isn’t and what it can’t do.

“Most AI applications we see today — including ChatGPT — are basically constructed around the application of statistics and data analysis at scale,” says Di Salvo. “This may sound like a boring definition, but it helps to avoid any misrepresentation of what ‘intelligent’ refers to in the ‘artificial intelligence’ definition. We need to focus on real problems such as biases, social sorting, and other issues, not hypothetical, speculative longtermist scenarios.”

Work has changed dramatically because of the global COVID pandemic. Workers across every market sector in Australia are now spending their workdays alternating between offices and other locations such as their homes. It’s a hybrid work model that is certainly here to stay.

But moving workers outside the network perimeter presents cyber security challenges for every organisation. It expands the attack surface as enterprises ramp up their use of cloud services and enable staff to access key systems and applications from just about anywhere.

Senior technology leaders gathered in Melbourne recently to discuss the cyber security implications of a more permanently distributed workforce as their organisations move more services to the cloud. The conversation was sponsored by Palo Alto Networks.

Sean Duca, vice-president and regional chief security officer, Asia-Pacific & Japan at Palo Alto Networks, says that with the primary focus now on safely and securely delivering work to staff, irrespective of where they are, organisations need to think about where data resides, how it is protected, who has access to it, and how it is accessed.

“With many applications consumed ‘as a service’ or running outside the traditional network perimeter, the need to do access, authorisation and inspection is paramount,” Duca says.

“Attackers target employees’ laptops and the applications they use, which means we need to inspect the traffic for each application. The attack surface will continue to grow and also be a target for cybercriminals. This means that we must stay vigilant and have the ability to continuously identify when changes to our workforce happen, while watching our cloud estates at all times,” he says.

Brenden Smyth from Palo Alto Networks adds that the main impact of this more flexible workforce is that organisations no longer have one or two points of entry that are well controlled and managed.

“Since 2020, organisations have created many hundreds if not tens of thousands of points of entry with the forced introduction of remote working,” he says.

“On top of that, company boards need to consider the personal and financial impacts [of a breach] that they are responsible for in the business they run. They need to make sure users are protected within the office, as well as those users connecting from any location,” he says.

Gus D’Onofrio, chief information technology officer at the United Workers Union, believes that there will come a time when physical devices will be distributed among the workforce to ensure their secure connectivity.

“This will be the new standard,” he says.

Iain Lyon, executive director, information technology at IFM Investors, says the key to securing distributed workforces is to ensure the home environment is suitably secure so the employee can do the work they need to do.

“It may be that for certain classifications of data or user activity, we will need to set up additional technology in the home to ensure compliance with security policy. That challenge is both technical and requires careful human resource thought,” he says.

Meeting the demands of remote workers

During the discussion, attendees were asked if security capabilities are adequate to meet the new demands of connecting remote workers to onsite premises, infrastructure-as-a-service and software-as-a-service applications.

Palo Alto Networks’ Duca says existing cyber security capabilities are adequate only if they do more than provide connectivity (access and authorisation).

“It’s analogous to an airport; we check where passengers go based on their ID and boarding pass and inspect their person and belongings. If the crown jewel in an airport is the planes, we do everything to protect what and who gets on.

“Why should organisations do anything less?” he asks. “If you can’t do continuous validation and enforcement, what is the security efficacy of the security capability?”

Meanwhile, Suhel Khan, data practice manager at superannuation organisation Cbus, adds that distributed workforces need stronger perimeter and edge security systems, fine-grained ‘joiner-mover-leaver’ access control and entitlements, as well as geography-sensitive content management and distribution paradigms.
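
As a rough illustration of the ‘joiner-mover-leaver’ idea Khan describes, the sketch below re-derives a user’s entitlements from each lifecycle event instead of accumulating them over time; the roles and entitlement names are hypothetical.

```python
# Rough sketch of joiner-mover-leaver entitlement logic: access is always
# re-derived from the employee's current role, never accumulated, so a mover
# loses old entitlements and a leaver loses everything. All names hypothetical.
from typing import Optional, Set

ROLE_ENTITLEMENTS = {
    "analyst": {"crm_read", "reports_read"},
    "manager": {"crm_read", "crm_write", "reports_read", "approvals"},
}

def apply_lifecycle_event(event: str, new_role: Optional[str] = None) -> Set[str]:
    """Return the full entitlement set a user should hold after the event."""
    if event in ("joiner", "mover"):
        if new_role is None:
            raise ValueError("joiner/mover events need a target role")
        return set(ROLE_ENTITLEMENTS[new_role])  # exactly the new role's access
    if event == "leaver":
        return set()  # revoke everything at once
    raise ValueError(f"unknown lifecycle event: {event}")
```

The key design choice is that a ‘mover’ has access recomputed from the new role rather than added to the old set, which is where entitlement creep typically comes from.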

“We have reached a certain baseline in regard to the cyber security capabilities that are available in the market. The bigger challenge is procuring and integrating the right suite of applications that work across respective ecosystems,” he says.

Held back by legacy systems

Many enterprises are still running legacy systems and applications that can’t meet the demands of a borderless workforce.

Palo Alto Networks’ Smyth says the cyber security impacts of sticking with older systems and applications are endless.

“Connecting directly to SaaS and IaaS apps without security, patch management, vendor support – the list goes on – means organisations will not have full control of their environment,” he says.

Duca adds that organisations running legacy platforms could see an impact on employee productivity, and those platforms may not be able to deal with modern-day threats.

“Every organisation should use this as a point in time to reassess and rearchitect what the world looks like today and what it may look like tomorrow. In a dynamic and ever-changing world, businesses should look to a software-driven model as it will allow them to pivot and change according to their needs,” he says.

Like most enterprises that have built technical systems for core business functions over the past 10 years, Cbus has challenges around optimally integrating software suites for seamless end-to-end process flow, says Khan.

“There are several app modernisation transformation programs to help us move forward. I believe that there will always be ‘heritage systems’ to take care of and transition away from.

“The only difference is that in the near future, these older systems will be built on the cloud rather than [run] on-premise and we would be replacing such cloud-native legacy applications with autonomous intelligent apps,” Khan says.

Meanwhile, IFM Investors’ Lyon says that, like every firm, IFM has several key applications that are mature and do an excellent job.

“We are not being held back. Our use of the Citrix platform to encapsulate the stable and resilient core applications has allowed us to be agnostic to the borderless nature of work,” he says.

Centralising security in the cloud

The advent of secure access service edge (SASE) and SD-WAN technologies has seen many organisations centralise security services in the cloud rather than keep them at remote sites.

Palo Alto Networks’ Duca says gaps will continue to appear, as they have for many years, from inconsistent policies and enforcement. With the majority of apps and data sitting in the cloud, centralising cyber security services allows for consistent security close to the crown jewels.

“There’s no point sending the traffic back to the corporate HQ to send it back out again,” he says.

The decision about whether to centralise security services in the cloud or keep them at remote sites comes down to the organisation’s risk appetite.

“In superannuation, a good proportion of cyber security programs are geared towards being compliant and dealing with threats due to an uncertain global political outlook. Organisations that can afford to run their own backup/failsafe system on premise should consider [moving this function] to the cloud. Cloud-first is the dominant approach in a very dynamic market,” he says.

United Workers Union’s D’Onofrio adds that the pros of keeping security services at remote sites are faster access and response times, which is ideal for geographically distributed workforces and customer bases. A con, he says, is that a distributed footprint implies stretched security domains.

On the flipside, security domains are easier to manage if they are centralised in the cloud, but this will deliver slower response times for customers and staff who are geographically distant, he says.
