Artificial intelligence (AI) in 2023 feels a bit like déjà vu to me. Back in 2001, as I was just entering the venture industry, I remember the typical VC reaction to a start-up pitch was, “Can’t Microsoft replicate your product with 20 people and a few months of effort, given the resources they have?” Today, any time a new company is pitching its product that uses AI to do ‘X,’ the VC industry asks, “Can’t ChatGPT do that?”

Twenty-two years later, Microsoft is at the table once again. This time they’re making a $13 billion bet by partnering with OpenAI and bringing to market new products like Security Copilot to make sense of the threat landscape using the recently launched text-generating GPT-4 (more on that below). But just as Microsoft did not inhibit the success of thousands of software start-ups in the early 2000s, I do not expect Microsoft or any vendor to own this new AI-enabled market. 

However, the market explosion and hype around AI across the business and investment spectrum over the past few months has led people to ask: what are we to make of it all? And more specifically, how do CIOs, CSOs, and cybersecurity teams learn to deal with technology that may pose serious security and privacy risks?

The good, the bad, and the scary

I look at the good, the bad, and the scary of this recent Microsoft announcement. What’s incredible about ChatGPT and its offspring is that they bring an accessible level of functionality to the masses. They’re versatile, easy to use, and usually produce solid results.

Traditionally, organizations have needed sophisticated, trained analysts to sort through, analyze, and run processes for their security data. This required knowledge of particular query languages and configurations relevant to each product, like Splunk, Elastic, Palo Alto/Demisto, and QRadar. It was a difficult task, and the available talent pool was never enough.   

That difficulty in SIEM (Security Information and Event Management) and SOAR (Security Orchestration, Automation, and Response) still exists today. SIEM helps enterprises collect and analyze security-related data from servers, applications, and network devices. The data is analyzed to identify potential security threats, alert security teams to suspicious activity, and provide insights into a company’s security defenses. SIEM systems typically use advanced analytics to identify patterns, anomalies, and other indicators of potential threats.

SOAR builds on SIEM capabilities by automating security workflows and helping businesses respond more quickly and efficiently to security incidents. SOAR platforms can integrate with various security products, including enterprise firewalls, intrusion detection systems, and vulnerability scanners. SIEM/SOAR is where you orchestrate the actions of an incident response plan, and those actions drive the remediation process. Managing the process and products involved in remediation is difficult.

Now, Microsoft is putting a stake in the ground with its generative AI Security Copilot tool. With Security Copilot, the tech company is looking to boost the capability of its data security products for deep integrated analysis and responses. By integrating GPT-4 into Security Copilot, Microsoft hopes to work with companies to

more easily identify malicious activity;

summarize and make sense of threat intelligence;

gather data on various attack incidents by prioritizing the type and level of incidents; and

recommend to clients how to remove and remediate diverse threats in real-time.

And guess what? Theoretically, it should be easier to sort through all that data using GPT APIs and other tools, or to figure out how to apply them to incident data. These systems should also make automated response and orchestration much simpler.
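
To make that concrete, here is a minimal sketch of the kind of triage an LLM API could enable, assuming access to OpenAI's chat completions API. The model name, prompt, and incident fields are illustrative stand-ins, not Security Copilot's actual interface or data format.

```python
# Minimal sketch: asking a chat-completion LLM to triage de-identified incident data.
# The incident records, prompt, and model choice are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

incidents = [
    {"id": "INC-1042", "source": "firewall", "severity": "high",
     "summary": "Repeated outbound connections to a known C2 domain"},
    {"id": "INC-1043", "source": "endpoint", "severity": "medium",
     "summary": "Unsigned binary executed from a temp directory"},
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a security analyst. Prioritize incidents and suggest next steps."},
        {"role": "user",
         "content": "Triage these incidents:\n" + json.dumps(incidents, indent=2)},
    ],
)
print(response.choices[0].message.content)
```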

Overall, the emergence of GPT-4 may be a step towards the industry’s dream of “Moneyball for cyber,” allowing for a more robust defensive posture by leveraging the experience and wisdom of the crowds. And it will allow for a stronger defense of smaller organizations that do not have sufficient resources and expertise today.

It’s all about trust

However, there are still significant obstacles to overcome regarding adoption and trust. First and foremost, many organizations remain reluctant to share their incident data with others, even if de-identified, as it could potentially lead to leaked information, bad press, and brand damage. Sharing has been talked about for years but, for these reasons, is rarely done in a systematic, technology-delivered manner. The best sharing practice followed today is industry CISOs talking amongst their tight peer group when something significant occurs. Thus, given the previous reluctance to share in any meaningful way, I suspect the industry will take a long time to put its data in this or any third-party platform for fear that doing so exposes it in some way.

Another hurdle is overcoming hesitancy about privacy and security concerns. Microsoft claims that integrating data into its systems will maintain privacy and security, and that Security Copilot will not train on or learn from its customers’ incident or vulnerability data. However, without full transparency, the market will have lingering doubts. Users may fear that attackers could use the same GPT-based platform to develop attacks that target the vulnerabilities in their systems that it has become aware of, no matter what the ELA states to the contrary. Wouldn’t an attacker love to ask, “Write an exploit that allows me to navigate the defenses at Corporation X?”

There is also a question about how the system can learn from the newest attacks if it is not training on the data from customer organizations. The system would be more powerful if it did learn in the wild from customer incident and vulnerability data.

Even without specific details learned from any one customer, assuming full transparency on security and privacy is guaranteed, given the wide aperture of knowledge that can be obtained from other public and non-public sources, won’t this AI-based system become an adversary’s favorite exploit development tool?

Given all of this, there are potential risks and rewards involved in using ChatGPT in cybersecurity.

Microsoft has major ambitions for Security Copilot. It’s a tall order to fill, and I hope they get it right for everyone’s sake.

Know the potential consequences

GPT-4 under Microsoft’s auspices might be a great tool if Microsoft figures out ways to cut off all that potentially harmful activity. If it can train the system to focus on the positive, and do so without compromising proprietary internal data, it would be a potent tool for mainstream analysis of security incidents and defenses. To date, this has only been done with very sophisticated, high-priced people and complex systems that cater to the higher end of the market.

But suppose the mid-tier companies, who can’t afford top-quality cybersecurity resources or the best data security teams, choose to open up their data to Microsoft and GPT-4? In that case, I just hope they know there may be side effects. Caveat emptor!


If you believe the hype, generative AI has the potential to transform how we work and play with digital technologies.

Today’s eye-popping text-and-image generating classes of AI capture most of the limelight, but this newfangled automation is also coming to software development.

It is too soon to say what impact this emerging class of code-generating AI will have on the digital world. Descriptors ranging from “significant” to “profound” are regularly tossed around.

What we do know: IT must take a more proactive role in supporting software developers as they begin to experiment with these emergent technologies.

Generative AI Could Change the Game

Many generative AI coding tools have come to the fore, but perhaps none possesses more pedigree than Copilot, developed by GitHub, Microsoft’s code hosting and project management hub.

A type of virtual assistant, Copilot uses machine learning to recommend the next line of code a programmer might write. Just as OpenAI’s ChatGPT gathers reams of text from large corpuses of Web content, Copilot takes its bits and bytes insights from the large body of software projects stored on GitHub.

Although it’s early days for such tools, developers are excited about Copilot’s potential for enhanced workflows, productivity gains, and time saved. Empirical and anecdotal evidence suggests it can shave anywhere between 10% and 55% off coding time, depending on whom you listen to.

Today Copilot is targeted at professional programmers who have mastered GitHub and committed countless hours to creating and poring over code. Yet it’s quite possible that Copilot and other tools like it will follow the money and migrate downstream to accommodate so-called citizen developers.

DIY AI, for Non-Coders

Typically sitting in a business function such as sales or marketing, citizen developers (cit-devs) are non-professional programmers who use low-code or no-code software to build field service, marketing, and analytics apps through drag-and-drop interfaces rather than via the rigors of traditional hand-coding.

If the low-code/no-code evolution has come to your company, you may have marveled at how this capability freed your staff to focus on other tasks, even as you helped these fledgling developers color within the governance lines.

Considering their all-around efficacy, self-service, do-it-yourself tools are in demand: The market for low-code and no-code platforms is poised to top $27 billion in 2023, according to Gartner.

Now imagine what organizations will pony up for similar tools that harness AI to strap rocket boosters onto software development for non-techie coders. In the interest of catering to these staff, GitHub, OpenAI and others will likely create versions of their coding assistants that streamline development for cit-devs. GitHub, for example, is adding voice and chat interfaces to simplify its UX even more.

It’s not hard to imagine where it goes from there. Just as the API economy fostered new ecosystems of software interoperability, generative AI plugins will facilitate more intelligent information services for big brands. Already OpenAI plugins are connecting ChatGPT to third-party applications, enabling the conversational AI to interact with APIs defined by developers.

One imagines this AI-styled plug-and-play will broaden the potential for developers, both of the casual and professional persuasion. Workers will copilot coding tasks alongside generative AI, ideally enhancing their workflows. This emerging class of content creation tools will foster exciting use cases and innovation while affording your developer teams more options for how they execute their work. This will also mean development will continue to become more decentralized outside the realm of IT.

Keep an Open Mind for the Future

The coming convergence of generative AI and software development will have broad implications and pose new challenges for your IT organization.

As an IT leader, you will have to strike the balance between your human coders—be they professionals or cit-devs—and their digital coworkers to ensure optimal productivity. You must provide your staff  guidance and guardrails that are typical of organizations adopting new and experimental AI.

Use good judgment. Don’t enter proprietary or otherwise corporate information and assets into these tools.

Make sure the output aligns with the input, which will require understanding of what you hope to achieve. This step, aimed at pro programmers with knowledge of garbage in/garbage out practices, will help catch some of the pitfalls associated with new technologies.

When in doubt give IT a shout.

Or however you choose to lay down the law on responsible AI use. Regardless of your stance, the rise of generative AI underscores how software is poised for its biggest evolution since the digital Wild West known as Web 2.0.

No one knows what the generative AI landscape will look like a few months from now, let alone how it will impact businesses worldwide.

Is your IT house in order? Are you prepared to shepherd your organization through this exciting but uncertain future?

Learn more about our Dell Technologies APEX portfolio of cloud experiences, which affords developers more options for how and where to run workloads while meeting corporate safeguards: Dell Technologies APEX


Salesforce’s business intelligence platform, Tableau, is getting generative AI features  in the form of Tableau GPT, built on the company’s proprietary Einstein GPT AI engine, which has  also been integrated into other products such as Slack.

“Tableau GPT can enhance and automate things like analyzing data, exploring it, sharing it, consuming it. The generative AI engine introduces a number of really exciting use cases where for example, analyzing data feels more like a conversation via a chatbot as opposed to drag and drop,” said Pedro Arellano, head of product at Tableau.

“Other use cases include the engine anticipating questions that users might ask based on what’s already in the data or taking hundreds of insights and explaining them using very easy to understand summaries,” Arellano said.

Einstein GPT, the foundation for Tableau GPT, comprises various large language models (LLMs) including those from OpenAI, Cohere, and internal, proprietary Salesforce models, noted Sanjeev Mohan, principal analyst at independent consulting firm SanjMo.

These internal models were  driven by Salesforce’s investments in companies with natural language processing abilities, and insights about how enterprises conduct data analytics, according to Amalgam Insights principal analyst Hyoun Park.

“Tableau previously acquired Narrative Science, a natural language generation solution for analytics. In addition, Salesforce has made strong investments in data science over the years such as BeyondCore, Metamind, and Datorama and has hundreds of data scientists in house as well,” Park said.

In addition, Tableau GPT has been given a data security and governance layer in order to protect enterprise data from internal and external data leakages or unauthorized access, according to Arellano.

The addition of the governance and security can be attributed to Salesforce’s effort to build trust among customers, especially at a time when companies are banning the use of OpenAI’s ChatGPT over data leak concerns, analysts said.  

“These layers protect users who are afraid that their prompts will be used to retrain LLMs. Also, it can guard against LLM hallucinations,” SanjMo’s Mohan said.  

Tableau GPT is expected to be available in pilot later this year, the company said.

Proactive data analytics with Tableau Pulse

Salesforce has also released a new flavor of data analytics under an offering dubbed Tableau Pulse, which the company said offers proactive analytics.

“It is sort of a personal guide for your data, where it knows your data. It knows the goals you’re trying to achieve with your data. And it helps you reach those goals,” Arellano said.

Tableau Pulse will also use Tableau GPT to help enterprise users make better, faster decisions using automated analytics on personalized metrics in an “easy-to-understand way,” Arellano said, adding that Pulse can surface insights in both natural language and visual formats.

Use cases include alerts when there is an unusual change in data or metrics, and help for users to drill down to the reason for the anomaly, the company said.

These insights can be further shared with colleagues via collaboration platforms such as Jira or Slack in order to find a resolution, Salesforce added.

“The automatic nature of the analyses provided by Pulse increases productivity but also introduces consistency and comprehensiveness since the same analytics are applied wherever necessary,” said David Menninger, research director at Ventana Research.

However, Tableau might be playing catch up with other vendors, analysts said.

“A number of vendors have developed and are refining ways to look at the graph of individual and user behaviors and interactions with data and then glean insights and make recommendations based on changes,” said Doug Henschen, principal analyst at Constellation Research.

Cloud-based products, according to Henschen, tend to have a leg up in analyzing user behaviors and data interactions at scale.

“Products that started out as server-based products, like Tableau, have typically taken longer to develop graph and personalization capabilities that can be delivered consistently across both cloud and on-premises deployments,” Henschen said.

Though many vendors offer automated insights, the addition of generative AI-produced narratives “will help make these insights more complete and more easily delivered in multiple languages,” Ventana’s Menninger said.

Tableau Pulse is expected to be available in pilot later this year, the company said.

Data Cloud for Tableau to unify data for analytics

In addition to Tableau Pulse, Salesforce is offering Data Cloud for Tableau to unify enterprises’ data for analytics.

The plan is to layer Tableau on top of the Data Cloud, which was released last year in September at Dreamforce under the name “Genie.”

“With Tableau, all of a company’s customer data can be visualized to help users explore and find insights more easily. Data Cloud also supports zero-copy data sharing, which means that users can virtualize Data Cloud data in other databases, making it instantly available to anyone,” the company said in a statement.

Data Cloud for Tableau will also come with data querying capabilities, the company added.

There are many business advantages that Data Cloud for Tableau can provide, according to Henschen.

“Advantages include bringing together all your disparate data, separating compute and storage decisions, and enabling many types of analysis and many different use cases against the data cloud without replication and redundant copies of data,” Henschen said.

Salesforce’s move to combine its Data Cloud with Tableau can be attributed to Tableau having reached a ceiling in its core analytic discovery capabilities, according to Park.

“It is being pressured to increasingly support larger analytics use cases that push into data management and data warehousing. Although Tableau is not going to be a full-fledged data warehouse, it does want to be a source of master data where analytic data is accessed,” Park said.

Data Cloud for Tableau, however, is part of a strategy to compete with data lakehouse and data warehouse vendors, and an effort to own or control more data, Menninger said. The integration of Tableau and Data Cloud will lead to direct competition with the likes of Qlik, Tibco, IBM, Oracle, and SAP, analysts said.

Data Cloud for Tableau is expected to be made available later this year.

Other updates include a new developer capability, dubbed VizQL (visual query language) Data Service, that allows enterprise users to embed Tableau anywhere into an automated business workflow.

“VizQL Data Service is a layer that will sit on top of published data sources and existing models and allows developers to build composable data products with a simple programming interface,” the company said.

Salesforce woos new users with Tableau generative AI

Generally, the addition of generative AI features to Tableau can be seen as an attempt to attract customers who are not analytics or data experts. Business intelligence suites face a problem of adoption as at least 35% of employees are not willing to learn about analytics or data structures, Park said.

“To get past that, analytics needs a fundamentally different user interface. This combination of a natural language processing, natural language generation, generative AI, and jargon-free inputs that translate standard language into data relationships provides that user interface,” Park added.

Another reason why the new features could attract customers is the disinterest of business users in using dashboards. “These users would rather use natural language which has context. Up until now, NLP was very difficult for computers to handle but the new LLMs changed that,” Mohan added.


Generative artificial intelligence (GenAI) tools such as Azure OpenAI have been drawing attention in recent months, and there is widespread consensus that these technologies can significantly transform the retail industry. The most well-known GenAI application is ChatGPT, an AI agent that can generate a human-like conversational response to a query. Other well-known GenAI applications can generate narrative text to summarize or query large volumes of data, generate images and video in response to descriptive phrases, or even generate complex code based on natural language questions.

GenAI technologies offer significant potential benefits for retail organizations, including speedy price adjustments, customized behavior-based incentives, and personalized recommendations in response to searches and customer preferences. These technologies can create new written, visual, and auditory content based on natural language prompts or existing data. Their advanced analytic capabilities can help determine better locations for new stores or where to target new investments. Generative AI chatbots can provide faster, more relevant customer assistance leading to increased customer satisfaction and in some cases, reduced costs and customer churn. To gain a deeper understanding of how retail organizations can benefit from Generative AI applications, we spoke with James Caton, Practice Leader, Data and Artificial Intelligence, at Microsoft, and Girish Phadke, Technology Head, Microsoft and Cloud Platforms, at Tata Consultancy Services (TCS). James and Girish discussed three ways Generative AI is transforming retail: speeding innovation, creating a better customer experience, and driving growth.

How can Generative AI speed innovation in retail?

James Caton: We’re already seeing a lot of data-driven innovation in the industry. Microsoft Azure OpenAI Service, which provides access to OpenAI’s large language models, allows more probing and deep questioning of data. A frontline worker could have the ability to “chat with their data,” to conversationally query inventory or shipping options for example, see the response in a chart, and ask for trend analysis and deeper insights.

It essentially gives you an assistant or a Copilot to help do your job. Imagine having several assistants that are parsing the data, querying the data, and bringing data reports and visual graphs back to you. And you can send the copilot back and say, “please look here,” and “I want more information there.” As a retail sales manager, OpenAI will allow you to develop more innovative solutions, more tailored strategies, and more personalized experiences.

How does Generative AI’s conversational flow enable a more compelling customer experience?

Girish Phadke: Existing call center tools can be conversational, and they do have access to 360-degree customer views, but there is a limit in terms of how far back they can go and what kind of data they can process to answer the customer’s query.

The new Generative AI models can go deeper into historical information, summarize it, and then present it in a human-like conversation. These models can pull data from multiple interactions and sources, from a huge amount of information, and create a response that is the best fit to answer a particular customer’s question. Essentially, tailoring the answer not only based on a massive knowledge base of data, but also on the individual customer’s preferences.

Can you share an example of how one of your customers has benefited from using OpenAI to process and analyze vast amounts of information?

Caton: CarMax reviews millions of vehicles. The challenge for new buyers was there were too many reviews, and they could not get a good sense for why people liked or disliked a certain vehicle. CarMax used the Azure OpenAI Service to analyze millions of reviews and present a summary. If a customer was looking at a certain make and model, the Azure OpenAI service summarized the reviews and presented the top three reasons people liked it and the top three reasons they disliked it. The technology summarized millions of comments, so that customers didn’t have to, thus improving the customer experience and satisfaction.

Are there steps that retailers can take to get ready for OpenAI and similar tools?

Caton: If a retailer wants to take advantage of these capabilities, the first thing they need to do is move their data to the Microsoft Cloud. Then, partners like TCS can help them develop their preferred use case, such as applying Generative AI to inventory or sales data or helping develop more tailored marketing campaigns. TCS knows the industry as well as most retailers. They understand the technology, how to manage and migrate data, and how to optimize to make best use of the new capabilities.

Phadke: We understand this is a new technology; retailers are likely to be cautious. They can start by augmenting existing capabilities, such as with more comprehensive Azure ChatGPT, and adjust the governance models as they learn more about their data and processes. As confidence grows, they can begin to automate the larger deployment mechanism.

How long does it typically take for an organization to see a return on investment from Generative AI?

Phadke: With the right strategy and right set of use cases, a system can start generating a positive ROI very quickly. TCS offers a six-week discovery assessment to help with ideation and strategy development. Within 12 to 16 weeks of adopting Azure OpenAI Service, an organization can have a more scaled-out implementation.

Do retail organizations have to embrace Generative AI technologies right now if they want to be able to compete?

Phadke: I think if some retailers choose to ignore this technology, they risk falling behind. Earlier adopters might get a competitive advantage. This technology is disruptive in nature and will have a significant impact on many industries, including retail.

Caton: OpenAI is the fastest application to hit 100 million users —faster than Facebook, Instagram, or WhatsApp. The risk for slow adopters is that their competitors are adopting it and might gain a competitive advantage. It is being adopted very widely, very quickly.

Learn how to master your cloud transformation journey with TCS and Microsoft Cloud.

TCS

Girish Phadke, Technology Head, Microsoft and Cloud Platforms, TCS
Girish Phadke leads Edge to Cloud Solutions, AI, and Innovation focus areas within the TCS Microsoft Business Unit. He provides advisory to customers on next generation architectures and business solutions. He tracks and incubates new technologies through TCS Microsoft Business Unit Innovation hubs across the globe. Girish is based out of Mumbai, India, and in his free time loves watching science fiction movies.
https://www.linkedin.com/in/girish-phadke-ab25034/

Microsoft

James Caton, Practice Leader, Data & Artificial Intelligence, Microsoft
James Caton serves as an AI Practice Leader at Microsoft, helping global system integrators build sustainable Azure Artificial Intelligence businesses. He has held technical and commercial leadership positions at software companies SAS and IBM, as well as with Larsen & Toubro Construction where he led their India Smart Cities business. James lives in Ave Maria, Florida with his wife and three daughters.
https://www.linkedin.com/in/jmcaton/


Data velocity – how quickly data is generated and moved – is the key to achieving any number of business outcomes. But it’s especially important in customer experience, according to IDC’s Marci Maddox, Research Vice President Digital Experience Strategies, and Aly Pinder, Research Vice President Aftermarket Services Strategies.

“We’re finding that the customer experience is the first or second priority for digital leaders today,” Maddox said during a recent Foundry webinar with Pinder.

Being able to act quickly with data could be one use for generative AI and other emerging tech. Maddox will share insights into how organizations can use these tools at FutureIT Washington, D.C., on May 11 at the Convene conference center in Arlington, Va.

“We need to look at ways the customer experience is more than just a marketing activity, but it becomes everyone’s focus as we move to a more empathetic customer experience and customer-centric organization,” Maddox said.

In addition to Maddox’s talk, the event will focus on leadership, technology, and personal development with speakers from Boeing, Momentous Capital, National Association of Corporate Directors, and more.

Learn more: Register here for FutureIT Washington, D.C. Not in the area? Join us at an upcoming program in Toronto, Chicago, New York or Southern California.


Amazon’s cloud computing division, AWS, is shifting its focus towards large language models (LLMs) and generative AI-based offerings as it continues to see a downward spiral in overall revenue growth.

Amazon Web Services (AWS) has posted 16% year-on-year growth for the first quarter of fiscal year 2023 on the back of revenue of $21.4 billion. However, this revenue growth is slower compared to the 20%, 27.5%, and 33% growth seen in the fourth quarter, third quarter, and second quarter of 2022, respectively. 

The slowdown in growth, according to top executives of the company, can be attributed to enterprises optimizing cloud spend due to uncertain macroeconomic conditions.

“Given the ongoing economic uncertainty, customers of all sizes in all industries continue to look for cost savings across their businesses, similar to what you’ve seen us doing at Amazon. As expected, customers continue to evaluate ways to optimize their cloud spending in response to these tough economic conditions in the first quarter,” Brian Olsavsky, Amazon’s chief financial officer, said during an earnings call.

“We are seeing these optimizations continue into the second quarter with April revenue growth rates about 500 basis points lower than what we saw in Q1,” Olsavsky added.

In response to the trend, Olsavsky said that AWS’ sales and support teams have continued to spend much of their time helping customers optimize their spending to help them “better weather this uncertain economy.”

However, AWS top executives remained bullish on the growth perspectives of the division, citing the opportunity around the conversion of on-premises workloads.

“The new customer pipeline looks strong. The set of ongoing migrations of workloads to AWS is strong. The product innovation and delivery is rapid and compelling. And people sometimes forget that 90-plus percent of global IT spend is still on-premises,” Amazon CEO Andy Jassy said during the call.

AWS shifts focus to generative AI

In addition, Jassy hinted that a major chunk of the company’s cloud business will come from machine learning requirements.

“And in my opinion, few folks appreciate how much new cloud business will happen over the next several years from the pending deluge of machine learning that’s coming,” Jassy said.

The company has already been making capital expenditure adjustments to reroute funds toward the improvement of large language models and generative AI capabilities.

Amazon has been bringing down spending at its fulfillment and transportation divisions year-on-year and has decided to route the savings to AWS to be invested in infrastructure and large language models, Olsavsky said.

AWS’ strategy, according to Jassy, is to target revenue generation by providing compute resources, training capabilities and applications for generative AI and large language models.

“I would say that there’s three macro areas in this space. If you think about maybe the bottom layer here, is that all of the large language models are going to run on compute. And the key to that compute is going to be the chip that’s in that compute,” Jassy said, adding that the company has already launched its Trainium chips and accelerators for memory-intensive tasks, ideal for AI-heavy workloads.

The second layer, according to Jassy, would be to train foundation models, and AWS has just launched its Amazon Bedrock service that provides multiple foundation models designed to allow companies to customize and create their own generative AI applications, including programs for general commercial use.

The third macro area or layer will be offering applications for developers, such as ChatGPT-enabled Copilot from Microsoft-owned GitHub, said Jassy, citing Amazon CodeWhisperer.

“Every single one of our businesses inside Amazon are building on top of large language models to reinvent our customer experiences, and you’ll see it in every single one of our businesses, stores, advertising, devices, entertainment,” the chief executive added.

Other investments in the first quarter by the cloud computing division include a new region in Malaysia and a second region in Australia.


In a few short months, generative AI has become a very hot topic. Looking beyond the hype, generative AI is a groundbreaking technology, enabling novel capabilities as it moves rapidly into the enterprise world. 

According to a CRM survey, 67% of IT leaders are prioritizing generative AI for their business within the next year and a half—despite looming concerns about generative AI ethics and responsibility. And 80% of those who think generative AI is “overhyped” still believe the technology will improve customer support, reduce workloads and boost organizational efficiencies.

In the enterprise world, generative AI has arrived (discussed in my previous CIO.com article about enterprises putting generative AI to work here).

Preserving trust

As enterprises race to adopt generative AI and begin to realize its benefits, there is a simultaneous mandate in play. Organizations must proactively mitigate generative AI’s inherent risks, in areas such as ethics, bias, transparency, privacy and regulatory requirements.

Fostering a responsible approach to generative AI implementations enables organizations to preserve trust with customers, employees and stakeholders. Trust is the currency of business. Without it, brands can be damaged as revenues wane and employees leave. And once breached, trust is difficult to regain. 

That’s why preserving trust—before it is broken—is so essential. Here are ways to proactively preserve trust in generative AI implementations.

Mitigating bias and unfairness

Achieving fairness and mitigating bias are essential aspects of responsible AI deployment. Bias can be unintentionally introduced from the AI training data, algorithm and use case. Picture a global retail company using generative AI to personalize promotional offers for customers. The retailer must prevent biased outcomes like offering discounts to specific demographic groups only. 

To do that, the retailer must create diverse and representative data sets, employ advanced techniques for bias detection and mitigation, and adopt inclusive design practices. On an ongoing basis, continuous monitoring and evaluation of AI systems will ensure fairness is maintained throughout their lifecycle.

Establishing transparency and explainability

In addition to mitigating bias and unfairness, transparency and explainability in AI models are vital for establishing trust and ensuring accountability. Consider an insurance company using generative AI to forecast claim amounts for its policyholders. When the policyholders receive the claim amounts, the insurer needs to be able to explain the reasoning behind how they were estimated, making transparency and explainability fundamental.

Due to the complex nature of AI algorithms, achieving explainability, while essential, can be challenging. 

However, organizations can invest in explainable AI techniques (e.g., data visualization or decision trees), provide thorough documentation, and foster a culture of open communication about AI decision-making processes. 

These efforts help demystify the inner workings of AI systems and promote a more responsible, transparent approach to AI deployment.

Safeguarding privacy

Privacy is another key consideration for responsible AI implementation. Imagine a healthcare organization leveraging generative AI to predict patient outcomes based on electronic health records. Protecting the privacy of individuals is a must-have, top priority. Generative AI can inadvertently reveal sensitive information or generate synthetic data resembling real individuals. 

To address privacy concerns, businesses can implement best practices like data anonymization, encryption and privacy-preserving AI techniques, such as differential privacy. Concurrently, organizations must remain compliant with data protection regulations such as the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA).
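
As a concrete illustration of one of the privacy-preserving techniques named above, here is a minimal sketch of differential privacy via the Laplace mechanism. The query, epsilon value, and patient count are illustrative assumptions, not a production implementation.

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
# The epsilon value and the example query are illustrative choices.
import numpy as np

def laplace_count(true_count: int, sensitivity: float = 1.0, epsilon: float = 0.5) -> float:
    """Return a noisy count: adding or removing one patient changes the true count by
    at most `sensitivity`, so Laplace noise scaled by sensitivity/epsilon bounds what
    any single record can reveal about an individual."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# e.g., "how many patients were readmitted within 30 days?"
print(laplace_count(true_count=128))
```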

Complying with regulatory requirements

Finally, the evolving regulatory landscape for AI technologies demands a robust governance framework that guides ethical and responsible AI deployment. 

Organizations can refer to resources like the European Union’s Ethics Guidelines for Trustworthy AI or the Organisation for Economic Cooperation and Development (OECD) AI Principles to help define AI policies and principles. Establishing cross-functional AI ethics committees and developing processes for monitoring and auditing AI systems help organizations stay ahead of regulatory changes. By adapting to changes in regulations and proactively addressing potential risks, organizations can demonstrate their commitment to responsible AI practices. 

Responsible AI deployment

At Dell Technologies, we have articulated our ethical AI principles here. We know that responsible AI use plays a crucial role in an enterprise’s successful adoption of generative AI. To realize the extraordinary potential of generative AI, organizations must continuously improve and adapt their practices and address evolving ethical challenges like bias, fairness, explainability, transparency, privacy preservation and governance. 

Read about enterprise use cases for generative AI in this CIO.com article.

***

Dell Technologies. To help organizations move forward, Dell Technologies is powering the enterprise generative AI journey. With best-in-class IT infrastructure and solutions to run generative AI workloads and advisory and support services that roadmap generative AI initiatives, Dell is enabling organizations to boost their digital transformation and accelerate intelligent outcomes. 

Intel. The compute required for generative AI models has put a spotlight on performance, cost and energy efficiency as top concerns for enterprises today. Intel’s commitment to the democratization of AI and sustainability will enable broader access to the benefits of AI technology, including generative AI, via an open ecosystem. Intel’s AI hardware accelerators, including new built-in accelerators, provide performance and performance per watt gains to address the escalating performance, price and sustainability needs of generative AI.


Over the last few months, both business and technology worlds alike have been abuzz about ChatGPT, and more than a few leaders are wondering what this AI advancement means for their organizations. Let’s explore ChatGPT, generative AI in general, how leaders might expect the generative AI story to change over the coming months, and how businesses can stay prepared for what’s new now—and what may come next.

What is ChatGPT?

ChatGPT is a product of OpenAI. It’s only one example of generative AI.

GPT stands for generative pre-trained transformer. A transformer is a type of AI deep learning model that was first introduced by Google in a research paper in 2017. Five years later, transformer architecture has evolved to create powerful models such as ChatGPT.

ChatGPT has significantly improved the number of tokens it can accept (4,096 tokens vs 2,049 in GPT-3), which effectively allows the model to “remember” more about a current conversation and inform subsequent responses with context from previous question-answer pairs. Every time the maximum number of tokens is reached, the conversation resets without context—reminiscent of a conversation with Dory from Pixar’s Finding Nemo.
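
A rough way to see that limit in practice is to count tokens with the open-source tiktoken tokenizer. The conversation text below is made up, and the 4,096-token figure is simply the limit cited above; the encoding name is the one published for ChatGPT-era models.

```python
# Minimal sketch: counting how much of a context window a conversation consumes.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by ChatGPT-era models
conversation = (
    "User: What is SIEM?\n"
    "Assistant: Security Information and Event Management collects and analyzes "
    "security data from servers, applications, and network devices..."
)
tokens = enc.encode(conversation)
print(f"{len(tokens)} tokens used of a 4,096-token window")
if len(tokens) > 4096:
    print("Oldest turns would be dropped, and the model loses that context.")
```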

ChatGPT was trained on a much larger dataset than its predecessors, with far more parameters. ChatGPT was trained with 175 billion parameters; for comparison, GPT-2 had 1.5 billion (2019), Google’s LaMDA had 137 billion (2021), and Google’s BERT had 0.3 billion (2018). These attributes make it possible for users to inquire about a broad set of information.

ChatGPT’s conversational interface is a distinguishing feature of how users access its knowledge. That interface, paired with the larger token window and an expansive knowledge base built on many more parameters, helps ChatGPT seem quite human-like.

ChatGPT is certainly impressive, and its conversational interface has made it more accessible and understandable than its predecessors. Meanwhile, however, many other labs have been developing their own generative AI models. Examples are emerging from Microsoft, Amazon Web Services, Google, IBM, and more, plus from partnerships among players. The frequency of new generative AI releases, the scope of their training data, the number of parameters they are trained on, and the tokens they can take in will continue to increase. There will be more developments in the generative AI space for the foreseeable future, and they’ll become available rapidly. It was just over a year from GPT-2 (February 2019) to GPT-3 (May 2020), two and a half years to ChatGPT (November 2022), and only four months to GPT-4 (March 2023).

How ChatGPT and generative AI fit with conversational AI


Text-based generative AI can be considered a key component in a broader context of conversational AI. Business applications for conversational AI have, for several years already, included help desks and service desks. A natural language processing (NLP) interpretation layer underpins all conversational AI, as you must first understand a request before responding. Enterprise applications of conversational AI today leverage responses from either a set of curated answers or results generated from searching a named information resource. The AI might use a repository of frequently asked questions (producing a pre-defined response) or an enterprise system of record (producing a cited response) as its knowledge base.
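
To make the curated-answer pattern described above concrete, here is a minimal sketch that matches a request against a small FAQ knowledge base and returns a pre-defined, citable response. The FAQ entries, knowledge-base IDs, and the simple word-overlap matching are illustrative stand-ins for a real NLP interpretation layer.

```python
# Minimal sketch of a curated-answer conversational flow: interpret the request,
# pick the closest FAQ entry, and return its pre-defined response with a citation.
import re

def words(text: str) -> set:
    return set(re.findall(r"[a-z]+", text.lower()))

faq = {  # hypothetical knowledge base of curated answers
    "How do I reset my password?":
        ("Use the self-service portal under Account > Security.", "KB-0042"),
    "How do I request a new laptop?":
        ("Submit a hardware request in the service desk catalog.", "KB-0107"),
}

def answer(question: str) -> str:
    best = max(faq, key=lambda k: len(words(question) & words(k)))
    text, source = faq[best]
    return f"{text} (source: {source})"

print(answer("I need to reset my password"))
```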

When generative AI is introduced into conversational applications, it is impossible today to provide answers that include the source of the information. The nature of a large language model’s generative capability is to create a novel response by compiling and restructuring information from a body of information. This becomes problematic for enterprise applications, as it is often imperative to cite the information source to validate a response and allow further clarification.

Another key challenge of generative AI today is its obliviousness to the truth. It is not a “liar,” because that would indicate an awareness of fact vs. fiction. It is simply unaware of truthfulness, as it is optimized to predict the most likely response based on the context of the current conversation, the prompt provided, and the data set it is trained on. In its current form, generative AI will oblige with information as prompted, which means your question may lead the model to produce false information. Any rules or restrictions on responses today are built in as an additive “safety” layer outside of the model construct itself.

For now, ChatGPT is finding most of its applications in creative settings. But one day soon, generative AI like ChatGPT will draw responses from a curated knowledge base (like an enterprise system of record), after which more organizations will be able to apply generative AI to a variety of strategic and competitive initiatives, as some of these current challenges could be addressed.

Leaders can start preparing today for this eventuality, which could come in a matter of months, if recent developments indicate how fast this story will continue to move: in November of 2022, ChatGPT was only accessible via a web-based interface. By March of 2023, ChatGPT’s maker OpenAI announced the availability of GPT-3.5 Turbo, an application programming interface (API) via which developers can integrate ChatGPT into their applications. The API’s availability doesn’t resolve ChatGPT’s inability to cite sources in its responses, but it indicates how rapidly generative AI capabilities are advancing. Enterprise leaders should be thinking about how advances in generative AI today could relate to their business models and processes tomorrow.
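
As a minimal sketch of what that integration can look like, the snippet below calls the published chat completions endpoint directly over HTTPS. The endpoint and payload follow OpenAI's documented chat-completions format; the helper function name and prompt are illustrative, not part of any vendor's product.

```python
# Minimal sketch: wrapping the chat completions API in a helper an application can call.
import os
import requests

def ask_model(prompt: str) -> str:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-3.5-turbo",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask_model("Summarize our returns policy in two sentences: ..."))
```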

What it takes to be ready

Organizations that have already gained some experience with generative AI are in a better position than their peers to apply it one day soon. The next impressive development in generative AI is fewer than six months away. How can organizations find or maintain an edge? The principles of preparing for the great “what’s next?” remain the same, whether the technology in question is generative AI or something else.

It’s hard to achieve a deep, experiential understanding of new technology without experimentation. Leaders should define a process for evaluating these AI technology developments early, as well as an infrastructure and environment to support experimentation.

They should respond to innovations in an agile way: starting small and learning by doing. They’ll keep track of innovation in the marketplace and look for opportunities to refresh their business and competitive strategies as AI advances become available to them.

They should seed a small cross-functional team to monitor these advancements and experiment accordingly. Educate that team about the algorithms, data sources, and training methods used for a given AI application, as these are critical considerations for enterprise adoption. If they haven’t already, they should develop a modular and adaptable AI governance framework to evaluate and sustain solutions, specifically including generative abilities, such as the high-level outline below:

[Figure: high-level outline of an AI governance framework. Source: Protiviti]

Leaders need not wonder what ChatGPT, other generative AI, and other revolutionary technologies might mean for their business and competitive strategy. By remaining vigilant to new possibilities, leaders should create the environment and infrastructure that supports identification of new technology opportunities and prepare to embrace the technology as it matures for enterprise adoption.

Learn more about Protiviti’s Artificial Intelligence Services.

Connect with the Author

Christine Livingston
Managing Director, Technology Consulting


Vlad Sejnoha, Partner at Glasswing Ventures and former CTO & SVP R&D at Nuance, and Kleida Martiro, Principal at Glasswing Ventures, are contributing authors.

Generative AI (Artificial Intelligence) and its underlying foundation models represent a paradigm shift in innovation, significantly impacting enterprises exploring AI applications. For the first time, because of generative AI models, we have systems that understand natural language at a near-human level and can generate and synthesize output in various media, including text and images. Enabling this technology are powerful, general foundation models that serve as a basis or starting point for developing other, more specialized generative AI models. These foundation models are trained on vast amounts of data. When prompted with natural language instructions, one can use these learnings in a context-specific manner to generate an output of astonishing sophistication. An analogy to generative AI used to create images may be the talented artist who, in response to a patron’s instructions, combines her lifelong exposure to other artists’ work with her inspiration to create something entirely novel.

As news cycles eclipse one another about these advancements, it may seem like generative AI sprang out of nowhere for many business and executive leaders. Still, the reality is that these new architectures are built on approaches that have evolved over the past few decades. Therefore, it is crucial to recognize the essential role the underlying technologies play in driving advancement, enterprise adoption, and opportunities for innovation.

How we got here

The most notable enabling technologies in generative AI are deep learning, embeddings, transfer learning (all of which emerged in the early to mid-2000s), and neural net transformers (invented in 2017). The ability to work with these technologies at an unprecedented scale – both in terms of the size of the model and the amount of training – is a recent and critically important phenomenon.

Deep learning emerged in academia in the early 2000s, with broader industry adoption starting around 2010. A subfield of machine learning – deep learning – trains models for various tasks by presenting them with examples. Deep learning can be applied to a particular type of model called an artificial neural net, which consists of layers of interconnected simple computing nodes called neurons. Each neuron processes information passed to it by other neurons and then passes the results on to neurons in subsequent layers. The parameters of the neural net models are adjusted using the examples presented to the model in training. The model can then predict or classify new, previously unseen data. For instance, if we have a model trained on thousands of pictures of dogs, that model can be leveraged to detect dogs in previously unseen images.
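
A toy sketch may help make that idea concrete. The two-layer network, the classic XOR task, and the layer sizes below are invented for illustration (rather than, say, a real image classifier); the point is simply that parameters are adjusted from labeled examples and the trained network then maps inputs to predictions.

```python
# Toy illustration: layers of simple "neurons" whose parameters are adjusted from
# labeled examples, then used to predict on inputs. Task and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training examples
y = np.array([[0], [1], [1], [0]], dtype=float)               # labels (XOR)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # first layer of neurons
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output neuron
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):                          # adjust parameters from examples
    h = sigmoid(X @ W1 + b1)                    # each neuron processes its inputs...
    out = sigmoid(h @ W2 + b2)                  # ...and passes results to the next layer
    d_out = (out - y) * out * (1 - out)         # backpropagate the prediction error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2))  # predictions should move toward [0, 1, 1, 0]
```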

Transfer learning emerged in the mid-2000s and quickly became popular. It is a machine-learning technique that uses knowledge from one task to improve the model performance on another task. An analogy to understand this powerful technique is learning one of the “Romance Languages,” like Spanish. Due to their similarities, one may find it easier to learn another romance language, like Italian. Transfer learning is essential in generative AI because it allows a model to leverage knowledge from one task into another related task. This technique has proven groundbreaking as it mitigates the scarcity of data challenge. Transfer learning can also improve the diversity and quality of generated content. For example, a model pre-trained on a large dataset of text can be fine-tuned on a smaller dataset of text specific to a particular domain or style. This allows the model to generate more coherent and relevant text for a particular domain or style.
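
Here is a minimal sketch of that pattern in PyTorch, assuming a publicly available pre-trained image model and a small, hypothetical three-class task. The frozen backbone carries the knowledge transferred from the original task, while only the new head is trained on the smaller, domain-specific data (here, random tensors stand in for real images).

```python
# Minimal sketch of transfer learning: reuse a pre-trained model, freeze it, and
# fine-tune only a small task-specific head. Model and data choices are illustrative.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # knowledge from task A
for param in model.parameters():
    param.requires_grad = False                  # freeze the transferred knowledge

model.fc = nn.Linear(model.fc.in_features, 3)    # new head for task B (e.g., 3 classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# one illustrative training step on a small, domain-specific batch
images = torch.randn(8, 3, 224, 224)             # stand-in for task-B images
labels = torch.randint(0, 3, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"fine-tuning loss: {loss.item():.3f}")
```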

Another technique that became prevalent in the early to mid-2000s was embedding. This is a way to represent data, most frequently words, as numerical vectors. While consumer-facing technologies, such as ChatGPT, demonstrate what feels like human-like logic, they are a great example of the power of word embeddings. Word embeddings are designed to capture the semantic and syntactic relationships between words. For example, the vector space representation of the words “dog” and “lion” would be much closer to each other than to the vector space for “apple.” The reason is that “dog” and “lion” have considerable contextual similarities. In generative AI, this enables a model to understand the relationships between words and their meaning in context, making it possible for models like ChatGPT to provide original text that is contextually relevant and semantically accurate.
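
A toy sketch illustrates the geometry. The three-dimensional vectors below are invented for illustration (real embeddings have hundreds of learned dimensions), but cosine similarity is the standard way this closeness is measured.

```python
# Toy illustration of word embeddings: vectors whose geometry reflects meaning,
# so "dog" sits nearer "lion" than "apple". The vectors here are made up.
import numpy as np

vectors = {
    "dog":   np.array([0.8, 0.6, 0.1]),
    "lion":  np.array([0.7, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("dog vs lion: ", round(cosine(vectors["dog"], vectors["lion"]), 2))   # high
print("dog vs apple:", round(cosine(vectors["dog"], vectors["apple"]), 2))  # low
```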

Embeddings proved immensely successful as a representation of language and fueled an exploration of new, more powerful neural net architectures. One of the most important of such architectures, the “transformer,” was developed in 2017. The transformer is a neural network architecture designed to process sequential input data, such as natural language, and perform tasks like text summarization or translation. Notably, the transformer incorporates a “self-attention” mechanism. This allows the model to focus on different parts of the input sequence as needed to capture complex relationships between words in a context-sensitive manner. Thus, the model can learn to weigh the importance of each part of the input data differently for each context. For example, in the phrase, “the dog didn’t jump the fence because it was too tired,” the model looks at the sentence to process each word and its position. Then, through self-attention, the model evaluates word positions to find the closest association with “it.” Self-attention is used to generate an understanding of all the words in the sentence relative to the one we are currently processing, “it.” Therefore, the model can associate the word “it” with the word “dog” rather than with the word “fence.”
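
The core of that self-attention computation can be sketched in a few lines. The token embeddings and weight matrices below are random stand-ins, so unlike a trained model they will not actually link “it” to “dog,” but the mechanics of queries, keys, values, and softmax weights are the ones described above.

```python
# Minimal sketch of scaled dot-product self-attention: each token's query is compared
# against every token's key, and the resulting weights mix the values. Random weights
# here are stand-ins for a trained model's learned parameters.
import numpy as np

rng = np.random.default_rng(1)
tokens = ["the", "dog", "didn't", "jump", "the", "fence",
          "because", "it", "was", "too", "tired"]
d = 16
X = rng.normal(size=(len(tokens), d))                    # stand-in token embeddings
Wq, Wk, Wv = (0.1 * rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv                         # queries, keys, values
scores = Q @ K.T / np.sqrt(d)                            # compare every token with every other
scores -= scores.max(axis=-1, keepdims=True)             # numerical stability for softmax
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
output = weights @ V                                     # context-weighted mix of values

it = tokens.index("it")
print("attention weights for 'it':",
      list(zip(tokens, np.round(weights[it], 2))))
```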

Progress in deep learning architectures, efficiently distributed computation, and training algorithms and methodologies have made it possible to train bigger models. As of the time of writing this article, the largest model for which details are public is OpenAI’s GPT-3, which consists of 175 billion parameters; GPT-4 parameter information is not yet available. GPT-3 is also noteworthy because it has “absorbed” one of the largest publicly known quantities of text, 45TB of data, drawn from web content and other forms of human expression.

While the combined use of techniques like transfer learning, embedding, and transformers for Generative AI is evolutionary, the impact on how AI systems are built and on the adoption by the enterprise is revolutionary. As a result, the race for dominance of the foundation models, such as the popular Large Language Models (LLMs), is on with incumbent companies and startups vying for a winner-take-all or take-most position.

While the capital requirements for foundation models are high, favoring large incumbents in technology or extremely well-funded startups (read billions of dollars), opportunities for disruption by Generative AI are deep and wide across the enterprise. 

Understanding the technology stack

To effectively leverage the potential of generative AI, enterprises and entrepreneurs should understand how its technology layers are categorized, and the implications each has on value creation.

The most basic way to understand the technologies around generative AI is to organize them in a three-layer technology “stack.” At the bottom of this stack are the foundation models, which represent a transformational wave in technology analogous to personal computing or the web. This layer will be dominated by entrenched incumbents such as Microsoft, Google, and Meta, rather than new startup entrants, not too different from what we saw with the mobile revolution or cloud computing. There are two critical reasons for this phenomenon. First, the scale at which these companies operate and the size of their balance sheets are significant. Second, today’s incumbents have cornered the primary resources that fuel foundation models: compute and data.

At the top of this stack are applications – software developed for a particular use case designed for a specific task. Next in the stack is the “middle layer.” The middle layer is where enabling technologies power the applications at the top layer and extend the capabilities of foundation models. For example, MosaicML allows users to build their own AI on their data by turning data into a large-scale AI model that efficiently runs machine learning workloads on any cloud in a user’s infrastructure. Notably, an in-depth assessment of the middle layer is missing from this discussion. Making predictions about this part of the stack this early in the cycle is fraught with risk. While free tools by incumbents seeking to drive adoption of their foundation models could lead to a commoditization of the middle layer, cross-platform or cross-foundational model tools that provide added capabilities and optimize for models best fit for a use case could become game-changers.

In the near term, preceding further development in the enabling products and platforms at the middle layer, the application layer represents the bulk of opportunities for investors and builders in generative AI. Of particular interest are user-facing products that run their proprietary model pipelines, often in addition to public foundation models. These are end-to-end applications. Such vertically integrated applications, from the model to the user-facing application layer, represent the greatest value as they provide defensibility. The proprietary model is valuable because continuously re-training a model on proprietary product data creates defensibility and differentiation. However, this comes at the cost of higher capital intensity and creates challenges for a product team to remain nimble.

Use cases in generative AI applications

Proper consideration of near-term application-layer use cases and opportunities for generative AI requires knowing the incremental value of data or content and fully understanding the implications of imperfect accuracy. Near-term opportunities will therefore be those where additional data or content carries high economic value for the business and where the consequences of imperfect accuracy are low.

Additional considerations include how structured the data is for training and generation, and the role of human-in-the-loop design, in which a human remains an active participant and can check the work of the model.

Opportunities for entrepreneurs and enterprises in generative AI lie in use cases where the data is highly structured, such as software code, and where human-in-the-loop review can mitigate the risk of the mistakes an AI will inevitably make.
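As one way to picture human-in-the-loop review, the sketch below gates generated output behind a confidence threshold and routes anything below it to a reviewer. The generate_with_confidence helper and the 0.85 threshold are hypothetical stand-ins, not any particular product’s API.

```python
# A minimal human-in-the-loop sketch: output below a confidence threshold
# is queued for a reviewer before anything reaches the end user.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float

CONFIDENCE_THRESHOLD = 0.85  # illustrative cutoff

def generate_with_confidence(prompt: str) -> Draft:
    # Placeholder: a real system would call an LLM and derive a score,
    # e.g. from token log-probabilities or a separate verifier model.
    return Draft(text=f"Suggested answer for: {prompt}", confidence=0.62)

def handle_request(prompt: str) -> str:
    draft = generate_with_confidence(prompt)
    if draft.confidence >= CONFIDENCE_THRESHOLD:
        return draft.text  # auto-approve high-confidence output
    return f"[QUEUED FOR HUMAN REVIEW] {draft.text}"  # a human checks the model's work

print(handle_request("Can I get a refund on an annual plan?"))
```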

Industry verticals and use cases with these characteristics represent the initial opportunity with generative AI. They include:

Content creation: Generative AI can improve creativity, rate of content creation, and content quality. The technology can also be leveraged to analyze the performance of different types of content, such as blogs or social media ads, and provide insight into what is resonating with the audience.

Customer service and support: Generative AI can augment and automate customer service and support through chatbots or virtual assistants. This helps businesses provide faster and more efficient service to their customers while reducing the cost of customer service operations. By pre-training on large amounts of text data, foundation models can learn to accurately interpret customer inquiries and provide more precise responses, leading to improved customer satisfaction and reduced operating costs. Differentiation among new entrants will largely depend on their ability to use fine-tuned, smaller models that better understand industry-specific language, jargon, and common customer questions. Those models become the mechanism for delivering tailored support to each customer and for continuously refining the product toward more accurate and effective outcomes.
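As a rough sketch of this pattern, the snippet below wraps a chat model behind a single support function, assuming the OpenAI Python client’s 0.x-style interface (newer client versions differ). The model name, API key, and system prompt are placeholders; the system prompt stands in for the domain knowledge a fine-tuned smaller model would otherwise learn from support data.

```python
# A minimal sketch of an LLM-backed support assistant (illustrative only).
import openai  # pip install openai (0.x-style client shown here)

openai.api_key = "YOUR_API_KEY"  # placeholder credential

def answer_ticket(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a support agent for a billing SaaS. "
                        "Answer concisely and point to the relevant help-center article."},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep answers consistent for support use
    )
    return response.choices[0].message["content"]

print(answer_ticket("Why was my card charged twice this month?"))
```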

Sales and marketing: AI can analyze customer behavior and preferences and generate personalized product recommendations, helping businesses increase sales and customer engagement. In addition, fine-tuned models can help sales and marketing teams target the right customers with the right message at the right time. By analyzing data on customer behavior, a model can predict which customers are most likely to convert and which messaging will be most effective, and that becomes a strong differentiator for a new entrant seeking to capture market share.
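As a toy illustration of the scoring-and-targeting workflow, the sketch below uses a plain scikit-learn logistic regression in place of the fine-tuned model described above; the behavioral features and labels are synthetic.

```python
# A toy conversion-scoring sketch: score customers, then target the
# highest-propensity segment. Data and model choice are illustrative.
from sklearn.linear_model import LogisticRegression

# features per customer: [sessions_last_30d, emails_opened, items_in_cart]
X = [[2, 0, 0], [8, 3, 1], [15, 6, 2], [1, 1, 0], [20, 9, 4]]
y = [0, 0, 1, 0, 1]  # 1 = converted

model = LogisticRegression().fit(X, y)

prospects = [[12, 5, 3], [3, 0, 0]]
for customer, p in zip(prospects, model.predict_proba(prospects)[:, 1]):
    print(customer, f"conversion probability: {p:.2f}")
```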

Software and product development: Generative AI will simplify the entire development cycle, from code generation and completion to bug detection, documentation, and testing. Foundation models allow developers to focus on design and feature building rather than correcting errors in the code. For instance, new entrants can provide AI-powered assistants that are fine-tuned to understand programming concepts and provide context-aware assistance, helping developers navigate complex codebases, find relevant documentation, or suggest code snippets. This can help developers save time, sharpen their skills, and improve code quality.
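As a small sketch of the “find relevant documentation” capability, assuming the open-source sentence-transformers library and an illustrative set of doc snippets, an assistant can embed both the docs and the developer’s question and return the closest match.

```python
# A minimal semantic doc-search sketch for a coding assistant.
# Model name and snippets are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

docs = [
    "PaymentClient.charge() retries twice on network errors before raising.",
    "Use Migrator.run() to apply pending schema migrations at startup.",
    "FeatureFlags.is_enabled() reads from a local cache refreshed every 60 seconds.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open embedding model
doc_embeddings = model.encode(docs, convert_to_tensor=True)

question = "How do I apply database migrations when the service boots?"
query_embedding = model.encode(question, convert_to_tensor=True)

best = util.cos_sim(query_embedding, doc_embeddings).argmax().item()
print("Most relevant doc:", docs[best])
```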

Knowing the past to see the future

While we are still in the early days of the immense enterprise and startup value that generative AI and foundation models will unlock, everyone from entrepreneurs to C-suite decision-makers benefits from understanding how we arrived at where we are today. Moreover, understanding these concepts helps with realizing the potential for scale, reframing, and growing business opportunities. Knowing where the opportunities lie means making smart decisions about what promises to be an inspiring future ahead.


Generative AI (GenAI) is taking the world by storm. During my career, I’ve seen many technologies disrupt the status quo, but none with the speed and magnitude of GenAI. Yet, we’ve only just begun to scratch the surface of what is possible. Now, GenAI is emerging from the consumer realm and moving into the enterprise landscape. And for good reason: GenAI is empowering big transformations.

My previous article covered how an enterprise’s unique needs are best met with a tailored approach to GenAI. Doing so on the front end will avoid re-engineering challenges later. But how can enterprises use GenAI and large language models today? From optimizing back-office tasks to accelerating manufacturing innovations, let’s explore the revolutionary potential of these powerful AI-driven technologies in action across various industries.

Enterprise Use Cases for GenAI

GenAI fuels product development and innovation

In product development, GenAI can play a crucial role in fueling the ideation and design of new products and services. By analyzing market trends, customer feedback and competitors’ offerings, AI-driven tools can generate potential product ideas and features, offering unique insights that help businesses accelerate innovation. For instance, automotive manufacturers can use GenAI to design lighter-weight components, via material science innovations and novel component designs, that help make vehicles more energy efficient.

GenAI crafts marketing campaigns

Large language models can produce highly personalized marketing campaigns based on customer data and preferences. By analyzing purchase history, browsing behavior and other factors, these models generate tailored messaging, offers and promotions for individual customers to increase engagement, conversion rates and customer loyalty. Gartner estimates that 30% of outbound marketing messages from enterprise organizations will be AI-driven by 2025, increasing from less than 2% in 2022.
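A minimal sketch of the personalization pattern: assemble customer signals into a prompt and hand it to whichever model the team uses. The customer record, the offer, and the call_llm placeholder below are illustrative assumptions, not a real integration.

```python
# A minimal personalized-campaign sketch; call_llm is a hypothetical
# stand-in for any hosted or self-managed model endpoint.
def call_llm(prompt: str) -> str:
    # Placeholder: a real system would send the prompt to an LLM here.
    return f"[model output for prompt of {len(prompt)} characters]"

customer = {
    "name": "Dana",
    "last_purchase": "trail running shoes",
    "browsing": ["hydration packs", "GPS watches"],
    "loyalty_tier": "gold",
}

prompt = (
    "Write a two-sentence promotional email.\n"
    f"Customer: {customer['name']}, loyalty tier {customer['loyalty_tier']}.\n"
    f"Recently bought: {customer['last_purchase']}.\n"
    f"Recently browsed: {', '.join(customer['browsing'])}.\n"
    "Offer: 15% off one accessory this week. Keep the tone friendly, not pushy."
)

print(call_llm(prompt))
```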

GenAI enhances customer support

GenAI can provide instant, personalized responses to customer queries in a remarkably human-like manner. Large language models can offer relevant solutions, make product recommendations and engage in natural-sounding conversations. As a result, customers get faster responses and resolutions, and organizations can free up human agents to focus on more complex issues. For example, Amazon uses GenAI to power Alexa and its automated online chat assistant, both of which are available 24/7/365.

GenAI optimizes back-office tasks 

Generative AI models can automate and optimize various internal processes, such as drafting reports, creating standard operating procedures, and crafting personalized emails. Streamlining these tasks can reduce operational costs, minimize human error and increase overall efficiency.

GenAI writes software code

Through a technique known as neural code generation, GenAI enhances software development processes by automating code generation, refactoring and debugging. GenAI models can produce code snippets and suggest relevant libraries within the context and requirements of specific programming tasks. In this way, GenAI can help increase developer productivity, reduce errors and speed up development while providing more secure and reliable software. 
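As a small sketch of neural code generation, assuming the Hugging Face transformers library with the open Salesforce/codegen-350M-mono checkpoint as an illustrative model choice, a prompt containing a function signature and docstring can be completed into code.

```python
# A minimal code-generation sketch with an open-source code model;
# the model choice and prompt are illustrative, not a vendor product.
from transformers import pipeline

generator = pipeline("text-generation", model="Salesforce/codegen-350M-mono")

prompt = '''def is_palindrome(s: str) -> bool:
    """Return True if s reads the same forwards and backwards."""
'''

completion = generator(prompt, max_new_tokens=40, do_sample=False)[0]["generated_text"]
print(completion)
```

In practice, a suggestion like this would flow through the same human-in-the-loop review as any other generated artifact, with the developer accepting, editing, or rejecting it.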

GenAI’s Powerful Potential

These diverse use cases demonstrate the immense potential of Generative AI and large language models to revolutionize the way enterprises operate—and no industry is exempt. Harnessing these cutting-edge technologies will usher in transformative ways for organizations to enhance customer experiences, drive innovation throughout operations and gain new levels of competitive differentiation. 

Because its capabilities are so revolutionary, AI will create a widening gap between organizations that embrace its transformative power and those that do not. Our own research shows that AI leaders are already advantaged over late adopters. While the urgency to leverage AI varies by company and industry, IDC, in that same research study, posits that we have reached the point where every organization must have an AI approach in place to stay viable. Thus, exploring AI and GenAI today, before the yawning gap grows, is a crucial step for organizations that want to secure their future.


***

To help organizations move forward, Dell Technologies is powering the enterprise GenAI journey. With best-in-class IT infrastructure and solutions to run GenAI workloads and advisory and support services that roadmap GenAI initiatives, Dell is enabling organizations to boost their digital transformation and accelerate intelligent outcomes. 

The compute required for GenAI models has put a spotlight on performance, cost and energy efficiency as top concerns for enterprises today. Intel’s commitment to the democratization of AI and sustainability will enable broader access to the benefits of AI technology, including GenAI, via an open ecosystem. Intel’s AI hardware accelerators, including new built-in accelerators, provide performance and performance per watt gains to address the escalating performance, price and sustainability needs of GenAI.
