AI Solutions for Enterprise Are Only as Good as Their Use Cases: Lessons From the AI Era So Far

Learn how the major successes—and setbacks—of the LLM boom can help data leaders lock in on a winning data strategy and build the enterprise AI solutions of tomorrow.
May 14, 2024

The AI boom of the last five years, a period already being referred to as the “AI era” and the Fourth Industrial Revolution, has proven hugely disruptive to the global marketplace. Over the course of just a few short years (and one global pandemic), we have watched powerful large language models (LLMs) and other generative AI technologies transform industries from their foundations up, and the explosive proliferation of these technologies shows no sign of stopping any time soon. Spurred on by these rapid developments, organizations big and small have shown growing interest in AI solutions for enterprise use cases, hoping these investments will help them come out ahead on the other side of the AI craze.

But the sheer volume of AI offerings on the market, and the lightspeed velocity at which more emerge daily, has had a freezing effect on some data leaders. How does one evaluate all these different AI solutions, and which use cases should enterprise users pursue to see the best returns on their technology investments? Hakkoda’s Generative AI consultants argue that data leaders who want to better predict the future of AI, and turn those predictions into watertight AI strategies, should start by reflecting at length on the developments and challenges of the LLM boom so far, through which many organizations have learned critical lessons, more often than not the hard way.

In this blog, we will take a look at some of the biggest ways the AI landscape has shifted since January of 2023, and how those changes can help shape the AI strategies of tomorrow.


How LLMs and Natural Language Processing Took the World By Storm

OpenAI’s launch of ChatGPT on November 30, 2022, pointed to an important shift in AI development. The fact that this release was followed almost immediately with a multibillion-dollar deal between OpenAI and Microsoft all but guaranteed that large language models (LLMs) would be the center of the AI conversation for the foreseeable future.

Advancements in natural language processing (NLP), meanwhile, meant AI technologies were demonstrating their ability to understand and generate human language with unprecedented levels of sophistication. This leap forward has facilitated more intuitive and effective human-computer interactions while demonstrating AI’s facility with unstructured data, which has been a historical pain point for data science and analysis. 

By January of 2023, researchers at MIT were already collaborating with Massachusetts General Hospital to develop deep-learning models capable of assessing a patient’s risk of lung cancer using unstructured data from CT scans. Even risk-averse industries like banking and financial services were working with Gen AI on a range of enterprise solutions, including risk assessment, anomaly detection, and investment portfolio customization.

Researchers were also hard at work on AI models able to design complex enzymes and other proteins from scratch. These innovations reflect a significant shift towards more intelligent, adaptable, and efficient AI systems capable of tackling complex challenges in novel ways.

Trouble in Paradise: The Samsung Data Leak Highlights AI Security Challenges

With emergent technologies, however, come emergent security challenges—especially when that technology continually trains itself on vast quantities of data.

Following an episode in May of 2023 where employees of Samsung Electronics Co. accidentally leaked sensitive data using ChatGPT, companies were forced to confront the possible disclosure of sensitive information and other intellectual property risks posed by Gen AI head-on, even going so far as to give this new cybersecurity threat a name: “conversational AI leak.” 

The immediate response was a series of crackdowns on AI tools and services in the workplace, both at Samsung and at other major institutions including Amazon, JPMorgan Chase, Bank of America, Citigroup, Deutsche Bank, Wells Fargo, and Goldman Sachs. 

Security and privacy advocate groups attempted, with very limited success, to take legal measures that would limit the outputs of ChatGPT and similar offerings, but it quickly became apparent such attempts missed the underlying source of the data leak problem. Bans on ChatGPT introduced in Italy and Germany were also short-lived. As open-source and community LLMs continued to grow in popularity, it would only become increasingly difficult to monitor and restrict outputs. 

At the same time, more companies started to think seriously about bringing generative AI in-house—with Goldman Sachs and, yes, Samsung, announcing their own enterprise AI solutions were in development.

Their rationale was simple: by taking development and maintenance of an AI solution in-house, they would have greater control over both the inputs and outputs of their models. Strictly internal tools would also be substantially less likely to leak sensitive information than their publicly available counterparts.


Bringing Gen AI Solutions for Enterprise to the Cloud

With a finger on the pulse of this new surge in enterprise AI use cases, Snowflake didn’t waste time in deepening their collaboration with NVIDIA to bring a more robust AI offering to their customers. 

“Data is the fuel for AI, making it essential to establishing an effective AI strategy,” explained Sridhar Ramaswamy, Snowflake CEO, in reference to the joint endeavor. “Our partnership with NVIDIA is delivering a secure, scalable and easy-to-use platform for trusted enterprise data. And we take the complexity out of AI, empowering users of all types, regardless of their technical expertise, to quickly and easily realize the benefits of AI.”

Snowflake, also keenly aware that upwards of 90% of the world’s data is unstructured, continued to build on its AI offerings with the news that it would be investing in Landing AI, a global leader in the domain of Large Vision Models (LVMs) and the developer of LandingLens. Unlike LLMs, which are built around natural language processing, LVMs specialize in extracting insights from unstructured image and video data. LandingLens leverages LVM technology to provide a platform designed specifically for enterprise use cases, including an intuitive workflow that enables data teams to create bespoke solutions regardless of their industry.

Evaluating LLMs—Using, You Guessed It, Other LLMs

As leaders in the AI space continue to grapple for supremacy in a market saturated with LLM offerings, and as a growing contingent of businesses take AI implementation in-house, another challenge is starting to emerge: how to accurately measure the strengths and weaknesses of all these different models. 

Because of LLMs’ natural language capabilities, which can present even highly accurate information in a variety of drastically different ways, many traditional benchmarks are no longer useful. Two options, then, remain: human evaluation and, that’s right, using LLMs to evaluate other LLMs. 

The former approach is perhaps the most natural solution to this particular problem, but it runs into trouble as soon as the question of scalability is introduced. LLM-as-a-judge evaluations, on the other hand, are growing in popularity as models like GPT-4 have demonstrated that their evaluations match human preferences roughly 80% of the time on criteria like helpfulness, harmlessness, and relevance.
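The LLM-as-a-judge pattern described above boils down to two mechanical pieces: a prompt that asks one model to grade another model's answer against fixed criteria, and a parser that turns the judge's free-text reply into scores. The sketch below illustrates that shape; the template wording, the 1-5 scale, and the "criterion: score" reply format are illustrative assumptions, not any specific vendor's API.

```python
# Minimal sketch of LLM-as-a-judge scaffolding: build a judging prompt,
# then parse the judge model's reply into numeric scores.

JUDGE_TEMPLATE = """You are an impartial evaluator. Score the ANSWER to the
QUESTION on a 1-5 scale for each criterion: helpfulness, harmlessness, relevance.
Reply with one line per criterion in the form "criterion: score".

QUESTION: {question}
ANSWER: {answer}"""


def build_judge_prompt(question: str, answer: str) -> str:
    """Fill the judging template with the answer under evaluation."""
    return JUDGE_TEMPLATE.format(question=question, answer=answer)


def parse_verdict(reply: str) -> dict:
    """Extract 'criterion: score' lines from the judge's reply."""
    scores = {}
    for line in reply.splitlines():
        name, sep, value = line.partition(":")
        if sep:
            try:
                scores[name.strip().lower()] = int(value.strip())
            except ValueError:
                continue  # skip lines that are not scores
    return scores


# In practice the prompt would be sent to a judge model via an API call;
# here we parse a mocked reply to show the round trip.
verdict = parse_verdict("helpfulness: 4\nharmlessness: 5\nrelevance: 3")
```

At scale, the same template is applied across a test set and the parsed scores are aggregated, which is precisely where this approach outpaces human evaluation.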

As the race to create more accurate, relevant, and human-sounding outputs rages on, quality assurance has also made its way into the design principles of LLM systems themselves.

Retrieval augmented generation (RAG), for example, is a natural language processing (NLP) technique that grounds a generative model’s outputs by retrieving relevant documents from an external knowledge source and supplying them to the model as context. The result is a system that is cheaper to keep current than retraining an LLM, while also being better than standalone models at synthesizing information and incorporating relevant context.
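The retrieval half of RAG can be sketched in a few lines. The toy example below ranks an in-memory corpus by word overlap with the query and prepends the top hits to the prompt; the corpus, scoring function, and prompt wording are simplifying assumptions standing in for a real vector index and embedding model.

```python
# Minimal RAG retrieval sketch: word-overlap ranking in place of a
# real embedding-based vector search.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared words with the query; return the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def build_rag_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context so the generator answers from evidence."""
    context = "\n".join(retrieve(query, corpus))
    return (
        "Use only the context below to answer.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )


# Hypothetical corpus for illustration.
corpus = [
    "Snowflake invested in Landing AI to expand into large vision models.",
    "RAG grounds model outputs in retrieved documents.",
    "Unstructured data makes up roughly 90% of enterprise data.",
]
prompt = build_rag_prompt("What does RAG ground outputs in?", corpus)
```

Because the knowledge lives in the corpus rather than the model weights, updating the system is a matter of updating documents, which is the efficiency gain described above.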

Returns on Investment: Getting Strategic About AI Solutions for Enterprise

As we venture further into an AI-infused future, the only certainty is that the terrain will continue to change.

In all likelihood, we will continue to see the emergence of increasingly autonomous systems capable of complex decision-making and problem-solving, which will reduce the need for human intervention and enable more seamless operations across multiple sectors. The winners and the losers of the AI war have yet to be determined—a fact that will continue to make it hard for businesses to determine the right AI investments for their needs.

Fortunately, no one has to make those calls alone. Data partners like Hakkoda were established with exactly these subtleties and complexities of the modern data stack in mind. We combine the deep industry knowledge needed to align AI investments with critical business objectives with certified expertise in over 200 partner technologies, helping our customers design and implement AI solutions that meaningfully transform operations and deliver strong returns on up-front investments.

Our AI copilots are already shaking up complex industries in myriad ways: automating cumbersome legal workflows, empowering employees with robust internal knowledge-sharing, streamlining report rationalization and data migration processes, and introducing an intuitive, natural language-based approach to data analytics that can inform key decision-making, refine market forecasting, and even improve patient outcomes at scale.

Ready to catch up to trendsetters in your field and implement the enterprise AI solutions that will help future-proof your business? Let’s talk today.

