
Wrapping Up 2023: Anticipating LLM & GenAI Trends in the Year Ahead

Elad Schulman
Tuesday, December 26
7 min read

2023 was marked by an overwhelming surge in the adoption of Generative AI (GenAI) across industries, and it's time to reflect on the developments that shaped the cybersecurity landscape and explore what lies ahead in 2024. The promises of GenAI are huge, but so are the challenges and considerations that come with integrating it into security workflows.

Generative AI & LLMs: The Biggest Winners in 2023

Large Language Models (LLMs) took center stage, with OpenAI's ChatGPT leading the pack. Adoption reached unprecedented levels: ChatGPT gained over a million users in its first week, making it the fastest-growing consumer app in history.

February 2023 saw the announcement of Bard, Google's conversational generative AI chatbot. Bard has proven a serious contender, drawing around 140 million monthly visitors. In contrast to ChatGPT, which has a wide range of applications, Bard is oriented toward enhancing information discovery and search-related functionality.

Anthropic’s Claude has taken huge strides since its initial release in early 2023 and is poised to disrupt the market by drawing users, especially enterprise clients, away from larger players. Its latest model, Claude 2.1, distinguishes itself with a 200,000-token context window.

In 2024, Amazon Web Services (AWS) is gearing up to officially introduce Amazon Q within QuickSight. What sets this tool apart from other GenAI offerings is its integration into the AWS console, giving AWS users streamlined access and personalized answers. This marks a significant step forward in GenAI usability, especially within the AWS ecosystem.


Meta also introduced its own AI language model in 2023: LLaMA. Within a week of Meta beginning to accept access requests, the model's weights leaked online. The incident sparked a debate about open versus closed AI research. Some expressed concern about the potential consequences, faulting Meta for what they saw as overly liberal distribution of the technology; others argued that open access is crucial for developing safeguards for AI systems.

Looking Ahead: LLM & GenAI Predictions for 2024

If 2023 was the year of “wait and see”, 2024 is most definitely set to be a year of adoption and expansion. We can expect to see a general shift from experimentation to implementation. Mass adoption across verticals and geographies has already begun and will accelerate. 

Particular shifts we’re anticipating for the coming year:

1. The Battle in the Model Market

We will likely see an accelerated path to production deployments of LLM and GenAI technology. The widely cited projection that 80% of enterprises will be using GenAI APIs or applications by 2026 may turn out to be an underestimate if adoption continues to accelerate.

Task-specific LLM tools such as Freed for medical scribing, Harvey for legal documentation, and Grammarly for writing assistance have already begun to proliferate, each built to serve the needs of a particular industry or niche. Expect to see many more of these through 2024 and beyond.

One of the most interesting trends to watch in 2024 will be differentiation among LLMs. A survey by Arize suggested that OpenAI's dominance of the market declined between April and September, with other models making big gains at its expense.

2. The Dark Side: A Rise in Security Flaws and Attacks

In a brief period, LLM technology has become integral to various industries, including information security itself, where it is increasingly leveraged for defensive purposes. Proponents highlight its transformative potential, while detractors emphasize the new cybersecurity vulnerabilities it introduces. As the year progresses, we anticipate that challenges around Shadow AI, data security flaws, and the need for greater visibility and control will grow more apparent.

Despite their power, LLMs should not be trusted unquestioningly; the very capabilities that make them useful also expose them to multiple security concerns. Compromising an LLM could grant an attacker access to its training data sources, organizational information, and sensitive user data, posing significant security risks. Ongoing research, such as Nvidia's discovery of vulnerabilities and the Rezilion report on workflow patterns in LLMs, highlights how rapidly, and how inadequately secured, the field of Generative AI is advancing.
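To make the "visibility and control" theme concrete, below is a minimal sketch of one common class of mitigation: scanning LLM output for obviously sensitive patterns before it reaches a user. The pattern names, regexes, and function here are illustrative assumptions for the sketch, not a description of any particular product; real deployments need far more robust detection (entity recognition, policy engines, human review).

```python
import re

# Illustrative only: a naive output filter that masks obviously
# sensitive patterns in an LLM response before it reaches the user.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_response(text: str) -> tuple[str, list[str]]:
    """Mask sensitive matches and report which categories fired."""
    findings = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(name)
            text = pattern.sub(f"[REDACTED:{name}]", text)
    return text, findings

# Example run on a made-up response string.
sample = "Reach me at alice@example.com; my key is sk-abcdef1234567890XYZ"
safe_text, hits = redact_response(sample)
print(safe_text)  # sensitive spans replaced with [REDACTED:...]
print(hits)       # ['email', 'api_key']
```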

3. Grab Your Hats: The Year of AI Regulation

Global AI regulation is experiencing a notable surge, with a keen emphasis on enhancing data privacy and security. Europe is playing a leadership role in this arena: the groundbreaking EU AI Act sets standards and mandates a focus on transparency.

In the United States, the discourse surrounding AI regulation continues through state-level initiatives such as California's temporary deepfakes legislation and New York's rules on Automated Employment Decision Tools. At the federal level, President Biden's Executive Order on AI, signed in late 2023, places significant emphasis on ensuring accountability within the AI domain, with requirements that will take effect through 2024.

2024 may well be the Year of AI Regulation. The impact of these new laws is likely to ripple through organizational frameworks, demanding adaptability, a proactive approach to compliance, and tighter collaboration between legal and security teams.

Excited for the Future: Conclusion & Recommendations

As we transition from the transformative year of 2023 to the uncharted territory of 2024, the roadmap for LLMs and Generative AI in cybersecurity becomes clearer. The promises are abundant, but so are the challenges. Strategic planning, cautious implementation, and a focus on dedicated tools will be the keys to unlocking the true potential of GenAI in safeguarding data and organizations.

Insights and recommendations from the Lasso team for the year to come:

  • While securing GenAI and LLMs has traditionally been treated as just another corner of cybersecurity, the evolving challenges demand a shift in perspective. In 2024, LLM security technology is poised to emerge as a distinct market within the broader cybersecurity landscape.
  • Navigate the complexity of recently introduced GenAI features, and their impact on your product's security, by implementing AI evaluation frameworks.
  • Gain a clear understanding of which LLM tools are in use within the organization, whether by employees or by applications; unsanctioned usage is often referred to as 'Shadow AI'. Monitoring that usage and its interactions will provide transparency into AI-related activity (see the sketch after this list).
  • Security is a team sport: approach policy, governance, and accountability comprehensively and holistically. Enterprises should aim for security and IT departments to collaborate with legal and governance teams on proactive solutions, ensuring a secure AI ecosystem.
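To make the Shadow AI recommendation concrete, here is a minimal sketch of one way to gain visibility: routing outbound LLM API calls through a thin auditing wrapper that records which hosts are being called and whether they are sanctioned. The host list, function names, and logging scheme are assumptions for illustration, not a prescribed implementation.

```python
import json
import logging
import time
from urllib.parse import urlparse
from urllib.request import Request, urlopen

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("llm-audit")

# Hypothetical allow-list of sanctioned LLM API hosts; calls to any
# other host can be surfaced for review as potential Shadow AI.
SANCTIONED_HOSTS = {"api.openai.com", "api.anthropic.com"}

def audited_llm_call(url: str, payload: dict, headers: dict) -> bytes:
    """Send an LLM API request while recording who called what, and when."""
    host = urlparse(url).netloc
    audit_log.info(json.dumps({
        "ts": time.time(),
        "host": host,
        "sanctioned": host in SANCTIONED_HOSTS,
        "payload_bytes": len(json.dumps(payload)),  # size only, never content
    }))
    req = Request(url, data=json.dumps(payload).encode(),
                  headers={**headers, "Content-Type": "application/json"})
    with urlopen(req) as resp:
        return resp.read()
```

In practice this kind of telemetry usually lives in a network proxy or gateway rather than in application code, so that unsanctioned tools are captured as well.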