
May 21, 2024 By Manny Lopez

Was RSA Conference AI-washed or is AI in cybersecurity real?

AI-washing at RSAC

RSA Conference, held each spring in San Francisco, defines itself as an information security event that connects industry leaders with highly relevant information. 50,000 people attended in 2024, and, of course, the Sumo Logic team was there to offer insights and to learn from others at the conference.

During a LinkedIn Live from the show, Sumo Logic VP of Product Marketing Michael Cucchi talked about the show floor being noisy and repetitive. While last year everyone was talking about automation, this year was the year of AI. Every vendor’s booth and numerous talks included the phrase “AI-powered” or “AI-driven.” Yes, even our booth.

But how much of that AI is thoughtful, productive and additive? And even more frustrating for the security practitioners hoping to get hands-on experience with real products that can help them with their day-to-day work, how much of that was AI-powered vaporware?

Three main approaches to AI at RSAC

Even as ChatGPT has become pervasive in our daily lives, AI can take on different forms in the world of cybersecurity. From automation to threat detection, investigation and response (TDIR) and, yes, of course, copilots, numerous vendors showcased their interpretations of adding AI to their products.

  • AI for automation: Security vendors are using AI to automate tasks like data normalization and analysis, previously done manually by SOC analysts. This not only accelerates response times but also improves the precision of threat detection and incident response. Automating data ingestion and normalization enables security teams to concentrate on strategic threat mitigation instead of data management.

  • AI for advanced threat detection and automated response capabilities: Vendors are leveraging AI to enhance alert management in security operations, prioritizing genuine threats from the multitude of alerts. By employing large language models, these AI solutions efficiently distill alerts to the most critical ones, presenting them via an intuitive interface. This approach enables Security Operations Centers (SOCs) to focus on significant threats, moving away from traditional, labor-intensive SIEM processes.

  • AI for copilots/assistants: Other vendors are using AI to help democratize the use of SIEM by giving security analysts an AI assistant with a chat interface for asking questions in natural language. Analysts use natural language processing (NLP) to assemble timelines, refine threat intelligence research, determine risk, find artifacts on networks, and then begin guided threat hunting. These tasks are often exceptionally time-consuming and prone to user error. A minimal sketch of this natural-language-to-query flow follows this list.
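
To make the copilot pattern concrete, here is a minimal, vendor-neutral sketch in Python of the "ask in natural language, get a structured query" flow. The toy alert store, the keyword-based translate_question() stand-in for an LLM call, and the filter shape are all hypothetical and exist purely to show the pattern, not any specific vendor's product.

    # Illustrative sketch only: a toy natural-language-to-SIEM-query flow.
    from dataclasses import dataclass

    @dataclass
    class Alert:
        source_ip: str
        rule: str
        severity: int  # 1 (low) to 10 (critical)

    # Tiny in-memory stand-in for a SIEM index.
    ALERTS = [
        Alert("10.0.0.5", "failed_login_burst", 7),
        Alert("10.0.0.9", "dns_tunneling_suspected", 9),
        Alert("10.0.0.5", "port_scan", 4),
    ]

    def translate_question(question: str) -> dict:
        """Stand-in for the LLM step: map an analyst's question to a structured filter.
        A real assistant would call a language model here instead of keyword matching."""
        filters = {}
        text = question.lower()
        if "high severity" in text or "critical" in text:
            filters["min_severity"] = 7
        for token in question.split():
            if token.count(".") == 3:  # crude IP detection, good enough for the sketch
                filters["source_ip"] = token
        return filters

    def run_query(filters: dict) -> list:
        """Apply the structured filter to the alert store."""
        results = ALERTS
        if "min_severity" in filters:
            results = [a for a in results if a.severity >= filters["min_severity"]]
        if "source_ip" in filters:
            results = [a for a in results if a.source_ip == filters["source_ip"]]
        return results

    if __name__ == "__main__":
        question = "Show me high severity alerts"
        for alert in run_query(translate_question(question)):
            print(alert)

The value of the pattern is that the model only produces a structured, reviewable filter; the analyst can always see exactly what was run against the data.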

We found it particularly intriguing that some vendors, notably Microsoft, SentinelOne and CrowdStrike, are charging extra for their AI rather than including it in their security offerings. If AI is the future and a must-have for all security practitioners, it seems strange to gate it behind further paywalls.

Is AI being used to solve the right cybersecurity problems?

Technology can often fall into a weird chasm of solving problems that don’t exist – anyone remember when Segways were going to be big? And don’t even ask how often my virtual assistant gets used as a glorified kitchen timer and celebrity death checker rather than whatever amazing sales device Amazon intended her to be.

When it comes to AI for cybersecurity, that chasm is abundantly clear when AI is used to solve architectural problems rather than analytics challenges. We saw vendors at RSAC who purported to use AI to find which data was security-related and relevant for their platform, leaving the rest behind. That addresses a problem of scale that has already been solved.

Cloud-native solutions don’t need to worry about scalability or limits on data ingestion. With a cloud-native SaaS security solution, you can keep all the relevant data at your fingertips for when you need it most.

This is a sentiment repeated by Allie Mellen, Principal Analyst at Forrester. In a recent blog post, she explains:

The development of foundation models for other tasks [...] can benefit security operations as well in other ways. However, it still hasn’t solved many of the fundamental problems of security. Until we get those right, we should be wary of how and what we use generative AI for.

AI-washing

With AI so pervasive in the zeitgeist, it’s not surprising that almost every vendor had some form of “AI-powered” or “AI-driven” messaging on their booth walls. Sadly, just because you write it in eight-inch letters doesn’t make it true.

When you turned your attention to the vendors shouting “AI-driven” across the crowded show floor and dug deeper, much of it unfortunately amounted to false promises. Many vendors relied on clickable demos and wireframes to demonstrate the art of the possible, and even then couldn’t link those capabilities to concrete security workflow benefits. Still more simply renamed legacy features and capabilities and stamped them “AI-powered” or “ML-powered.”

In fact, RSAC attendees were a bit let down when viewing demos from household, big-name tech companies known for their AI work elsewhere: the copilot experiences on the show floor were click-through wireframes. While the messaging was confident about the value of AI in these products, security-focused copilot applications that go beyond a basic chatbot appear to be further from general availability than it seems. So maybe this is instead the year of AI-driven promises, but not reality.

Too many high-profile examples pop up in the news of AI not being what it is purported to be. Even so, there were some stellar showings from practitioners doing truly unique things with AI in cybersecurity at RSA Conference.

But AI is real, and that was clear at RSAC

There were numerous AI-focused speaker sessions at RSAC. Some talks were packed, others less popular, but all shared tangible ways AI is being used right now.

Eileen Isagon Skyers framed the conversation nicely in her talk “In the Age of AI, Everything is the Art of Possible.” She explained that AI optimists see how AI can be helpful, while AI pessimists view it as a threat. Through this lens, it becomes clear that those most nervous about AI see the potential for harm in the hands of bad actors, while the other side imagines the benefits possible when the “good guys” use AI.

This mentality was mirrored in a panel on AI titled “Artificial Intelligence: The Ultimate Double-Edged Sword,” which featured impressive speakers from Stanford and the US Justice Department. Deputy Attorney General Lisa Monaco described Justice AI, a project that sifts through and triages the more than one million tips the FBI receives, and also warned how dangerous AI could be in the upcoming election cycle, supercharging disinformation campaigns with social engineering. That same concern was top of mind in Harvard researcher and lecturer Bruce Schneier’s talk, “AI and Democracy.”

AI is real and in use at Sumo Logic as well. AI-driven alerting reduces alert fatigue, using patent-pending anomaly detection capabilities to eliminate 60-90% of alerts. These alerts can also trigger one or more automation playbooks to drive auto-diagnosis or remediation and accelerate response times. Our Global Intelligence Service uses AI to provide real-time, actionable insights and trends across our 2,300 customers, while Cloud SIEM Insights Trainer offers suggestions for rule tuning within Cloud SIEM to reduce false positives.
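
To make the idea of anomaly-based alert reduction a bit more concrete, here is a minimal, illustrative sketch: surface an alert only when activity deviates sharply from its historical baseline. This is a simple z-score stand-in for the concept, not Sumo Logic's patent-pending approach, and the function names and numbers are hypothetical.

    # Illustrative sketch only: suppress routine noise, surface genuine deviations.
    from statistics import mean, stdev

    def is_anomalous(history, current, threshold=3.0):
        """Return True only if the current count deviates strongly from the baseline."""
        if len(history) < 2:
            return True  # not enough history to judge; let the alert through
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return current != mu
        return abs(current - mu) / sigma > threshold

    # Example: hourly counts for a chatty detection rule vs. a sudden spike.
    baseline = [12, 9, 11, 10, 13, 12, 8, 11]
    print(is_anomalous(baseline, 10))  # False -> suppressed as routine noise
    print(is_anomalous(baseline, 95))  # True  -> surfaced to the analyst

In practice the baseline would be learned continuously per signal, but the principle of suppressing routine noise and surfacing only meaningful deviations is the same.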

AI is more than generative AI, but we know that natural language processing and chatbot capabilities are the new shiny objects everyone wants. To us, the key is not the shiny feature; it’s how we apply it to change the way we do threat detection and response. Beyond a simple chatbot, we’ve leveraged ML and generative AI to surface relevant anomalies and suggested insights to users, and to translate highly complex analytics into plain language and easy-to-understand visualizations.

So, of course, we also showed off our copilot in our demo environment, not just wireframes. You can watch a recorded version of the demo here, and Sumo Logic customers can even try it for themselves in our new tech preview, available now. Have a peek:

The cybersecurity industry is always evolving, and RSA Conference serves as a helpful milestone to evaluate trends and progress. AI is certainly a trend and definitely part of all our futures. We just hope that AI-washing eventually rinses out and we can be left with a truly AI-powered cybersecurity vision for the future.

Learn more about the impact of AI in cybersecurity.

Manny Lopez

Director, Competitive Intelligence

Manny Lopez has more than 20 years of experience in market research, focused primarily on competitive analysis. Most recently, he spent more than five years at Druva, and he was previously a research analyst with IDC for over eight years, based in their Hong Kong and Beijing offices.
