Today’s security operations centers (SOCs) have to manage data, tools and teams dispersed across the organization, which makes threat detection and teamwork difficult. Several factors drive this complexity: many people now work remotely alongside coworkers in far-away locations; legacy tools are costly to run and maintain; and cloud migration, hybrid environments and the multiple tools and vendors in use add further layers. Taken together, these factors have made the average analyst’s job more difficult than ever. Tracking down a single incident often requires hours or even days of collecting evidence. That’s where artificial intelligence (AI) in cybersecurity comes in.

Analysts can spend much of their time gathering data, sifting through gigabytes of events and logs to locate the relevant pieces. While they try to cope with the sheer volume of alerts, attackers are free to devise ever more inventive ways of conducting attacks and hiding their trails.

What AI in Cybersecurity Can Do

AI makes the SOC more effective by reducing the manual work of analysis, evidence gathering and threat intelligence correlation, driving faster, more consistent and more accurate responses.

Some AI models can determine what type of evidence to collect and from which data sources. They can also locate the relevant signals among the noise, spot patterns common to many incidents and correlate findings with the latest security data. AI in cybersecurity can then generate a timeline and attack chain for the incident. All of this paves the way for faster response and remediation.
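To make the timeline and attack-chain idea concrete, here is a minimal Python sketch of my own (not any specific vendor’s implementation): it groups alerts by host and orders them chronologically so each host’s activity reads as a candidate attack chain. The Alert fields, the sample technique IDs and the build_attack_chains helper are illustrative assumptions; a real system would enrich and score these events against threat intelligence.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Alert:
    timestamp: datetime
    host: str
    technique: str  # e.g. a MITRE ATT&CK technique ID (assumed labelling)
    detail: str


def build_attack_chains(alerts: list[Alert]) -> dict[str, list[Alert]]:
    """Group alerts by host and order them chronologically so each host's
    activity reads as a candidate attack chain."""
    chains: dict[str, list[Alert]] = defaultdict(list)
    for alert in alerts:
        chains[alert.host].append(alert)
    for host_alerts in chains.values():
        host_alerts.sort(key=lambda a: a.timestamp)
    return dict(chains)


if __name__ == "__main__":
    # Invented example alerts, not real telemetry.
    alerts = [
        Alert(datetime(2023, 5, 1, 9, 14), "web-01", "T1190", "Exploit public-facing app"),
        Alert(datetime(2023, 5, 1, 9, 2), "web-01", "T1595", "Active scanning"),
        Alert(datetime(2023, 5, 1, 10, 30), "web-01", "T1059", "Suspicious shell command"),
    ]
    for host, chain in build_attack_chains(alerts).items():
        print(f"Timeline for {host}:")
        for a in chain:
            print(f"  {a.timestamp:%H:%M}  {a.technique}  {a.detail}")
```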

AI security tools are very effective at identifying false positives. After all, most false positives follow common patterns. X-Force Red Hacking Chief Technology Officer Steve Ocepek reports that his team sees analysts spending up to 30% of their time studying false positives. If AI can take care of those alerts first, humans will have more time, and less alert fatigue, when they handle the most important tasks.
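As a hedged illustration of this triage idea (a minimal sketch using invented data, not any product’s actual implementation), the snippet below trains a small scikit-learn classifier on hypothetical analyst-labelled alerts and uses it to score a new alert. The rule names, feature fields and threshold notion are all assumptions made for the example.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical, analyst-labelled historical alerts (1 = true positive).
history = [
    ({"rule": "port_scan", "src_internal": True, "asset_critical": False}, 0),
    ({"rule": "port_scan", "src_internal": False, "asset_critical": True}, 1),
    ({"rule": "impossible_travel", "src_internal": False, "asset_critical": True}, 1),
    ({"rule": "impossible_travel", "src_internal": True, "asset_critical": False}, 0),
]
features, labels = zip(*history)

# Turn categorical/boolean alert attributes into a numeric feature matrix.
vectorizer = DictVectorizer(sparse=False)
X = vectorizer.fit_transform(features)

# A deliberately simple model; the point is the triage flow, not the algorithm.
model = LogisticRegression().fit(X, list(labels))

new_alert = {"rule": "port_scan", "src_internal": True, "asset_critical": False}
p_true_positive = model.predict_proba(vectorizer.transform([new_alert]))[0][1]
print(f"Estimated probability of a true positive: {p_true_positive:.2f}")
# Alerts below a tuned threshold could be auto-closed or deprioritized,
# keeping the human queue focused on higher-risk work.
```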

The Human Element of AI Security

While the demand for skilled SOC analysts is increasing, it is getting harder for employers to find and retain them. Should you instead aim to completely automate the SOC and not hire people at all?

The answer is no. AI in cybersecurity is here to augment analyst output, not replace it. Forrester analyst Allie Mellen recently shared a great take on this issue.

In “Stop Trying To Take Humans Out Of Security Operations,” Allie argues that detecting new types of attacks and handling more complex incidents require human smarts, critical and creative thinking and teamwork. Often, talking effectively with users, employees and stakeholders can surface new insights where data is lacking. When used along with automation, AI removes the most tedious elements of the job. This gives analysts time for thinking, researching and learning, and a chance to keep up with the attackers.

AI helps SOC teams build intelligent workflows, connect and correlate data from different systems, streamline their processes and generate insights they can act on. Effective AI relies on consistent, accurate and streamlined data. The workflows created with the help of AI in turn generate the better-quality data needed to retrain the models. SOC teams and AI in cybersecurity grow and improve together as they augment and support each other.
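One way to picture that feedback loop, as a rough sketch rather than a prescribed design, is a simple store where every analyst verdict on an AI-triaged alert becomes a labelled training record for the next retraining cycle. The file name, record format and helper functions below are assumptions for illustration.

```python
import json
from pathlib import Path

# Hypothetical store for analyst feedback; the filename is an assumption.
TRAINING_FILE = Path("triage_training_data.jsonl")


def record_verdict(alert_features: dict, analyst_verdict: bool) -> None:
    """Append the analyst's final verdict so the next retraining run can
    learn from it."""
    record = {"features": alert_features, "label": int(analyst_verdict)}
    with TRAINING_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")


def load_training_set() -> tuple[list[dict], list[int]]:
    """Load all accumulated verdicts for the next retraining cycle."""
    features, labels = [], []
    for line in TRAINING_FILE.read_text().splitlines():
        record = json.loads(line)
        features.append(record["features"])
        labels.append(record["label"])
    return features, labels


# Example: an analyst confirms an AI-triaged alert as a true positive.
record_verdict({"rule": "impossible_travel", "asset_critical": True}, True)
```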

Is it time to put AI to work in your SOC? Ask yourself these questions first.

Register for the webinar: SOAR
