The impact of machine learning on security

It’s important to cut through the hype surrounding ML and AI

Machine learning (ML) is changing security in much the same way that it’s changing other fields. Although many tools are immature or overhyped, vendors are starting to offer legitimately effective solutions for a growing collection of security uses.

Today’s ML-based tools are rarely designed to fully replace traditional tools. They can, however, be a powerful addition to your toolkit when aimed at a specific, high-value use case involving data classification, pattern recognition or anomaly detection.

ML is most appropriately used to learn the characteristics of the environment being protected, to learn the characteristics of “known bad” behaviour, and to support decision making by security analysts. It’s best suited to problems where traditional methods are intractable, inefficient or simply impossible, and where relevant, high-quality data is sufficiently available.
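As a simplified illustration of “learning the characteristics of the environment”, the sketch below builds a statistical baseline of normal activity and flags large deviations. It is a toy stand-in for the far richer models real products use; the login counts and the three-sigma threshold are invented for illustration.

```python
from statistics import mean, stdev

def fit_baseline(values):
    """Learn the 'normal' profile of a metric (e.g. daily logins per account)."""
    return mean(values), stdev(values)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Hypothetical history: logins per day for one service account
history = [21, 19, 23, 20, 22, 18, 21, 20, 22, 19]
baseline = fit_baseline(history)

print(is_anomalous(20, baseline))   # a typical day → False
print(is_anomalous(95, baseline))   # a sudden spike worth investigating → True
```

Real products replace the crude standard-deviation test with models that account for seasonality, peer groups and many features at once, but the principle — learn “normal”, then flag departures from it — is the same.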

ML touches almost every area of security and identity, including anomalous user behaviour detection, signatureless malware detection, advanced vulnerability prioritisation, phishing and fraud detection, network anomaly detection and bot mitigation.

Be aware of the drawbacks

There are a number of drawbacks, however, to applying ML and other artificial intelligence (AI) approaches to security. Most ML tools are black boxes that are difficult to audit, and appropriate security data may not be sufficiently available.

ML can adapt to new instances of known threats, but not to entirely new threat vectors. Security analysts still need to understand truly novel threats before new ML models can be developed to detect them.

The output of an ML model is a statistical probability, not an absolute answer. This probabilistic nature can generate many false positives, make alerts harder to triage and increase the complexity of tuning. Teams with data science skills are more likely to extract maximum value from ML-based security products.
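To illustrate why probabilistic output complicates triage, the hypothetical sketch below shows how moving the alert cutoff trades false positives against missed detections. All scores and labels are invented; real detectors emit thousands of such scores per day.

```python
def triage(alerts, threshold):
    """Keep only alerts whose model score clears the chosen cutoff."""
    return [a for a in alerts if a["score"] >= threshold]

# Hypothetical detector output: score = model's estimated probability of malice
alerts = [
    {"id": 1, "score": 0.97, "malicious": True},
    {"id": 2, "score": 0.70, "malicious": False},   # benign but high-scoring
    {"id": 3, "score": 0.62, "malicious": True},
    {"id": 4, "score": 0.10, "malicious": False},
]

for threshold in (0.5, 0.9):
    flagged = triage(alerts, threshold)
    false_positives = sum(1 for a in flagged if not a["malicious"])
    missed = sum(1 for a in alerts if a["malicious"] and a not in flagged)
    print(f"cutoff {threshold}: {len(flagged)} flagged, "
          f"{false_positives} false positives, {missed} missed")
```

Lowering the cutoff catches more real threats but buries analysts in false positives; raising it does the reverse. Choosing the cutoff is exactly the kind of tuning judgment where data science skills pay off.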

The best approach is not to seek out ML for its own sake, but to focus on better overall security efficiency and effectiveness. As security solutions mature, ML will be incorporated where it makes sense as a way to achieve better results.

When selecting products, consider additional success criteria beyond detection capabilities, such as false positive rates, tuning effort and data requirements. Keep in mind that ML-based solutions are best positioned to augment, rather than replace, existing tooling and skill sets.

Although embracing new technology on all fronts is rarely a wise approach, there are steps security professionals can take right now to understand and harness ML to meet their business needs.

Cut through the hype

There’s a huge amount of marketing hype surrounding ML and AI, so it’s important to scrutinise vendor claims carefully. Ask vendors to substantiate particularly stunning claims about detecting previously unheard-of attacks, reducing false positive rates or otherwise solving all of your security problems.

Stay abreast of emerging threat vectors

ML can undoubtedly be applied to security, but some tasks, such as discovering and understanding new threats, can only be performed by humans. No AI yet exists that can find threats a human hasn’t first defined. Many vendors use misleading language to suggest otherwise, but that type of AI doesn’t exist.

Until it does, continuously assess the threat landscape. Ransomware wasn’t on most organisations’ radar 10 years ago, but it’s a top concern now. Many security tools have evolved to prevent, detect and respond to such threats, and new tools have been created with the same objectives. New threats require humans to assess existing capabilities for appropriateness, and ML-based tools are no different.

Selecting the right tools

Employ a use-based approach to selecting tools. A user and entity behaviour analytics (UEBA) tool can be matched to a need such as detecting malicious insiders, detecting patient information snooping or identifying account sharing. For anti-malware based on ML, focus on the types of malware missed by your regular tools.

Evaluate the weaknesses and gaps in your current security capabilities before shopping for new ML-based tools. Expect ML to augment, rather than replace, existing tools.

Start with desired outcomes

Although you may be tempted to ask vendors what insight they can derive from a given dataset, don’t bother unless you have specific expectations. Otherwise, comparing tools becomes difficult when they reveal different findings from the same dataset. The exercise can also lead to a “dazzle with science” syndrome, where vendors reveal curious but operationally useless insights.

Before testing, create a test plan with a timeline, desired uses and needs, as well as detailed requirements for data sources. Don’t settle for “getting the logs in.” Instead, identify specific types of data, needed configuration changes and details of the people controlling the systems.

Carefully test the tools

One of the most effective ways to cut through the hype is to carefully test the tools before purchasing and deploying them. This may sound painfully obvious, but the rise of ML methods and AI claims has given it newly critical importance.

You can compare, for example, which of the traditional security information and event management (SIEM) solutions has more rules by simply counting the rules. However, you can’t judge the effectiveness of an ML brain for your environment without actually running the tool in your environment and on your production data.

You may be tempted to test the new tool like a rule-based security product, but you must test it like the ML-based product that it is.
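As a hedged sketch of what testing an ML-based product might measure, the snippet below computes a detection rate and false positive rate from a detector’s verdicts against analyst-confirmed ground truth on a labelled sample of local data. All values are hypothetical; a real evaluation would use far larger samples drawn from your own production environment.

```python
def evaluate(predictions, labels):
    """Confusion-matrix rates for a detector run against labelled local data."""
    tp = sum(1 for p, l in zip(predictions, labels) if p and l)
    fp = sum(1 for p, l in zip(predictions, labels) if p and not l)
    fn = sum(1 for p, l in zip(predictions, labels) if not p and l)
    tn = sum(1 for p, l in zip(predictions, labels) if not p and not l)
    return {
        "detection_rate": tp / (tp + fn),        # share of real threats caught
        "false_positive_rate": fp / (fp + tn),   # share of benign events flagged
    }

# Hypothetical trial: detector verdicts vs analyst-confirmed ground truth
verdicts     = [True, True, False, True, False, False, True, False]
ground_truth = [True, False, False, True, True, False, True, False]

print(evaluate(verdicts, ground_truth))
# → {'detection_rate': 0.75, 'false_positive_rate': 0.25}
```

Unlike counting rules in a SIEM, these numbers are only meaningful when computed on your own data — the same model can score very differently in different environments.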

Anna Belak is a senior principal analyst at Gartner in the security and risk management team. Anna will be presenting on ‘AI as Target and Tool: An Attacker’s Perspective on ML’ at the Gartner Security and Risk Management Summit in Sydney (19-20 August).
