Leveraging LLMs for Non-Security Experts in Threat Hunting: Detecting Living off the Land Techniques
Main Authors: , , , , , ,
Format: Article
Language: English
Published: MDPI AG, 2025-03-01
Series: Machine Learning and Knowledge Extraction
Online Access: https://www.mdpi.com/2504-4990/7/2/31
Summary: This paper explores the potential use of Large Language Models (LLMs), such as ChatGPT, Google Gemini, and Microsoft Copilot, in threat hunting, specifically focusing on Living off the Land (LotL) techniques. LotL methods allow threat actors to blend into regular network activity, which makes detection by automated security systems challenging. The study seeks to determine whether LLMs can reliably generate effective queries for security tools, enabling organisations with limited budgets and expertise to conduct threat hunting. A testing environment was created to simulate LotL techniques, and LLM-generated queries were used to identify malicious activity. The results demonstrate that LLMs do not consistently produce accurate or reliable queries for detecting these techniques, particularly for users with varying skill levels. However, while LLMs may not be suitable as standalone tools for threat hunting, they can still serve as supportive resources within a broader security strategy. These findings suggest that, although LLMs offer potential, they should not be relied upon for accurate results in threat detection and require further refinement to be effectively integrated into cybersecurity workflows.
ISSN: 2504-4990