
Table 24 Summary of deep and reinforcement learning for intrusion detection

From: A comprehensive survey on machine learning for networking: evolution, applications and research opportunities

Each entry lists the reference, ML technique, dataset, features, and evaluation (settings and results).
Cannady et al. [85]
  ML Technique: RL, CMAC-NN (online)
  Dataset: Prototype application
  Features: Patterns of Ping Flood and UDP Packet Storm attacks
  Settings: 3-layer NN; prototype developed with C and Matlab
  Results: Learning error: 2.199 to 1.94e-07%; New attack error: 2.199 to 8.53e-14%; Recollection error: 0.038 to 3.28e-05%; Error after refinement: 1.24%
Servin et al. [407]
  ML Technique: RL, Q-Learning (online)
  Dataset: Generated using NS-2
  Features: Congestion-, delay-, and flow-based
  Settings: Number of agents: 7; DDoS attacks only; Boltzmann's rules for E2
  Results: FP: 0%-10%; Accuracy: 70%-99%; Recall: 30%-99%
Li et al. [273]
  ML Technique: DL, DBN with auto-encoder (offline)
  Dataset: KDD Cup [257]
  Features: All 41 features
  Settings: 494,021 training records; 311,029 testing records; Intel Core Duo CPU 2.10 GHz, 2 GB RAM; platform: Matlab v7.11; 3-layer encoder: 41, 300, 150, 75, *
  Results: TPR: 92.20%-96.79%; FPR: 1.58%-15.79%; Accuracy: 88.95%-92.10%; Training time: 1.147-2.625 s
Alom et al. [14]
  ML Technique: DL, DBN (offline)
  Dataset: NSL-KDD [438]
  Features: 39 features
  Settings: 25,000 training and testing records
  Results: DR with 40% of data for training: 97.45%; Training time with 40% of data for training: 0.32 sec
Tang et al. [436]
  ML Technique: DL, DNN (offline)
  Dataset: NSL-KDD [438]
  Features: 6 basic features
  Settings: 125,973 training records; 22,544 testing records; 3-layer DNN: 6, 12, 6, 3, 2 (sketched below the table); batch size: 10; # epochs: 100; best learning rate: 0.001
  Results: Accuracy: 72.05%-75.75%; Precision: 79%-83%; Recall: 72%-76%; F-measure: 72%-75%
Kim et al. [245]
  ML Technique: DL, LSTM-RNN (offline)
  Dataset: KDD Cup [257]
  Features: All 41 features
  Settings: 1,930 training records; 10 test datasets of 5,000 records each; Intel Core i7 3.60 GHz, 8 GB RAM, Ubuntu 14.04; # nodes in input layer: 41; # nodes in output layer: 5; batch size: 50; # epochs: 500; best learning rate: 0.01
  Results: DR: 98.88%; FP: 10.04%; Accuracy: 96.93%
Javaid et al. [213]
  ML Technique: DL, Self-taught learning (offline)
  Dataset: NSL-KDD [438]
  Features: All 41 features
  Settings: 125,973 training records; 22,544 testing records; 10-fold cross-validation
  Results: 2-class TP: 88.39%; 2-class Precision: 85.44%; 2-class Recall: 95.95%; 2-class F-measure: 90.4%
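For orientation, the 6-12-6-3-2 feed-forward architecture listed above for Tang et al. [436] can be expressed in a few lines. This is a minimal sketch, not the authors' implementation: only the layer sizes and the 0.001 learning rate come from the table, while the choice of PyTorch, ReLU activations, the Adam optimizer, and cross-entropy loss are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Layer sizes 6-12-6-3-2 as reported in the table for Tang et al. [436]:
# 6 basic NSL-KDD features in, 2-class output (normal vs. attack).
model = nn.Sequential(
    nn.Linear(6, 12), nn.ReLU(),
    nn.Linear(12, 6), nn.ReLU(),
    nn.Linear(6, 3), nn.ReLU(),
    nn.Linear(3, 2),
)

loss_fn = nn.CrossEntropyLoss()                              # assumed loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)   # learning rate from the table

def train_step(x, y):
    """One mini-batch update; x: (batch, 6) float tensor, y: (batch,) class indices."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Calling train_step on mini-batches of 10 records for 100 epochs would mirror the batch size and epoch count reported in the table.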