
Table 11 Summary of AQM schemes with online training in the intermediate nodes of a wired network

From: A comprehensive survey on machine learning for networking: evolution, applications and research opportunities

Columns: Ref. · ML technique · Multiple bottleneck^a · Synthetic data from ns-2 simulation: settings / features / output (action set for RL) · Model · Evaluation: results. TSF denotes a time-series forecast.
PAQM [160]
· ML technique: Supervised: OLS
· Multiple bottleneck: ✓
· Settings: topology: 6-linear, arbitrary dumbbell; time = 50 s
· Features: traffic volume (bytes)
· Output: TSF of traffic volume
· Model: NLMS algorithm based on LMMSE (see the linear-predictor sketch after the table notes)
· Results: accuracy 90–92.3%

APACE [212]
· ML technique: Supervised: OLS
· Multiple bottleneck: ✓
· Settings: topology: dumbbell (1-sink), 6-linear; time = 40 s
· Features: queue length
· Output: TSF of queue length
· Model: NLMS algorithm based on LMMSE
· Results: accuracy 92%
α_SNFAQM [498]
· ML technique: Supervised: MLP-NN
· Multiple bottleneck: –
· Settings: topology: dumbbell (1-sink); time = 300 s
· Features: traffic volume; predicted traffic volume
· Output: TSF of traffic volume
· Model: 2 input neurons, 2 hidden layers with 3 neurons each, 1 output neuron (a comparable MLP sketch follows the table notes)
· Results: accuracy 90–93%
NN-RED [179]
· ML technique: Supervised: SLP-NN
· Multiple bottleneck: –
· Settings: topology: dumbbell; time = 900 s
· Features: queue length
· Output: TSF of queue length
· Model: 1+N input neurons (N past values), no hidden layer, 1 output neuron; delta-rule learning (covered by the linear-predictor sketch after the table notes)
· Results: N/A
DEEP BLUE [298]
· ML technique: Reinforcement: Q-learning with ε-greedy action selection
· Multiple bottleneck: –
· Settings: topology: dumbbell; time = 50 s (OPNET simulator used instead of ns-2)
· Features: states: queue length, packet drop probability; reward: throughput, queuing delay
· Output: decision making: increment of the packet drop probability (finite set of 6 actions)
· Model: N/A states, 6 actions, ε-greedy ASS^b (a minimal Q-learning sketch follows the table notes)
· Results: optimal packet drop probability; outperforms BLUE [144]
Neuron PID [428]
· ML technique: Reinforcement: PIDNN
· Multiple bottleneck: –
· Settings: topology: dumbbell; time = 100 s
· Features: queue length error
· Output: decision making: increment of the packet drop probability (continuous)
· Model: 3 input neurons, no hidden layer, 1 output neuron; Hebbian learning; 1 PID component (see the PIDNN sketch after the table notes)
· Results: QLAcc error^c 7.15; QLJit 20.18

AN-AQM [427]
· ML technique: Reinforcement: PIDNN
· Multiple bottleneck: ✓
· Settings: topology: dumbbell, 6-linear; time = 100 s
· Features: queue length error; sending rate error
· Output: decision making: increment of the packet drop probability (continuous)
· Model: 6 input neurons, no hidden layer, 1 output neuron; Hebbian learning; 2 PID components
· Results: QLAcc error^c 6.44; QLJit 22.61

FAPIDNN [485]
· ML technique: Reinforcement: PIDNN
· Multiple bottleneck: –
· Settings: topology: dumbbell; time = 60 s
· Features: queue length error
· Output: decision making: increment of the packet drop probability (continuous)
· Model: 3 input neurons, no hidden layer, 1 output neuron; 1 PID component; 1 fuzzy component
· Results: QLAcc error^c 3.73; QLJit 31.8

NRL [499]
· ML technique: Reinforcement: SLP-NN
· Multiple bottleneck: –
· Settings: topology: dumbbell; time = 100 s
· Features: queue length error; sending rate error
· Output: decision making: increment of the packet drop probability (continuous)
· Model: 2 input neurons, no hidden layer, 1 output neuron; RL-based learning
· Results: QLAcc error^c 38.73; QLJit 128.84
a. Indicates whether the approach was evaluated with multiple bottleneck links (✓) or only a single bottleneck link (–)
b. ASS: Action Selection Strategy
c. Values computed using RMSE on the results presented in [269] for different network conditions
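
For concreteness, the sketch below shows the kind of online one-step-ahead linear predictor summarized for PAQM [160] and APACE [212] (NLMS weight updates toward the LMMSE solution) and, with the normalization removed, the delta-rule single-layer perceptron of NN-RED [179]. The class name, step size, window length, and the synthetic series are illustrative assumptions, not details from the papers.

```python
import numpy as np

class LinearTrafficPredictor:
    """One-step-ahead predictor over the last n_taps samples.

    update="nlms"  : normalized LMS step, converging toward the LMMSE
                     predictor (as summarized for PAQM/APACE)
    update="delta" : plain delta rule on a bias + N inputs (as in the
                     SLP of NN-RED)
    """

    def __init__(self, n_taps, mu=0.5, update="nlms", eps=1e-8):
        self.w = np.zeros(n_taps + 1)           # +1 for a bias input
        self.mu, self.update, self.eps = mu, update, eps

    def _x(self, window):
        return np.concatenate(([1.0], window))  # bias + N past values

    def predict(self, window):
        return float(self.w @ self._x(window))

    def train_step(self, window, target):
        x = self._x(window)
        e = target - self.w @ x                 # prediction error
        if self.update == "nlms":
            self.w += self.mu * e * x / (self.eps + x @ x)
        else:
            self.w += self.mu * e * x           # delta rule (plain LMS)
        return e

# usage: slide over a measured traffic-volume (or queue-length) series
series = 100 + 40 * np.sin(np.linspace(0, 20, 500))  # synthetic stand-in
N = 8
pred = LinearTrafficPredictor(N)
for t in range(N, len(series) - 1):
    pred.train_step(series[t - N:t], series[t])
print("next-sample forecast:", pred.predict(series[-N:]))
```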
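
A comparable sketch for the α_SNFAQM [498] row: a tiny MLP matching the tabulated 2/3/3/1 layout, trained online by stochastic gradient descent on squared error. The tanh activations, learning rate, input scaling, and synthetic series are assumptions; the paper's exact activations and training schedule may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [2, 3, 3, 1]     # 2 inputs, two hidden layers of 3, 1 output
W = [rng.normal(0.0, 0.5, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    """Return activations and pre-activations (tanh hidden, linear out)."""
    acts, zs = [x], []
    for i, (Wi, bi) in enumerate(zip(W, b)):
        z = Wi @ acts[-1] + bi
        zs.append(z)
        acts.append(np.tanh(z) if i < len(W) - 1 else z)
    return acts, zs

def train_step(x, y, lr=0.05):
    """One online backprop step on the squared error."""
    acts, zs = forward(x)
    delta = acts[-1] - y                          # output-layer error
    for i in reversed(range(len(W))):
        grad_W, grad_b = np.outer(delta, acts[i]), delta
        if i > 0:                                 # propagate before updating W[i]
            delta = (W[i].T @ delta) * (1.0 - np.tanh(zs[i - 1]) ** 2)
        W[i] -= lr * grad_W
        b[i] -= lr * grad_b

# online loop: inputs are the measured volume and the previous prediction
vol = 50 + 20 * np.sin(np.linspace(0, 30, 600))   # synthetic traffic volume
prev_pred = vol[0]
for t in range(len(vol) - 1):
    x = np.array([vol[t], prev_pred]) / 100.0     # crude scaling
    train_step(x, np.array([vol[t + 1]]) / 100.0)
    prev_pred = float(forward(x)[0][-1][0]) * 100.0
```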
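
For the DEEP BLUE [298] row, a minimal tabular Q-learning agent with ε-greedy action selection over a discretized (queue length, drop probability) state and six finite drop-probability increments. The bin counts, action increments, reward weighting, and all constants are illustrative assumptions, not values from the paper.

```python
import random
from collections import defaultdict

ACTIONS = [-0.02, -0.01, -0.005, 0.005, 0.01, 0.02]  # Δ packet drop probability
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1                    # learning, discount, exploration

Q = defaultdict(lambda: [0.0] * len(ACTIONS))        # tabular action values

def encode(queue_len, drop_p, q_bins=10, p_bins=10, q_max=200):
    """Discretize (queue length, drop probability) into a state tuple."""
    return (min(int(queue_len / q_max * q_bins), q_bins - 1),
            min(int(drop_p * p_bins), p_bins - 1))

def select_action(s):
    """Epsilon-greedy action selection strategy (ASS)."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[s][a])

def reward(throughput, queuing_delay, w_delay=0.5):
    """Reward rises with throughput and falls with queuing delay."""
    return throughput - w_delay * queuing_delay

def update(s, a, r, s_next):
    """Standard one-step Q-learning backup."""
    Q[s][a] += ALPHA * (r + GAMMA * max(Q[s_next]) - Q[s][a])

# per-sampling-interval loop (measurements would come from the router):
drop_p = 0.05
s = encode(120, drop_p)
a = select_action(s)
drop_p = min(max(drop_p + ACTIONS[a], 0.0), 1.0)
s_next = encode(90, drop_p)                          # next measured state
update(s, a, reward(throughput=0.9, queuing_delay=0.3), s_next)
```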
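
Finally, a sketch in the spirit of the PIDNN rows (Neuron PID [428], AN-AQM [427]): a single adaptive neuron whose three inputs are the proportional, integral, and derivative components of the queue-length error and whose output increments the packet drop probability, with a supervised-Hebbian weight update. The initial weights, learning rates, gain, and the update's exact form are assumptions; AN-AQM additionally feeds sending-rate errors through three more inputs, and FAPIDNN adds a fuzzy component not shown here.

```python
import numpy as np

class NeuronPID:
    """Single neuron: 3 PID inputs, no hidden layer, 1 output."""

    def __init__(self, eta=(4e-4, 2e-4, 2e-4), gain=1e-4):
        self.w = np.array([0.3, 0.3, 0.3])   # P, I, D weights (assumed init)
        self.eta = np.array(eta)             # per-weight learning rates
        self.gain = gain                     # neuron output gain
        self.integ, self.prev_e = 0.0, 0.0

    def step(self, q_measured, q_target, drop_p):
        e = q_measured - q_target            # queue-length error
        self.integ += e
        x = np.array([e, self.integ, e - self.prev_e])  # P, I, D terms
        self.prev_e = e

        w_norm = self.w / np.abs(self.w).sum()          # normalized weights
        u = self.gain * float(w_norm @ x)    # increment of drop probability

        # supervised-Hebbian rule: correlate error, output, and input
        self.w += self.eta * e * u * x
        return float(np.clip(drop_p + u, 0.0, 1.0))

# usage inside the AQM loop (queue samples and target are placeholders):
ctl, p = NeuronPID(), 0.0
for q in [180, 160, 150, 140, 130]:          # sampled queue lengths
    p = ctl.step(q, q_target=100, drop_p=p)
    print(f"queue={q:3d}  drop_p={p:.4f}")
```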