Ref. | ML Technique | Network | Dataset | Features | Classification | Evaluation: Settings | Evaluation: Results
---|---|---|---|---|---|---|---
Liu et al. [282] | Unsupervised: · EM for HMM | Hybrid wired and wireless | Synthetic data: · ns-2 simulation · 4-node linear topology Data distribution: · Training = 10k | · Loss pair RTT | · Congestion loss · Wireless loss | · 4-state HMM · Gaussian variables · Viterbi inference | HMM accuracy^a: · 44−98%
Barman and Matta [38] | Unsupervised: · EM for HMM | Hybrid wired and wireless | Synthetic data: · ns-2 simulation · Topology: - 4-node linear - Dumbbell | · Loss pair delay · Loss probabilities: - Congestion - Wireless (nw, i.e., with network support) | · Congestion loss · Wireless loss | · 2-state HMM · Gaussian variables · Bayesian inference · Discretized values: - 10 symbols | HMM accuracy^a: · 92−98%
[130] | Supervised: · Boosting DT · DT · RF · Bagging DT · Extra-trees · MLP-NN · k-NN | Hybrid wired and wireless | Synthetic data: · Simulation in: - ns-2 - BRITE · > 1k random topologies Data distribution: · Training = 25k · Testing = 10k | 40 features applying avg, stdev, min, and max on parameters: · One-way delay · IAT And on packets: · 3 following loss · 1 before loss · 1/2 before RTT; [130] finds that adding the number of losses is insignificant | · Congestion loss · Wireless loss | Ensemble DT: · 25 trees NN: · 40 input neurons · 2 hidden layers with 30 neurons · 1 output neuron · LMA^b learning k-NN: · k = 7 | AUC (%)^c, in technique order: · 98.40 · 94.24 · 98.23 · 97.96 · 98.13 · 97.61 · 95.41
Fonseca and Crovella [150] | Supervised: · Bayesian | Wired | Real data: · PMA project · BU Web server | · Loss pair RTT | · Congestion loss · Reordering | · Gaussian variables · 0 to 3 historic samples | In PMA: · TPR = 80% · FPR = 40% In BU: · TPR = 90% · FPR = 20% |
Jayaraj et al. [214] | Unsupervised: · EM for HMM · EM-clustering | Optical | Synthetic data: · ns-2 simulation · NSFNET topology Data distribution: · Training = 25k · Testing = 15k | · Number of bursts between failures | · Congestion loss · Contention loss | HMM: · 8 states · Gaussian variables · Viterbi inference · 26 EM iterations Clustering: · 8 clusters · 24 EM iterations | CV^c: · 0.16−0.42 (HMM) · 0.15−0.28 (clustering) HMM accuracy^a: · 86−96%
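
The HMM rows above ([282], [38], and the HMM half of [214]) share one recipe: fit a small Gaussian-emission HMM to a per-loss feature (e.g., loss pair RTT) with EM, then decode the hidden loss cause per observation. Below is a minimal sketch with the hmmlearn library, assuming the 4-state, Viterbi-decoded setup of [282]; the RTT data is synthetic placeholder data, and the state-to-cause mapping step is illustrative, not taken from the papers.

```python
# Sketch: EM-trained Gaussian HMM + Viterbi decoding for loss-cause inference,
# in the spirit of the 4-state HMM of Liu et al. [282]. Data here is synthetic.
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed dependency (pip install hmmlearn)

rng = np.random.default_rng(0)
# Placeholder loss pair RTTs (ms): two regimes standing in for wireless vs. congestion.
rtt = np.concatenate([rng.normal(30, 5, 5000), rng.normal(120, 20, 5000)])
X = rtt.reshape(-1, 1)  # hmmlearn expects shape (n_samples, n_features)

# EM (Baum-Welch) fits the transition matrix and Gaussian emissions.
hmm = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50, random_state=0)
hmm.fit(X)

# Viterbi decoding of the most likely hidden-state sequence. States are latent,
# so they must still be mapped to loss causes, e.g., by their learned mean RTT.
states = hmm.predict(X)
print(hmm.means_.ravel())   # per-state mean RTT, used to interpret each state
print(np.bincount(states))  # how many losses fall into each hidden state
```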
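The supervised row ([130]) compares tree ensembles, an MLP, and k-NN on the same 40-feature vectors and ranks them by AUC. A hedged sketch with scikit-learn follows, using a synthetic stand-in for the feature matrix and labels; the 25-tree ensemble size, the (30, 30) hidden layers, and k = 7 come from the Settings column, while everything else is illustrative. Note that scikit-learn's MLP does not offer LMA^b training, so Adam stands in here.

```python
# Sketch: comparing the classifier families from the supervised row by AUC.
# X, y are placeholders for the 40 delay/IAT-derived features and loss-cause labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              ExtraTreesClassifier, RandomForestClassifier)
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic 40-feature data sized like the row: 25k training, 10k testing.
X, y = make_classification(n_samples=35000, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=25000, random_state=0)

models = {
    "Boosting DT": AdaBoostClassifier(n_estimators=25, random_state=0),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=25, random_state=0),
    "Bagging DT": BaggingClassifier(n_estimators=25, random_state=0),
    "Extra-trees": ExtraTreesClassifier(n_estimators=25, random_state=0),
    # Adam optimizer used here; LMA is not available in scikit-learn.
    "MLP-NN": MLPClassifier(hidden_layer_sizes=(30, 30), max_iter=500, random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=7),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name:12s} AUC = {100 * auc:.2f}%")
```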
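Fonseca and Crovella's row [150] applies a Bayesian decision rule over Gaussian-modeled loss pair RTTs to separate congestion losses from reordering. The sketch below uses scikit-learn's GaussianNB as a generic stand-in for that style of classifier; the class-conditional RTT distributions are invented, and the 0-to-3 historic-samples windowing of [150] is not reproduced.

```python
# Sketch: Gaussian Bayesian classification of loss pair RTTs
# (congestion loss vs. reordering), loosely following the row for [150].
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
# Placeholder RTT samples (ms) per class; real values would come from traces.
rtt_congestion = rng.normal(150, 25, 2000)
rtt_reordering = rng.normal(60, 15, 2000)
X = np.concatenate([rtt_congestion, rtt_reordering]).reshape(-1, 1)
y = np.array([1] * 2000 + [0] * 2000)  # 1 = congestion, 0 = reordering

clf = GaussianNB().fit(X, y)  # class-conditional Gaussians + Bayes rule
pred = clf.predict(X)
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print(f"TPR = {tp / (tp + fn):.0%}, FPR = {fp / (fp + tn):.0%}")
```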
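Jayaraj et al.'s second technique [214] clusters the bursts-between-failures feature with EM and then labels clusters as congestion or contention losses. A sketch using scikit-learn's GaussianMixture (an EM-fitted mixture model) with the 8 clusters and 24 EM iterations from the Settings column; the feature values are synthetic, and the rule for mapping clusters to causes is an assumption for illustration.

```python
# Sketch: EM clustering of a burst-level loss feature, mirroring the
# 8-cluster / 24-iteration setup in the row for Jayaraj et al. [214].
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Placeholder "bursts between failures" counts; real values come from OBS traces.
feature = np.concatenate([rng.poisson(4, 5000), rng.poisson(40, 5000)]).astype(float)
X = feature.reshape(-1, 1)

gmm = GaussianMixture(n_components=8, max_iter=24, random_state=0)
labels = gmm.fit_predict(X)  # EM fit, then hard-assign each sample to a cluster

# Clusters are unlabeled: inspect per-cluster means to map them to loss causes
# (few bursts between failures suggests contention, many suggests congestion).
for k in np.argsort(gmm.means_.ravel()):
    print(f"cluster {k}: mean = {gmm.means_[k, 0]:.1f}, n = {(labels == k).sum()}")
```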