Table 17: Evaluation metrics.
| Evaluation Metric | # Used | Papers |
| --- | --- | --- |
| Recall | 31 | [FS43] [FS15] [FS7] [FS34] [FS36] [FS44] [FS25] [FS45] [FS3] [FS46] [FS18] [FS27] [FS37] [FS19] [FS38] [FS20] [FS39] [FS2] [FS5] [FS48] [FS22] [FS31] [FS24] [FS6] [FS42] [FS49] [FS51] [FS53] [FS54] [FS55] [FS56] |
| Precision | 32 | [FS15] [FS7] [FS34] [FS35] [FS36] [FS44] [FS25] [FS45] [FS3] [FS46] [FS18] [FS27] [FS37] [FS19] [FS20] [FS39] [FS10] [FS29] [FS2] [FS5] [FS48] [FS22] [FS31] [FS24] [FS6] [FS42] [FS49] [FS51] [FS53] [FS54] [FS55] [FS56] |
| Entropy | 7 | [FS32] [FS27] [FS21] [FS12] [FS51] [FS55] [FS56] |
| F-Measure | 16 | [FS32] [FS15] [FS33] [FS34] [FS8] [FS27] [FS37] [FS20] [FS39] [FS29] [FS5] [FS22] [FS14] [FS49] [FS55] [FS57] |
| Precision@N | 6 | [FS16] [FS44] [FS8] [FS10] [FS2] [FS14] |
| Recall@N | 1 | [FS14] |
| Mean Absolute Error (MAE) | 11 | [FS1] [FS4] [FS26] [FS9] [FS21] [FS30] [FS11] [FS22] [FS40] [FS50] [FS52] |
| Normalised Mean Absolute Error (NMAE) | 5 | [FS1] [FS17] [FS9] [FS11] [FS40] |
| Purity | 7 | [FS32] [FS46] [FS27] [FS21] [FS12] [FS51] [FS56] |
| Time | 4 | [FS43] [FS18] [FS38] [FS48] |
| Accuracy | 6 | [FS7] [FS25] [FS38] [FS47] [FS13] [FS42] |
| Silhouette | 2 | [FS29] [FS47] |
| Quantisation Error (QE) | 1 | [FS13] |
| Dunn Index | 1 | [FS23] |
| Root Mean Square Error (RMSE) | 4 | [FS17] [FS22] [FS50] [FS52] |
| S@K | 1 | [FS2] |
| Topographic Error (TE) | 1 | [FS13] |
| Intra-Cluster Variance (ICV) | 1 | [FS23] |
| Normalised Discounted Cumulative Gain (NDCGn) | 1 | [FS8] |
| Median Relative Error (MRE) | 2 | [FS17] [FS40] |
| Neuron Utilisation (NU) | 1 | [FS13] |
| Average Item-Cluster Similarity (AICS) | 1 | [FS23] |
| Normalised Mutual Information (NMI) | 1 | [FS46] |
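The most frequently used metrics in Table 17 (Precision, Recall, F-Measure, MAE, NMAE) are standard set-based retrieval and prediction-error measures. The sketch below is our own illustration of how they are typically computed, not code taken from any of the surveyed studies; the function names and toy data are assumptions made for the example.

```python
# Minimal sketch of the most frequently used metrics in Table 17.
# All names and sample data are illustrative, not from the surveyed papers.

def precision_recall_f1(relevant: set, retrieved: set) -> tuple[float, float, float]:
    """Set-based Precision, Recall, and F-Measure for a retrieval evaluation."""
    true_positives = len(relevant & retrieved)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

def mae(actual: list[float], predicted: list[float]) -> float:
    """Mean Absolute Error for rating/QoS prediction evaluations."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def nmae(actual: list[float], predicted: list[float]) -> float:
    """MAE normalised by the range of the observed values."""
    return mae(actual, predicted) / (max(actual) - min(actual))

if __name__ == "__main__":
    relevant, retrieved = {"d1", "d2", "d3"}, {"d2", "d3", "d4"}
    print(precision_recall_f1(relevant, retrieved))          # (0.667, 0.667, 0.667)
    ratings, predictions = [4.0, 3.0, 5.0, 2.0], [3.5, 3.0, 4.0, 2.5]
    print(mae(ratings, predictions), nmae(ratings, predictions))  # 0.5, 0.167
```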