This paper proposes a new metaheuristic-based algorithm for training multilayer perceptrons (MLPs), built on the recently introduced Artificial Gorilla Troops Optimizer (GTO). The precision and consistency of the method's convergence are used as performance metrics. GTO is applied here for the first time as an MLP trainer: it searches for the optimal connection weights and biases, and it is evaluated on five widely used classification data sets (XOR, balloon, heart, iris, and breast cancer) from the University of California, Irvine (UCI) Machine Learning Repository. Its results are compared with those obtained using the more established grey wolf optimizer (GWO), the whale optimization algorithm (WOA), and the sine cosine algorithm (SCA).
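The general idea of metaheuristic MLP training described above can be sketched as follows: all connection weights and biases are flattened into a single candidate vector, the mean squared classification error serves as the fitness function, and a population-based search improves the candidates over generations. The sketch below is a minimal, hypothetical illustration on the XOR data set; it uses a simple (mu + lambda) evolution strategy as a stand-in for GTO, whose actual update equations are not reproduced here, and the network size, population size, and mutation scale are illustrative assumptions, not the paper's settings.

```python
import math
import random

random.seed(0)

# XOR data set: inputs and target labels
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def mlp(params, x):
    # 2-2-1 network; params holds 9 values:
    # hidden weights (4), hidden biases (2), output weights (2), output bias (1)
    h = []
    for j in range(2):
        s = params[2 * j] * x[0] + params[2 * j + 1] * x[1] + params[4 + j]
        h.append(math.tanh(s))
    out = params[6] * h[0] + params[7] * h[1] + params[8]
    return 1.0 / (1.0 + math.exp(-out))  # sigmoid output in (0, 1)

def mse(params):
    # Fitness: mean squared error over the data set (lower is better)
    return sum((mlp(params, x) - y) ** 2 for x, y in XOR) / len(XOR)

# Population-based search over the flat weight/bias vector
# (a simple (mu + lambda) evolution strategy, standing in for GTO)
pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(30)]
for gen in range(300):
    pop.sort(key=mse)
    parents = pop[:10]                       # keep the 10 fittest candidates
    children = [[w + random.gauss(0, 0.3)    # Gaussian mutation of a parent
                 for w in random.choice(parents)]
                for _ in range(20)]
    pop = parents + children

best = min(pop, key=mse)
print(round(mse(best), 4))
```

Any population-based metaheuristic, including GTO, GWO, WOA, or SCA, can be dropped into the search loop above unchanged, since each one only needs the flat parameter vector and the fitness value; this is what makes such optimizers interchangeable as MLP trainers.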