Artificial Neural Networks (ANNs) are designed to emulate the storage and learning mechanisms of biological brains. The standard ANN model is based on summation, calculating the net input as the weighted sum of the inputs. Finding the optimal weight set constitutes the training phase and is considered a difficult global optimization task that remains an active area of research. Many training algorithms, most of them local optimizers, have been proposed to improve the performance of neural networks. Global optimization methods are under continuous development and have recently been applied to training ANNs. In this PhD thesis, the swarm-based Artificial Bee Colony (ABC) algorithm is proposed for training neural networks, and its performance is analyzed alongside well-known conventional and modern optimization techniques.

The performance of the ABC algorithm is first tested on training summation-unit feedforward neural networks on three basic benchmark problems, and its success is compared against local and global optimizers. Product-unit networks are based on multiplicative nodes instead of additive ones, whose nonlinear basis functions can express strong interactions between variables; both unit types are sketched below. The ABC algorithm is then applied to two forecasting benchmark problems to determine the weights of the product-unit and summation-unit feedforward network models.

The Multilayer Perceptron (MLP), Learning Vector Quantization (LVQ), and Radial Basis Function (RBF) networks are among the most widely used ANN models for classification problems. In this study, the ABC algorithm is used to train MLP, LVQ, and RBF networks on nine classification benchmark problems from the UCI database. The results indicate that the ABC algorithm outperforms other machine-learning algorithms in terms of both generalization ability and classification error.
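To make the contrast between the two unit types concrete, the following is a minimal Python sketch of both net-input computations. The function names and example values are illustrative assumptions, not taken from the thesis; the product unit is shown with positive inputs, since fractional weights applied to negative inputs would produce complex values.

```python
import numpy as np

def summation_unit(x, w, b=0.0):
    """Standard summation unit: net input is the weighted sum of the inputs."""
    return np.dot(w, x) + b

def product_unit(x, w):
    """Product unit: inputs are raised to their weights and multiplied,
    so the weights act as learned exponents that can capture multiplicative
    interactions between variables (positive inputs assumed)."""
    return np.prod(np.power(x, w))

x = np.array([2.0, 3.0])       # illustrative inputs
w = np.array([0.5, 1.0])       # illustrative weights
print(summation_unit(x, w))    # 0.5*2.0 + 1.0*3.0 = 4.0
print(product_unit(x, w))      # 2.0**0.5 * 3.0**1.0 ≈ 4.243
```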
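The sketch below outlines the canonical Artificial Bee Colony algorithm, used here as a black-box minimizer of a network's training error under the usual employed-bee, onlooker-bee, and scout-bee scheme. The parameter values (`n_sources`, `limit`, `max_cycles`) and the toy objective are illustrative assumptions, not the thesis's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_minimize(objective, dim, bounds=(-1.0, 1.0),
                 n_sources=20, limit=50, max_cycles=500):
    """Sketch of the Artificial Bee Colony algorithm as a minimizer.
    Each food source is a candidate weight vector; `objective` returns
    the (non-negative) training error to be minimized."""
    lo, hi = bounds
    sources = rng.uniform(lo, hi, size=(n_sources, dim))
    costs = np.array([objective(s) for s in sources])
    trials = np.zeros(n_sources, dtype=int)

    def search(i):
        """Perturb one dimension of source i toward a random partner,
        keeping the new position only if it improves (greedy selection)."""
        k = rng.choice([m for m in range(n_sources) if m != i])
        j = rng.integers(dim)
        cand = sources[i].copy()
        cand[j] += rng.uniform(-1.0, 1.0) * (sources[i][j] - sources[k][j])
        cand[j] = np.clip(cand[j], lo, hi)
        c = objective(cand)
        if c < costs[i]:
            sources[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(n_sources):             # employed-bee phase
            search(i)
        fitness = 1.0 / (1.0 + costs)          # canonical fitness transform
        probs = fitness / fitness.sum()
        for i in rng.choice(n_sources, size=n_sources, p=probs):
            search(i)                          # onlooker-bee phase
        worst = int(np.argmax(trials))         # scout-bee phase
        if trials[worst] > limit:
            sources[worst] = rng.uniform(lo, hi, size=dim)
            costs[worst] = objective(sources[worst])
            trials[worst] = 0

    best = int(np.argmin(costs))
    return sources[best], costs[best]

# Hypothetical usage: recover the weights of a single summation unit
X = rng.uniform(-1.0, 1.0, size=(50, 3))
y = X @ np.array([0.7, -1.2, 0.4])             # targets from known weights
mse = lambda w: float(np.mean((X @ w - y) ** 2))
w_best, err = abc_minimize(mse, dim=3, bounds=(-2.0, 2.0))
print(w_best, err)
```

The greedy one-dimension neighbor search provides local exploitation, while the scout restarts of exhausted sources supply global exploration; this mixture is why ABC is positioned in the thesis against both local and global optimizers.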