How are neural networks trained?

Neural Networks

Typical workflow for neural network design

Each neural network application is unique. However, the general steps involved in developing the network are:

  1. Access and prepare the data
  2. Create the artificial neural network
  3. Configure the network's inputs and outputs
  4. Tune the network parameters (the weights and bias values) to optimize performance
  5. Train the network
  6. Validate the network's results
  7. Integrate the network into a production system
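The article describes this workflow in terms of MATLAB and the Deep Learning Toolbox; as a language-neutral illustration, the steps can be sketched in plain NumPy. Everything here (the XOR toy data, the layer sizes, the learning rate) is an illustrative assumption, not part of the original article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Steps 1 and 3: prepare the data and define inputs/targets (toy XOR problem).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Step 2: create the network -- one hidden layer of four tanh units.
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def loss(p):
    # Binary cross-entropy between predictions p and targets y.
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

_, p0 = forward(X)
initial_loss = loss(p0)

# Steps 4 and 5: tune the weights and biases by gradient-descent training.
lr = 0.5
for _ in range(5000):
    h, p = forward(X)
    g_out = (p - y) / len(X)            # gradient of the loss w.r.t. the output logits
    g_h = (g_out @ W2.T) * (1 - h**2)   # backpropagate through the tanh layer
    W2 -= lr * (h.T @ g_out);  b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * (X.T @ g_h);    b1 -= lr * g_h.sum(axis=0)

# Step 6: validate -- the training error should have dropped sharply.
_, p = forward(X)
print(initial_loss, loss(p))
```

Step 7 (production integration) is outside the scope of a sketch; in the MATLAB workflow it corresponds to the generated program code mentioned below.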

Classification and clustering with shallow networks

MATLAB and the Deep Learning Toolbox provide command-line functions and apps for creating, training, and simulating shallow neural networks. The apps make it easy to develop neural networks for tasks such as classification, regression (including time series regression), and clustering. After creating the networks with these tools, you can automatically generate MATLAB program code to capture your work and automate tasks.

Preprocessing, postprocessing, and improving your network

Preprocessing network inputs and targets improves the efficiency of training a shallow neural network. Postprocessing enables a detailed analysis of network performance. MATLAB and Simulink® provide tools to:

  • reduce the dimensionality of input vectors using principal component analysis
  • perform a regression analysis between the network response and the corresponding targets
  • scale inputs and targets into the range [-1, 1]
  • normalize the mean and standard deviation of the training data set
  • use automatic data preprocessing and data splitting when creating your networks
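The first, third, and fourth points above can be sketched in a few lines of NumPy. This is an illustration under stated assumptions, not the Toolbox implementation; the MATLAB counterparts I allude to in the comments (`processpca`, `mapminmax`, `mapstd`) are the corresponding Deep Learning Toolbox functions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(loc=5.0, scale=3.0, size=(100, 4))  # hypothetical raw input data

# Scale each feature into [-1, 1] (cf. MATLAB's mapminmax).
lo, hi = X.min(axis=0), X.max(axis=0)
X_scaled = 2 * (X - lo) / (hi - lo) - 1

# Normalize each feature to zero mean and unit standard deviation (cf. mapstd).
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Reduce dimensionality with principal component analysis via SVD
# (cf. processpca): keep only the top two principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_pca = Xc @ Vt[:2].T

print(X_scaled.min(), X_scaled.max(), X_pca.shape)
```

Note that MATLAB stores observations in columns rather than rows, so the axes are transposed relative to the Toolbox functions.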

Improving the network's ability to generalize helps prevent overfitting, a common problem in artificial neural network design. Overfitting occurs when a network has memorized the training set but has not learned to generalize to new inputs. It initially produces only a relatively small error on the training set, but that error can become significantly larger when the network has to process new data. Learn more about using cross-validation to avoid overfitting.

You can use the following two methods to improve generalization:

  • Regularization modifies the network's performance function (the measure of error that the training process minimizes). By including the sizes of the weights and bias values in this measure, regularization produces a network that performs well on the training data and also behaves more smoothly on new data.
  • Early stopping uses two different data sets: the training set, which updates the weights and bias values, and the validation set, which stops training as soon as the network begins to overfit the data.
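Both techniques can be sketched together on a small regression problem. This is a minimal NumPy illustration, not the Toolbox's training routine; the data, the penalty strength `lam`, and the `patience` threshold are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear regression data, split into training and validation sets.
X = rng.uniform(-1, 1, size=(200, 5))
true_w = np.array([1.0, -2.0, 0.0, 0.0, 0.5])
y = X @ true_w + rng.normal(scale=0.3, size=200)
X_tr, y_tr, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

w = np.zeros(5)
lam = 0.01   # regularization strength: penalizes large weights
lr = 0.1
best_val, best_w, patience, waited = np.inf, w.copy(), 20, 0

for epoch in range(2000):
    # Regularized performance function: MSE + lam * ||w||^2.
    grad = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr) + 2 * lam * w
    w -= lr * grad
    val_err = np.mean((X_val @ w - y_val) ** 2)
    if val_err < best_val:      # validation error still improving: keep going
        best_val, best_w, waited = val_err, w.copy(), 0
    else:                       # early stopping: halt once the validation
        waited += 1             # error has stopped improving for `patience`
        if waited >= patience:  # consecutive epochs, and keep the best weights
            break

w = best_w
print(best_val)
```

The same division of the data into a training set (driving the weight updates) and a validation set (deciding when to stop) is what the early-stopping bullet above describes.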