Artificial neural network potentials
<ref>[http://dx.doi.org/10.1016/0893-6080(89)90020-8 Kurt Hornik, Maxwell Stinchcombe, Halbert White "Multilayer feedforward networks are universal approximators", Neural Networks '''2''' pp. 359-366 (1989)]</ref> to an atomic or molecular potential energy surface. In particular, the ''output layer'', or ''node'', provides an energy as a function of the coordinates, which form the ''input layer''.
==Activation functions==
==Example==
The output of a feedforward NN with a single layer of hidden neurons, each with a sigmoid activation function, followed by a linear output neuron, is given by:
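A standard closed form for such a network is the following (the symbol names here are illustrative, not taken from a specific source):

:<math>E = b^{(2)} + \sum_{j=1}^{N_h} w^{(2)}_{j}\,\sigma\!\left( b^{(1)}_{j} + \sum_{i=1}^{N_i} w^{(1)}_{ji}\, x_i \right)</math>

where <math>x_i</math> are the <math>N_i</math> input coordinates, <math>N_h</math> is the number of hidden neurons, <math>\sigma(t) = 1/(1+e^{-t})</math> is the sigmoid activation, <math>w^{(1)}_{ji}</math> and <math>b^{(1)}_{j}</math> are the weights and biases of the hidden layer, and <math>w^{(2)}_{j}</math> and <math>b^{(2)}</math> those of the linear output neuron.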