Neuron (neural network)

From Maths
Revision as of 13:33, 22 April 2016 by Alec (Talk | contribs)


Definition

[math]\xymatrix{ I_{1} \ar[ddrr]^{w_{1} } \\ I_{2} \ar[drr]_{w_{2} } \\ \vdots & & *++[o][F-]{\sum} \ar[rr]^-{\mathcal{A}(\cdot)} & & (\text{Output}) \\ I_{n-1} \ar[urr]^{w_{n-1} } \\ I_{n} \ar[uurr]_{w_{n}} & & \text{Bias} \ar[uu]^{\theta} }[/math]
Block diagram of a generic neuron with inputs: [ilmath]I_1,\ldots,I_n[/ilmath]
A neuron in a neural network has:
  • an output domain, [ilmath]\mathcal{O} [/ilmath], typically [ilmath][-1,1]\subseteq\mathbb{R} [/ilmath] or [ilmath][0,1]\subseteq\mathbb{R} [/ilmath]
    • Usually [ilmath]\{0,1\} [/ilmath] for input and output neurons
  • some inputs, [ilmath]I_i[/ilmath], typically [ilmath]I_i\in\mathbb{R} [/ilmath]
  • some weights, 1 for each input, [ilmath]w_i[/ilmath], again [ilmath]w_i\in\mathbb{R} [/ilmath]
  • a way to combine each input with a weight (typically multiplication, [ilmath]I_i\cdot w_i[/ilmath]), creating an "input activation", [ilmath]A_i\in\mathbb{R} [/ilmath]
  • a bias, [ilmath]\theta[/ilmath] (of the same type as the result of combining an input with a weight; typically this can be simulated by having a fixed "on" input and treating the bias as another weight) - another input activation, [ilmath]A_0[/ilmath]
  • a way to combine the input values, typically: [ilmath]\sum_{j=0}^nA_j=\sum_{j=1}^nI_jw_j+\theta[/ilmath]
  • an activation function [ilmath]\mathcal{A}(\cdot):\mathbb{R}\rightarrow\mathcal{O}\subseteq\mathbb{R} [/ilmath], this maps the combined input activations to an output value.

In the example to the right, the output of the neuron would be:

  • [ilmath]\mathcal{A}\left(\sum_{i=1}^n(I_iw_i)+\theta\right)[/ilmath]
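The components above can be sketched in code. The following is a minimal illustration (not part of the original article): the weighted inputs are summed with the bias and passed through an activation function [ilmath]\mathcal{A} [/ilmath]; here [ilmath]\tanh[/ilmath] is chosen as an example activation since it maps into [ilmath][-1,1][/ilmath], though the definition permits other choices.

```python
import math

def neuron_output(inputs, weights, bias, activation=math.tanh):
    """Return A(sum_i I_i*w_i + theta) for a single generic neuron.

    inputs  -- the I_i values
    weights -- the w_i values, one per input
    bias    -- theta, folded into the sum as the activation A_0
    activation -- the function A(.); tanh is an example choice
    """
    # Combine each input with its weight, then sum with the bias
    net = sum(i * w for i, w in zip(inputs, weights)) + bias
    return activation(net)

# Example: the weighted inputs cancel, so net = 0 and tanh(0) = 0
out = neuron_output([0.5, -0.5], [1.0, 1.0], 0.0)
```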

Specific models

For an exhaustive list see Category:Types of neuron in a neural network

McCulloch-Pitts neuron

[math]\xymatrix{ I_1 \ar[drr]^{w_1} & & \\ \vdots \ar@{.>}[rr] & & *++[o][F-]{\sum} \ar@{<.}[ll]!<0.5em,1.75em> \ar@{<.}[ll]!<0.5em,-1.75em> \ar@{<-}[d]!<0em,1em>_(.9){\theta} \ar[r]^-{\text{net}} & *+[o][F-]{\mathcal{A}(\cdot)} \ar[r] & \text{Output}\\ I_n \ar[urr]_{w_n} & & & } [/math]
Diagram of a McCulloch-Pitts neuron
The McCulloch-Pitts neuron has[1]:
  • Inputs: [ilmath](I_1,\ldots,I_n)\in\mathbb{R}^n[/ilmath]
    • Usually each [ilmath]I_i[/ilmath] is confined to [ilmath][0,1]\subset\mathbb{R} [/ilmath] or [ilmath][-1,1]\subset\mathbb{R} [/ilmath]
  • A set of weights, one for each input: [ilmath](w_1,\ldots,w_n)\in\mathbb{R}^n[/ilmath]
  • A bias: [ilmath]\theta\in\mathbb{R} [/ilmath]
  • An activation function, [ilmath]\mathcal{A}:\mathbb{R}\rightarrow\mathbb{R} [/ilmath]
    • It is more common to see [ilmath]\mathcal{A}:\mathbb{R}\rightarrow[-1,1]\subset\mathbb{R} [/ilmath] or sometimes [ilmath]\mathcal{A}:\mathbb{R}\rightarrow[0,1]\subset\mathbb{R} [/ilmath] than the whole of [ilmath]\mathbb{R} [/ilmath]

The output of the neuron is given by:

[math]\text{Output}:=\mathcal{A}\left(\sum_{i=1}^n(I_iw_i)+\theta\right)[/math]
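As a concrete sketch (an illustration, not taken from the cited text), the output formula can be implemented with a hard-threshold activation, a common choice for this model that fires 1 when the net input is non-negative:

```python
def mcculloch_pitts(inputs, weights, bias):
    """Output = A(sum_i I_i*w_i + theta) with a hard-threshold A."""
    net = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if net >= 0 else 0

# With weights (1, 1) and bias -1.5 this neuron computes a two-input
# AND gate: the net input is non-negative only when both inputs are 1.
and_out = mcculloch_pitts([1, 1], [1, 1], -1.5)
```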

References

  1. Ke-Lin Du and M. N. S. Swamy, Neural Networks and Statistical Learning
