Neuron (neural network)
Definition
[Figure: Block diagram of a generic neuron with [ilmath]n\in\mathbb{N} [/ilmath] inputs, [ilmath]I_1,\ldots,I_n[/ilmath]]
- an output domain, [ilmath]\mathcal{O} [/ilmath], typically [ilmath][-1,1]\subseteq\mathbb{R} [/ilmath] or [ilmath][0,1]\subseteq\mathbb{R} [/ilmath]
- Usually [ilmath]\{0,1\} [/ilmath] for input and output neurons
- some inputs, [ilmath]I_i[/ilmath], typically [ilmath]I_i\in\mathbb{R} [/ilmath]
- some weights, 1 for each input, [ilmath]w_i[/ilmath], again [ilmath]w_i\in\mathbb{R} [/ilmath]
- a way to combine each input with its weight (typically multiplication, [ilmath]I_i\cdot w_i[/ilmath]) - creating an "input activation", [ilmath]A_i\in\mathbb{R} [/ilmath]
- a bias, [ilmath]\theta[/ilmath] (of the same type as the result of combining an input with a weight; typically this can be simulated by having a fixed "on" input and treating the bias as another weight) - another input activation, [ilmath]A_0[/ilmath]
- a way to combine the input values, typically: [ilmath]\sum_{j=0}^nA_j=\sum_{j=1}^nI_jw_j+\theta[/ilmath]
- an activation function [ilmath]\mathcal{A}(\cdot):\mathbb{R}\rightarrow\mathcal{O}\subseteq\mathbb{R} [/ilmath], this maps the combined input activations to an output value.
For the block diagram above, the output of the neuron would be:
- [ilmath]\mathcal{A}\left(\sum_{i=1}^n(I_iw_i)+\theta\right)[/ilmath]
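The computation above can be sketched in a few lines of code. This is a minimal illustration, not a reference implementation: the input values, weights, and bias are arbitrary, and [ilmath]\tanh[/ilmath] is assumed as the activation function (its range is [ilmath][-1,1][/ilmath], matching one of the typical output domains given above).

```python
import math

def neuron(inputs, weights, bias):
    """Generic neuron: weighted sum of inputs plus bias, passed through tanh."""
    # Input activations A_i = I_i * w_i, combined by summation;
    # the bias theta acts as an extra activation A_0.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    # Activation function A(.) : R -> [-1, 1], here tanh
    return math.tanh(total)

# Example with n = 3 inputs (values chosen arbitrarily)
out = neuron([1.0, 0.5, -1.0], [0.2, 0.4, 0.1], bias=-0.1)
```

Here `out` equals [ilmath]\tanh(1.0\cdot 0.2+0.5\cdot 0.4+(-1.0)\cdot 0.1-0.1)=\tanh(0.2)[/ilmath], which necessarily lies in [ilmath][-1,1][/ilmath].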