Neuron (neural network)
Definition
Block diagram of a generic neuron with inputs: [ilmath]I_1,\ldots,I_n[/ilmath]
A neuron, in the context of an artificial neural network, consists of:

- an output domain, [ilmath]\mathcal{O} [/ilmath], typically [ilmath][-1,1]\subseteq\mathbb{R} [/ilmath] or [ilmath][0,1]\subseteq\mathbb{R} [/ilmath]
  - Usually [ilmath]\{0,1\} [/ilmath] for input and output neurons
- some inputs, [ilmath]I_i[/ilmath], typically [ilmath]I_i\in\mathbb{R} [/ilmath]
- some weights, one for each input, [ilmath]w_i[/ilmath], again [ilmath]w_i\in\mathbb{R} [/ilmath]
- a way to combine each input with a weight (typically multiplication, [ilmath]I_i\cdot w_i[/ilmath]) - creating an "input activation", [ilmath]A_i\in\mathbb{R} [/ilmath]
- a bias, [ilmath]\theta[/ilmath] (of the same type as the result of combining an input with a weight. Typically this can be simulated by having a fixed "on" input and treating the bias as another weight) - another input activation, [ilmath]A_0[/ilmath]
- a way to combine the input values, typically: [ilmath]\sum_{j=0}^nA_j=\sum_{j=1}^nI_jw_j+\theta[/ilmath]
- an activation function, [ilmath]\mathcal{A}(\cdot):\mathbb{R}\rightarrow\mathcal{O}\subseteq\mathbb{R} [/ilmath], which maps the combined input activations to an output value.
For the block diagram above, the output of the neuron would be:
- [ilmath]\mathcal{A}\left(\sum_{i=1}^n(I_iw_i)+\theta\right)[/ilmath]
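This combination rule can be illustrated with a minimal Python sketch (not part of the source article). The function name `neuron_output` and the choice of a sigmoid activation are illustrative assumptions; the sigmoid is used only because it maps [ilmath]\mathbb{R} [/ilmath] into an output domain of [ilmath](0,1)[/ilmath].

```python
import math

def neuron_output(inputs, weights, bias, activation):
    # Input activations A_i = I_i * w_i, combined by summation,
    # with the bias theta added as the extra activation A_0
    combined = sum(i * w for i, w in zip(inputs, weights)) + bias
    # The activation function maps the combined value into the output domain
    return activation(combined)

def sigmoid(x):
    # Maps R onto (0, 1), one typical choice of output domain
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative inputs, weights and bias
print(neuron_output([0.5, 1.0, -0.2], [0.8, -0.3, 0.5], bias=0.1, activation=sigmoid))
```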
Specific models
For an exhaustive list see Category:Types of neuron in a neural network
McCulloch-Pitts neuron
Diagram of a McCulloch-Pitts neuron
A McCulloch-Pitts neuron consists of:

- Inputs: [ilmath](I_1,\ldots,I_n)\in\mathbb{R}^n[/ilmath]
  - Usually each [ilmath]I_i[/ilmath] is confined to [ilmath][0,1]\subset\mathbb{R} [/ilmath] or [ilmath][-1,1]\subset\mathbb{R} [/ilmath]
- A set of weights, one for each input: [ilmath](w_1,\ldots,w_n)\in\mathbb{R}^n[/ilmath]
- A bias: [ilmath]\theta\in\mathbb{R} [/ilmath]
- An activation function, [ilmath]\mathcal{A}:\mathbb{R}\rightarrow\mathbb{R} [/ilmath]
  - It is more common to see [ilmath]\mathcal{A}:\mathbb{R}\rightarrow[-1,1]\subset\mathbb{R} [/ilmath] or sometimes [ilmath]\mathcal{A}:\mathbb{R}\rightarrow[0,1]\subset\mathbb{R} [/ilmath] than the whole of [ilmath]\mathbb{R} [/ilmath]
The output of the neuron is given by:
- [math]\text{Output}:=\mathcal{A}\left(\sum_{i=1}^n(I_iw_i)+\theta\right)[/math]
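To make the formula concrete, here is a minimal Python sketch (not from the source article). It assumes the common choice of a Heaviside step activation for this model; the particular weights and bias are illustrative and happen to make the neuron compute the logical AND of two binary inputs.

```python
def mcculloch_pitts(inputs, weights, bias):
    # Weighted sum plus bias: sum_i I_i * w_i + theta
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    # Heaviside step activation: output 1 if the sum is non-negative, else 0
    return 1 if s >= 0 else 0

# With these illustrative weights and bias the neuron fires only
# when both binary inputs are 1 (logical AND)
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts([a, b], weights=[1, 1], bias=-1.5))
```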
References
- ↑ Neural Networks and Statistical Learning - Ke-Lin Du and M. N. S. Swamy