Notes:Dual basis
- Note: this article is very pedantic, which is why, for example, the concrete-to-abstract isomorphism is invoked.
TODO: Finish off and turn into a task
Basis
Let [ilmath](V,\mathcal{K})[/ilmath] be a finite-dimensional vector space over the field [ilmath]\mathcal{K} [/ilmath], and suppose it has dimension [ilmath]n\in\mathbb{N} [/ilmath].
- Let [ilmath]E:=\{E_1,\ldots,E_n\}[/ilmath] be any basis of [ilmath]V[/ilmath].
- Suppose [ilmath]V^*[/ilmath] is the set consisting of all functions [ilmath]f:V\rightarrow\mathcal{K} [/ilmath] which are linear maps.
- That is for [ilmath]f\in V^*[/ilmath] we have [ilmath]f:V\rightarrow\mathcal{K} [/ilmath] and:
- [ilmath]\forall \alpha,\beta\in\mathcal{K}\ \forall u,v\in V[f(\alpha u+\beta v)=\alpha f(u)+\beta f(v)][/ilmath] (the map is linear)
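- For an illustrative example (not part of the setup above): take [ilmath]V=\mathbb{R}^2[/ilmath] over [ilmath]\mathbb{R}[/ilmath]; then [ilmath]f:\mathbb{R}^2\rightarrow\mathbb{R}[/ilmath] given by [ilmath]f:(x,y)\mapsto 3x-2y[/ilmath] is linear, so [ilmath]f\in V^*[/ilmath], whereas [ilmath]g:(x,y)\mapsto xy[/ilmath] fails linearity (e.g. [ilmath]g(2(1,1))=4\ne 2g((1,1))=2[/ilmath]), so [ilmath]g\notin V^*[/ilmath].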
Then I claim:
- [ilmath]\{\varepsilon_1,\ldots,\varepsilon_n\} [/ilmath] is a basis of [ilmath]V^*[/ilmath], the dual space to [ilmath]V[/ilmath], where we define:
- [ilmath]\varepsilon_i:V\rightarrow\mathcal{K}[/ilmath] (obviously [ilmath]\varepsilon_i\in V^*[/ilmath]) by [ilmath]\varepsilon_i:=\varepsilon_i'\circ \Phi_E[/ilmath].
- [ilmath]\Phi_E:V\rightarrow\mathcal{K}^n[/ilmath] is the concrete-to-abstract isomorphism[Note 1] given by: [ilmath]\Phi_E:v_1E_1+\cdots+v_nE_n\mapsto(v_1,\ldots,v_n)^T[/ilmath] (maps to a column vector) and
- [ilmath]\varepsilon_i':\mathcal{K}^n\rightarrow\mathcal{K} [/ilmath] defined by: [ilmath]\varepsilon_i':(v_1,\ldots,v_{i-1},v_i,v_{i+1},\ldots,v_n)\mapsto v_i[/ilmath]
- Thus [ilmath]\varepsilon_i:\sum_{j=1}^n v_jE_j\mapsto v_i[/ilmath] (a worked example follows this list)
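For a worked example of this definition (with an illustrative choice of space and basis): take [ilmath]V=\mathbb{R}^2[/ilmath] with basis [ilmath]E_1:=(1,0)[/ilmath] and [ilmath]E_2:=(1,1)[/ilmath]. Any [ilmath](x,y)\in\mathbb{R}^2[/ilmath] decomposes as [ilmath](x,y)=(x-y)E_1+yE_2[/ilmath], so [ilmath]\Phi_E:(x,y)\mapsto(x-y,y)^T[/ilmath], and thus [ilmath]\varepsilon_1:(x,y)\mapsto x-y[/ilmath] and [ilmath]\varepsilon_2:(x,y)\mapsto y[/ilmath]. In particular [ilmath]\varepsilon_i(E_j)=1[/ilmath] if [ilmath]i=j[/ilmath] and [ilmath]0[/ilmath] otherwise.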
Proof
The proof is done in two parts: first we must show that [ilmath]\text{Span}(\{\varepsilon_1,\ldots,\varepsilon_n\})=V^*[/ilmath], then that the set of (co)vectors [ilmath]\{\varepsilon_1,\ldots,\varepsilon_n\} [/ilmath] is linearly independent.
Span part
- Let [ilmath]f\in V^*[/ilmath] be given. We will show this implies [ilmath]f\in\text{Span}(\{\varepsilon_1,\ldots,\varepsilon_n\})[/ilmath], and thus that [ilmath]V^*\subseteq\text{Span}(\{\varepsilon_1,\ldots,\varepsilon_n\})[/ilmath]
- First note that we can say [ilmath]f=g\iff \forall v\in V[f(v)=g(v)][/ilmath][Note 2], so rather than trying to show [ilmath](a,b)\in f\iff (a,b)\in g[/ilmath] (treating the functions as relations) we can instead deal with them as maps.
- Let [ilmath]v\in V[/ilmath] be given. Note that we can write [ilmath]v=\sum_{j=1}^n v_jE_j[/ilmath] which we shall write as [ilmath]\sum v_jE_j[/ilmath] for short on this page.
- Now [ilmath]f(v)=f(\sum v_jE_j)=\sum v_jf(E_j)[/ilmath] by linearity of [ilmath]f[/ilmath].
- Note that [ilmath]f(\alpha E_j)=k_j\varepsilon_j(\alpha E_j)[/ilmath] for some [ilmath]k_j\in\mathcal{K} [/ilmath]. Choose [ilmath]k_j:=\frac{f(E_j)}{\varepsilon_j(E_j)}[/ilmath] and the result follows[Note 3] (in fact [ilmath]\varepsilon_j(E_j)=1[/ilmath] by definition, so [ilmath]k_j=f(E_j)[/ilmath]). Thus:
- we see [ilmath]f(v)=f(\sum v_jE_j)=\sum v_jf(E_j)=\sum v_jk_j\varepsilon_j(E_j)=\sum (v_jk_j)\varepsilon_j(E_j)[/ilmath]
- Since [ilmath]v\in V[/ilmath] was arbitrary we have shown: [ilmath]\forall v\in V\left[f(v)=\left(\sum k_j\varepsilon_j\right)(v)\right][/ilmath][Note 4]
- Thus we see [ilmath]f=\sum k_j\varepsilon_j[/ilmath]
- Thus [ilmath]f\in\text{Span}(\{\varepsilon_1,\ldots,\varepsilon_n\})[/ilmath]
Going the other way, showing that [ilmath]\text{Span}(\{\varepsilon_1,\ldots,\varepsilon_n\})\subseteq V^*[/ilmath] is trivial: a linear combination of linear maps is itself a linear map. In fact, as we already know [ilmath]V^*[/ilmath] is a vector space, we might be able to show it just by applying what we already know! Either way, it's not hard.
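As a quick sanity check of the span argument, here is a minimal numerical sketch (assuming NumPy; the basis and the functional below are hypothetical choices matching the worked example above, not anything fixed by this page):

<pre>
import numpy as np

# Hypothetical basis of R^2: columns of B are E_1 = (1,0)^T, E_2 = (1,1)^T.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# A hypothetical member of V*: f(x, y) = 3x - 2y.
f = lambda v: 3.0 * v[0] - 2.0 * v[1]

# Phi_E sends v to its coordinate column w.r.t. E, i.e. solves B c = v.
Phi_E = lambda v: np.linalg.solve(B, v)

# eps_i := eps_i' o Phi_E picks out the i-th coordinate.
eps = [lambda v, i=i: Phi_E(v)[i] for i in range(2)]

# Span claim: f = sum_j k_j eps_j with k_j = f(E_j) (as eps_j(E_j) = 1).
v = np.random.rand(2)
assert np.isclose(f(v), sum(f(B[:, j]) * eps[j](v) for j in range(2)))
</pre>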
Linear independence part
- Now we want to show that [ilmath]\{\varepsilon_1,\ldots,\varepsilon_n\}[/ilmath] is a linearly independent set. This means we want to show:
- [ilmath]\left(\left(\sum \alpha_i\varepsilon_i\right)=\underline{0}\right)\implies\left(\forall i\in\{1,\ldots,n\}[\alpha_i=0]\right)[/ilmath], where [ilmath]\underline{0} [/ilmath] denotes the map [ilmath]\underline{0}:V\rightarrow\mathcal{K} [/ilmath] by [ilmath]\underline{0}:v\mapsto 0[/ilmath]. Let's prove this:
- Let [ilmath]i\in\{1,\ldots,n\} [/ilmath] be given
- Suppose [ilmath]\alpha_i\ne 0[/ilmath]
- Choose [ilmath]v=0E_1+\cdots+0E_{i-1}+v_iE_i+0E_{i+1}+\cdots+0E_n[/ilmath] (which can perhaps be written more clearly as [ilmath](0,\ldots,0,v_i,0,\ldots,0)[/ilmath]) where [ilmath]v_i\ne 0[/ilmath]
- Then by hypothesis: [ilmath]\left(\sum \alpha_j\varepsilon_j\right)(v)=0[/ilmath], so:
- [ilmath]\sum \alpha_j\varepsilon_j(v)=\sum \alpha_j\varepsilon_j\left(\sum_{k=1}^n v_kE_k\right)=\sum \alpha_jv_j[/ilmath], and [ilmath]v_j=0[/ilmath] for all [ilmath]j\ne i[/ilmath], so every term except the [ilmath]i[/ilmath]th vanishes
- We arrive at: [ilmath]\sum \alpha_j\varepsilon_j(v)=\alpha_iv_i[/ilmath], but as the LHS [ilmath]=0[/ilmath] we see we have:
- [ilmath]\alpha_iv_i=0[/ilmath]
- We know [ilmath]v_i\ne 0[/ilmath], so [ilmath]\alpha_iv_i=0[/ilmath] forces [ilmath]\alpha_i=0[/ilmath] (a field has no zero divisors). This contradicts our supposition that [ilmath]\alpha_i\ne 0[/ilmath], so we must in fact have [ilmath]\alpha_i=0[/ilmath]
- Since [ilmath]i[/ilmath] was arbitrary we see [ilmath]\alpha_i=0[/ilmath] for all [ilmath]i\in\{1,\ldots,n\}[/ilmath], as required
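Numerically, the linear independence is visible in the same sketch (again assuming NumPy and the hypothetical basis used above): the covector [ilmath]\varepsilon_i[/ilmath] is the [ilmath]i[/ilmath]th row of [ilmath]B^{-1}[/ilmath], and [ilmath]\varepsilon_i(E_j)=\delta_{ij}[/ilmath] is exactly the statement [ilmath]B^{-1}B=I[/ilmath], so applying [ilmath]\sum\alpha_j\varepsilon_j[/ilmath] to [ilmath]E_i[/ilmath] returns [ilmath]\alpha_i[/ilmath]:

<pre>
import numpy as np

# Same hypothetical basis as before: columns are E_1, E_2.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Rows of B^{-1} are the coordinate representations of eps_1, eps_2.
eps_rows = np.linalg.inv(B)

# eps_i(E_j) = delta_ij, i.e. B^{-1} B = I; so if sum_j a_j eps_j is the
# zero map, evaluating it at E_i gives a_i = 0 for every i.
assert np.allclose(eps_rows @ B, np.eye(2))
</pre>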
Notes
- ↑ Strictly it is the concrete-to-abstract isomorphism's inverse, as we take the concrete-to-abstract function to be of the form [ilmath]:\mathcal{K}^n\rightarrow V[/ilmath]. However as it's an isomorphism there is no problem here. The inverse is a bijective linear map, just as the concrete-to-abstract function itself is.
- ↑ Prove this! It's not far from the definition really: [ilmath]f(v)=g(v)[/ilmath] means [ilmath]\exists y\in\mathcal{K} [/ilmath] such that [ilmath](v,y)\in f\wedge (v,y)\in g[/ilmath].
- ↑ Prove this!
- ↑ This justifies passing from [ilmath]\sum (v_jk_j)\varepsilon_j(E_j)[/ilmath] to [ilmath]\left(\sum k_j\varepsilon_j\right)(v)[/ilmath]. It is easy to show:
- [ilmath]\sum (v_jk_j)\varepsilon_j(E_j)=\sum k_j\varepsilon_j(v_jE_j)[/ilmath] (by linearity of each [ilmath]\varepsilon_j[/ilmath]), then note:
- [ilmath]\varepsilon_j(v_jE_j)=v_j=\varepsilon_j(v)[/ilmath], so we can write:
- [ilmath]\sum (v_jk_j)\varepsilon_j(E_j)=\sum k_j\varepsilon_j(v)=\left(\sum k_j\varepsilon_j\right)(v)[/ilmath]