Poisson mixture

\newcommand{\P}[2][]{\mathbb{P}#1{\left[{#2}\right]} } \newcommand{\Pcond}[3][]{\mathbb{P}#1{\left[{#2}\!\ \middle\vert\!\ {#3}\right]} } \newcommand{\Plcond}[3][]{\Pcond[#1]{#2}{#3} } \newcommand{\Prcond}[3][]{\Pcond[#1]{#2}{#3} }
\newcommand{\E}[1]{ {\mathbb{E}{\left[{#1}\right]} } } \newcommand{\Mdm}[1]{\text{Mdm}{\left({#1}\right) } } \newcommand{\Var}[1]{\text{Var}{\left({#1}\right) } } \newcommand{\ncr}[2]{ \vphantom{C}^{#1}\!C_{#2} }
Stub grade: A**
This page is a stub
This page is a stub, so it contains minimal information and is on a to-do list for being expanded. The message provided is:
Warning: The notation and terminology used here were developed by me for a project; they have not been researched yet.

Before we start, I must point out something which may, without deeper thought, appear to be a Poisson mixture. Recall that:

  • The sum of two independent Poisson-distributed random variables is itself Poisson distributed, and
  • TODO: Link here
    If we have a Poisson distribution and each of its events is noticed independently, i.i.d. and BORV(p), then the observed process is also a Poisson distribution
    • That is, let X\sim\text{Poi}(\lambda) and let (X_i)_{i\in\mathbb{N} } be the events detected by this "process". Let (D_i)_{i\in\mathbb{N} } be the detection of each event, i.i.d. with \forall i\in\mathbb{N}\big[D_i\sim\text{Borv}(p)\big]; then the number of detected events, Y , satisfies Y\sim\text{Poi}(p\lambda)
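This thinning claim can be checked empirically. A minimal sketch, assuming NumPy is available (the values of \lambda and p are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, p = 10.0, 0.3
n_trials = 200_000

# Sample X ~ Poi(lam), then keep each of the X events independently
# with probability p (Bernoulli/BORV(p) detection of each event).
x = rng.poisson(lam, size=n_trials)
y = rng.binomial(x, p)  # number of detected events per trial

# If the thinned process is Poi(p * lam), its mean and variance
# should both be close to p * lam = 3.0.
print(y.mean(), y.var())
```

Both printed values should be close to 3.0, consistent with Y\sim\text{Poi}(p\lambda).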

Definition

Let (\lambda_i)_{i\eq 1}^n\subseteq\mathbb{R}_{\ge 0} be given, and let \big(X_i\sim\text{Poi}(\lambda_i)\big)_{i\eq 1}^n be a sequence of \mathbb{N} -valued Poisson-distributed random variables. Then:

Let \mathcal{C} be an N -valued random variable, where N:\eq\{1,2,3,\ldots,n\}\subseteq\mathbb{N} ; next

  • Define X to be an \mathbb{N} -valued random variable as follows:
    • for any k\in\mathbb{N} , define: \P{X\eq k}:\eq \sum^n_{i\eq 1}\Big(\P{\mathcal{C}\eq i}\cdot\Pcond{X_i \eq k}{\mathcal{C}\eq i}\Big) - which is just a standard composition
      • Notice: \P{X\eq k}\eq\sum^n_{i\eq 1}\Big(\P{\mathcal{C}\eq i}\cdot\P{X_i\eq k}\Big)
        Justification: provided each X_i is independent of \mathcal{C} , we have \Pcond{X_i\eq k}{\mathcal{C}\eq i}\eq\P{X_i\eq k}
        \eq \sum^n_{i\eq 1}\left(\P{\mathcal{C}\eq i}\cdot\frac{e^{-\lambda_i}\cdot \lambda_i^k}{k!} \right)
        \eq \frac{1}{k!}\sum^n_{i\eq 1}\Big(e^{-\lambda_i}\cdot\P{\mathcal{C}\eq i}\cdot \lambda_i^k\Big)
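The composed p.m.f. above can be evaluated directly. A minimal sketch, assuming NumPy and SciPy are available (the weights \P{\mathcal{C}\eq i} and rates \lambda_i are illustrative):

```python
import numpy as np
from scipy.stats import poisson

lams = np.array([2.0, 5.0, 9.0])     # (lambda_i), illustrative rates
weights = np.array([0.5, 0.3, 0.2])  # P[C = i], must sum to 1

def mixture_pmf(k):
    # P[X = k] = sum_i P[C = i] * e^{-lambda_i} * lambda_i^k / k!
    return np.sum(weights * poisson.pmf(k, lams))

# Sanity check: summing over k (truncated well past the largest rate)
# should give 1, since the mixture is a genuine p.m.f. on N.
total = sum(mixture_pmf(k) for k in range(200))
print(total)
```

The printed total is 1 up to floating-point and truncation error; similarly \E{X}\eq\sum_i\P{\mathcal{C}\eq i}\lambda_i .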

The idea is as follows: first let (x_i)_{i\eq 1}^n be a sample of the (X_i)_{i\eq 1}^n random variables, and let c be a sample of \mathcal{C} ; then x , our sample of X , is:

  • x:\eq x_c
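This two-stage sampling procedure (draw c , then report x_c ) can be sketched directly. A minimal sketch, assuming NumPy (rates and weights illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
lams = np.array([2.0, 8.0])     # (lambda_i), illustrative rates
weights = np.array([0.4, 0.6])  # P[C = i]

def sample_mixture(size):
    # Stage 1: draw the component index c ~ C for each sample.
    c = rng.choice(len(lams), size=size, p=weights)
    # Stage 2: report x_c, a draw from the chosen component Poi(lambda_c).
    return rng.poisson(lams[c])

xs = sample_mixture(100_000)
print(xs.mean())  # should be near 0.4*2 + 0.6*8 = 5.6
```

The sample mean approaches \sum_i\P{\mathcal{C}\eq i}\lambda_i , as expected for a mixture.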

Special case: n\eq 2

Let \P{\mathcal{C}\eq 1}:\eq p, thus \P{\mathcal{C}\eq 2}\eq 1-p, then:

  • \P{X\eq k}\eq\frac{1}{k!}\Big(p e^{-\lambda_1} \lambda_1^k+(1-p)e^{-\lambda_2}\lambda_2^k\Big)

Finding the parameters from data would be an optimisation problem (e.g. minimising an error measure, or maximising the likelihood, over the 3 parameters here: \lambda_1,\ \lambda_2\in\mathbb{R}_{\ge 0} and p\in[0,1]\subseteq\mathbb{R} ).
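One way to pose that optimisation is maximum likelihood. A minimal sketch with SciPy, not a definitive method: the synthetic data, starting point, and bounds below are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(2)

# Synthetic data from a known two-component mixture, for illustration only.
true_p, true_l1, true_l2 = 0.3, 2.0, 10.0
c = rng.random(5000) < true_p
data = np.where(c, rng.poisson(true_l1, 5000), rng.poisson(true_l2, 5000))

def neg_log_likelihood(theta):
    # theta = (p, lambda_1, lambda_2); mixture p.m.f. from the n = 2 case.
    p, l1, l2 = theta
    pmf = p * poisson.pmf(data, l1) + (1 - p) * poisson.pmf(data, l2)
    return -np.sum(np.log(pmf + 1e-300))  # guard against log(0)

res = minimize(neg_log_likelihood, x0=[0.5, 1.0, 5.0],
               bounds=[(1e-6, 1 - 1e-6), (1e-6, None), (1e-6, None)])
print(res.x)  # fitted [p, lambda_1, lambda_2]
```

With well-separated rates the fit should land near the true parameters; note the labels of the two components can swap, since the likelihood is symmetric under (p,\lambda_1,\lambda_2)\mapsto(1-p,\lambda_2,\lambda_1) .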

Notes