Given a set of sample data $\{{N}_{k}\}$, the ML estimation of the parameter
vector $\bm{\theta}$ is done by maximizing the likelihood function:

$$L(\bm{\theta}\mid\{{N}_{k}\})=\prod _{k}p({N}_{k}\mid{\lambda}_{k}(\bm{\theta}),r),$$ 

(2.31) 
where $p(N\mid\lambda ,r)$ is the pdf of the sample value under the adopted
noise model (Equation 2.30). Mathematically equivalent,
but more convenient in practice, is to maximize the log-likelihood function:

$$\ell(\bm{\theta}\mid\{{N}_{k}\})=\sum _{k}\ln p({N}_{k}\mid{\lambda}_{k}(\bm{\theta}),r).$$ 

(2.32) 
Using the modified Poissonian model, Equation 2.30, we have:

$$\ell(\bm{\theta}\mid\{{N}_{k}\})=\text{const}+\sum _{k}\left[({N}_{k}+{r}^{2})\ln\bigl({\lambda}_{k}(\bm{\theta})+{r}^{2}\bigr)-{\lambda}_{k}(\bm{\theta})\right],$$ 

(2.33) 
where the additive constant absorbs all terms that do not depend on
$\bm{\theta}$. (Remember that $r$ is never one of the free model
parameters.) The maximum of Equation 2.33 is obtained by solving the $n$
simultaneous likelihood equations

$$\frac{\partial \ell(\bm{\theta}\mid\{{N}_{k}\})}{\partial \bm{\theta}}=\bm{0}.$$ 

(2.34) 
Using Equation 2.33, these equations become:

$$\sum _{k}\frac{{N}_{k}-{\lambda}_{k}(\bm{\theta})}{{\lambda}_{k}(\bm{\theta})+{r}^{2}}\,\frac{\partial {\lambda}_{k}}{\partial \bm{\theta}}=\bm{0}.$$ 

(2.35) 
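As an illustration, the likelihood equation (Equation 2.35) can be solved numerically. The sketch below assumes a hypothetical one-parameter linear model $\lambda_k(\theta)=\theta\,t_k$ with synthetic data $(t_k, N_k)$, for which the score function is strictly decreasing in $\theta$, so simple bisection suffices; none of these names or values come from the text.

```python
import math

def score(theta, samples, r):
    # Left-hand side of the likelihood equation (Eq. 2.35) for a
    # hypothetical one-parameter model lam_k(theta) = theta * t_k:
    #   sum_k (N_k - lam_k) / (lam_k + r^2) * d(lam_k)/d(theta)
    return sum((N - theta * t) / (theta * t + r**2) * t for t, N in samples)

def ml_estimate(samples, r, lo=1e-6, hi=1e6, tol=1e-10):
    # The score is strictly decreasing in theta for this model,
    # so its unique root can be bracketed and found by bisection.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if score(mid, samples, r) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Synthetic counts N_k drawn near lam_k = 3.0 * t_k, with read noise r = 2
samples = [(1.0, 3), (2.0, 7), (3.0, 8), (4.0, 13), (5.0, 16)]
theta_hat = ml_estimate(samples, r=2.0)
```

Note that, consistent with Equation 2.34, the estimate is the point where the score vanishes; for a multi-parameter model the same condition would be solved component-wise, typically by a Newton-type iteration rather than bisection.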