Unbiasedness, Efficiency, and Sufficiency
1. **Stating the problem:** We have two estimators, \( l_1 \) and \( l_2 \), of the mean \( \theta = 0 \) of a distribution, and we must decide which is better by first checking unbiasedness and then comparing efficiency.
2. **Checking unbiasedness:**
- For \( l_1 = \frac{1}{2}X_1 + \frac{1}{6}X_2 + \frac{1}{3}X_3 \), the expected value is
$$ E[l_1] = \frac{1}{2}E[X_1] + \frac{1}{6}E[X_2] + \frac{1}{3}E[X_3] = \frac{1}{2}\cdot0 + \frac{1}{6}\cdot0 + \frac{1}{3}\cdot0 = 0 $$
so \( l_1 \) is unbiased.
- For \( l_2 = \frac{4X_1 - 2X_2 + X_3}{3} \), the expected value is
$$ E[l_2] = \frac{4E[X_1] - 2E[X_2] + E[X_3]}{3} = \frac{4\cdot0 - 2\cdot0 + 1\cdot0}{3} = 0 $$
so \( l_2 \) is also unbiased. (Note that the weights of each estimator sum to 1: \( \frac{1}{2} + \frac{1}{6} + \frac{1}{3} = 1 \) and \( \frac{4 - 2 + 1}{3} = 1 \), so both estimators are unbiased for any mean \( \theta \), not only \( \theta = 0 \). The simulation sketch below checks the \( \theta = 0 \) case empirically.)
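As a sanity check, here is a minimal Monte Carlo sketch. The underlying distribution is not specified in the problem, so a normal distribution with mean 0 and standard deviation 2 is assumed purely for illustration; unbiasedness only requires the correct mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 200_000

# Assumed stand-in distribution: iid N(0, 2^2); only the mean matters here.
X = rng.normal(loc=0.0, scale=2.0, size=(n_sims, 3))  # columns are X1, X2, X3

l1 = X[:, 0] / 2 + X[:, 1] / 6 + X[:, 2] / 3
l2 = (4 * X[:, 0] - 2 * X[:, 1] + X[:, 2]) / 3

# Both empirical means should be close to 0.
print(f"mean of l1: {l1.mean():+.4f}")
print(f"mean of l2: {l2.mean():+.4f}")
```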
3. **Comparing efficiency:** Of two unbiased estimators, the one with the smaller variance is the more efficient, so we compare variances (a simulation check follows the two calculations below).
Given the common standard deviation \( \sigma = 2 \), each \( Var(X_i) = \sigma^2 = 4 \).
- Variance of \( l_1 \):
$$ Var(l_1) = \left(\frac{1}{2}\right)^2 Var(X_1) + \left(\frac{1}{6}\right)^2 Var(X_2) + \left(\frac{1}{3}\right)^2 Var(X_3) = \frac{1}{4}\cdot4 + \frac{1}{36}\cdot4 + \frac{1}{9}\cdot4 = 1 + \frac{4}{36} + \frac{4}{9} = 1 + \frac{1}{9} + \frac{4}{9} = 1 + \frac{5}{9} = \frac{14}{9} \approx 1.5556 $$
- Variance of \( l_2 \):
$$ Var(l_2) = \frac{1}{9} [4^2 Var(X_1) + (-2)^2 Var(X_2) + (1)^2 Var(X_3)] = \frac{1}{9} [16\cdot4 + 4\cdot4 + 1\cdot4] = \frac{1}{9} [64 + 16 + 4] = \frac{84}{9} = \frac{28}{3} \approx 9.3333 $$
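The same kind of simulation confirms both variances. The stand-in normal distribution is again an assumption for illustration only; the variances depend solely on the weights and \( \sigma^2 = 4 \).

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims = 500_000

# Stand-in distribution with sd = 2; Var(l) depends only on the weights and sigma^2.
X = rng.normal(loc=0.0, scale=2.0, size=(n_sims, 3))

l1 = X[:, 0] / 2 + X[:, 1] / 6 + X[:, 2] / 3
l2 = (4 * X[:, 0] - 2 * X[:, 1] + X[:, 2]) / 3

print(f"Var(l1): {l1.var():.4f}  (theory: {14 / 9:.4f})")
print(f"Var(l2): {l2.var():.4f}  (theory: {28 / 3:.4f})")
```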
4. **Conclusion:** Both estimators are unbiased, but \( l_1 \) has the smaller variance: \( Var(l_1) = \frac{14}{9} < \frac{28}{3} = Var(l_2) \), a relative efficiency of \( \frac{Var(l_2)}{Var(l_1)} = 6 \). Therefore, \( l_1 \) is the better (more efficient) estimator.
---
5. **Stating the problem for (b):** We have a sample from a distribution with PDF
$$ f(x, \theta) = \theta x^{\theta - 1}, \quad 0 < x < 1, \quad \theta > 0 $$
We need to (i) find a sufficient statistic for \( \theta \) and (ii) find the method of moments estimator for \( \theta \).
6. **(b)(i) Finding sufficient statistic:**
- The likelihood function for sample \( X_1, \ldots, X_n \) is
$$ L(\theta) = \prod_{i=1}^n \theta X_i^{\theta - 1} = \theta^n \prod_{i=1}^n X_i^{\theta - 1} = \theta^n \left( \prod_{i=1}^n X_i \right)^{\theta - 1} $$
- Writing the likelihood as \( L(\theta) = g(T, \theta) \, h(x_1, \ldots, x_n) \) with \( T = \prod_{i=1}^n X_i \), \( g(T, \theta) = \theta^n T^{\theta - 1} \), and \( h \equiv 1 \), we see that the joint PDF depends on the data only through \( T \).
- By the factorization theorem, \( T = \prod_{i=1}^n X_i \) is a sufficient statistic for \( \theta \); the numerical check below illustrates this.
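As a small numeric illustration of sufficiency: two different samples with the same product \( T \) produce identical likelihoods. The sample values here are made up purely for the demonstration.

```python
import numpy as np

def likelihood(theta, xs):
    """L(theta) = theta^n * (prod x_i)^(theta - 1) for the PDF theta * x^(theta - 1)."""
    xs = np.asarray(xs)
    return theta ** len(xs) * np.prod(xs) ** (theta - 1)

# Different samples, same product: 0.2 * 0.9 = 0.6 * 0.3 = 0.18
sample_a = [0.2, 0.9]
sample_b = [0.6, 0.3]

# The two likelihood values agree at every theta, as sufficiency predicts.
for theta in (0.5, 1.0, 2.0, 5.0):
    print(theta, likelihood(theta, sample_a), likelihood(theta, sample_b))
```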
7. **(b)(ii) Method of moments estimator:**
- The population mean is
$$ E[X] = \int_0^1 x \theta x^{\theta - 1} dx = \theta \int_0^1 x^{\theta} dx = \theta \cdot \frac{1}{\theta + 1} = \frac{\theta}{\theta + 1} $$
- The sample mean is \( \bar{X} = \frac{1}{n} \sum_{i=1}^n X_i \). Equating it to the population mean gives
$$ \bar{X} = \frac{\theta}{\theta + 1} $$
- Solve for \( \theta \):
$$ \bar{X}(\theta + 1) = \theta \implies \bar{X} \theta + \bar{X} = \theta \implies \bar{X} = \theta - \bar{X} \theta = \theta (1 - \bar{X}) $$
$$ \Rightarrow \theta = \frac{\bar{X}}{1 - \bar{X}} $$
- This is the method of moments estimator for \( \theta \); it is always well defined, since \( 0 < X_i < 1 \) implies \( 0 < \bar{X} < 1 \). A simulation sketch follows below.
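Here is a minimal simulation sketch of the estimator; the true value \( \theta = 3 \) and the sample size are arbitrary illustrative choices. Since the CDF is \( F(x) = x^\theta \) on \( (0, 1) \), samples can be drawn by inverse-CDF sampling, \( X = U^{1/\theta} \).

```python
import numpy as np

rng = np.random.default_rng(42)
theta_true = 3.0   # hypothetical true parameter, chosen for illustration
n = 10_000

# Inverse-CDF sampling: F(x) = x^theta on (0, 1), so X = U^(1/theta).
u = rng.uniform(size=n)
x = u ** (1 / theta_true)

xbar = x.mean()
theta_hat = xbar / (1 - xbar)  # method of moments estimator
print(f"theta_hat = {theta_hat:.3f}  (true value: {theta_true})")
```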
**Final answers:**
(a) \( l_1 \) is the better estimator: both are unbiased, but \( l_1 \) has the smaller variance \( \left( \frac{14}{9} \text{ vs. } \frac{28}{3} \right) \).
(b)(i) The sufficient statistic is \( T = \prod_{i=1}^n X_i \).
(b)(ii) The method of moments estimator for \( \theta \) is \( \hat{\theta} = \frac{\bar{X}}{1 - \bar{X}} \).