Monday, March 2, 2020
Maximum Likelihood Estimation Examples
Suppose that we have a random sample from a population of interest. We may have a theoretical model for the way that the population is distributed. However, there may be several population parameters of which we do not know the values. Maximum likelihood estimation is one way to determine these unknown parameters. The basic idea behind maximum likelihood estimation is that we determine the values of these unknown parameters in such a way as to maximize an associated joint probability density function or probability mass function. We will see this in more detail in what follows. Then we will calculate some examples of maximum likelihood estimation.

Steps for Maximum Likelihood Estimation

The above discussion can be summarized by the following steps:

1. Start with a sample of independent random variables X₁, X₂, . . ., Xₙ from a common distribution, each with probability density function f(x; θ₁, . . ., θₖ). The thetas are unknown parameters.
2. Since our sample is independent, the probability of obtaining the specific sample that we observe is found by multiplying our probabilities together. This gives us a likelihood function L(θ₁, . . ., θₖ) = f(x₁; θ₁, . . ., θₖ) f(x₂; θ₁, . . ., θₖ) . . . f(xₙ; θ₁, . . ., θₖ) = Π f(xᵢ; θ₁, . . ., θₖ).
3. Next, we use calculus to find the values of theta that maximize our likelihood function L. More specifically, we differentiate the likelihood function L with respect to θ if there is a single parameter. If there are multiple parameters, we calculate partial derivatives of L with respect to each of the theta parameters.
4. To continue the process of maximization, set the derivative of L (or the partial derivatives) equal to zero and solve for theta.
5. We can then use other techniques (such as a second derivative test) to verify that we have found a maximum for our likelihood function.

Example

Suppose we have a package of seeds, each of which has a constant probability p of success of germination. We plant n of these and count the number of those that sprout. Assume that each seed sprouts independently of the others. How do we determine the maximum likelihood estimator of the parameter p?

We begin by noting that each seed is modeled by a Bernoulli distribution with a success probability of p.
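Before working through the calculus, it can help to see the maximization numerically. The following is a minimal Python sketch using a small made-up sample of sprout indicators (the data values are hypothetical); it simply evaluates the likelihood L(p) on a grid of candidate values of p and reports where the function peaks.

```python
import numpy as np

# Hypothetical sprout data: 1 = germinated, 0 = did not germinate.
x = np.array([1, 1, 0, 1, 0, 1, 1, 1, 0, 1])
n = len(x)

# Evaluate the Bernoulli likelihood L(p) = p^(sum of x_i) * (1 - p)^(n - sum of x_i)
# on a fine grid of candidate p values strictly between 0 and 1.
p_grid = np.linspace(0.001, 0.999, 999)
likelihood = p_grid ** x.sum() * (1 - p_grid) ** (n - x.sum())

# The grid point with the largest likelihood is the numerical estimate.
p_hat = p_grid[np.argmax(likelihood)]
print(f"grid maximizer: {p_hat:.3f}, sample proportion: {x.sum() / n:.3f}")
```

The grid maximizer lands on the sample proportion of sprouted seeds, which is exactly what the derivation below establishes in general.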
We let X be either 0 or 1, and the probability mass function for a single seed is f(x; p) = p^x (1 − p)^(1 − x). Our sample consists of n different Xᵢ, each of which has a Bernoulli distribution. The seeds that sprout have Xᵢ = 1 and the seeds that fail to sprout have Xᵢ = 0.

The likelihood function is given by:

L(p) = Π p^xᵢ (1 − p)^(1 − xᵢ)

We see that it is possible to rewrite the likelihood function by using the laws of exponents:

L(p) = p^Σxᵢ (1 − p)^(n − Σxᵢ)

Next we differentiate this function with respect to p. We assume that the values for all of the Xᵢ are known, and hence are constant. To differentiate the likelihood function we need to use the product rule along with the power rule:

L′(p) = Σxᵢ p^(Σxᵢ − 1) (1 − p)^(n − Σxᵢ) − (n − Σxᵢ) p^Σxᵢ (1 − p)^(n − 1 − Σxᵢ)

We rewrite some of the negative exponents and have:

L′(p) = (1/p) Σxᵢ p^Σxᵢ (1 − p)^(n − Σxᵢ) − 1/(1 − p) (n − Σxᵢ) p^Σxᵢ (1 − p)^(n − Σxᵢ)
= [(1/p) Σxᵢ − 1/(1 − p) (n − Σxᵢ)] p^Σxᵢ (1 − p)^(n − Σxᵢ)

Now, in order to continue the process of maximization, we set this derivative equal to zero and solve for p:

0 = [(1/p) Σxᵢ − 1/(1 − p) (n − Σxᵢ)] p^Σxᵢ (1 − p)^(n − Σxᵢ)

Since p and (1 − p) are nonzero, we have that

0 = (1/p) Σxᵢ − 1/(1 − p) (n − Σxᵢ).

Multiplying both sides of the equation by p(1 − p) gives us:

0 = (1 − p) Σxᵢ − p(n − Σxᵢ).

We expand the right hand side and see:

0 = Σxᵢ − pΣxᵢ − pn + pΣxᵢ = Σxᵢ − pn.

Thus Σxᵢ = pn and p̂ = (1/n)Σxᵢ. This means that the maximum likelihood estimator of p is a sample mean. More specifically, it is the sample proportion of the seeds that germinated. This is perfectly in line with what intuition would tell us: to estimate the proportion of seeds in the population that will germinate, we use the proportion that germinated in our sample.

Modifications to the Steps

There are some modifications to the above list of steps. For example, as we have seen above, it is typically worthwhile to spend some time using algebra to simplify the expression of the likelihood function. The reason for this is to make the differentiation easier to carry out.

Another change to the above list of steps is to consider natural logarithms. The maximum for the function L will occur at the same point as it will for the natural logarithm of L. Thus maximizing ln L is equivalent to maximizing the function L. Many times, due to the presence of exponential functions in L, taking the natural logarithm of L will greatly simplify some of our work.

Example

We see how to use the natural logarithm by revisiting the example from above. We begin with the likelihood function:

L(p) = p^Σxᵢ (1 − p)^(n − Σxᵢ).

We then use our logarithm laws and see that:

R(p) = ln L(p) = Σxᵢ ln p + (n − Σxᵢ) ln(1 − p).

We already see that the derivative is much easier to calculate:

R′(p) = (1/p) Σxᵢ − 1/(1 − p) (n − Σxᵢ).

Now, as before, we set this derivative equal to zero and multiply both sides by p(1 − p):

0 = (1 − p) Σxᵢ − p(n − Σxᵢ).

We solve for p and find the same result as before.

The use of the natural logarithm of L(p) is helpful in another way. It is much easier to calculate a second derivative of R(p) to verify that we truly do have a maximum at the point p̂ = (1/n)Σxᵢ.
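The log-likelihood is also the form most convenient for numerical optimization. Here is a minimal sketch reusing the hypothetical sprout data from the earlier snippet; SciPy's minimize_scalar performs minimization, so we hand it the negative of R(p).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# The same hypothetical sprout data as in the earlier sketch.
x = np.array([1, 1, 0, 1, 0, 1, 1, 1, 0, 1])
n = len(x)

def neg_log_likelihood(p):
    # R(p) = (sum of x_i) ln p + (n - sum of x_i) ln(1 - p); negate it
    # because the optimizer below minimizes rather than maximizes.
    return -(x.sum() * np.log(p) + (n - x.sum()) * np.log(1 - p))

# Search strictly inside (0, 1) to keep the logarithms finite.
result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(f"numerical MLE: {result.x:.4f}, closed form (1/n) * sum of x_i: {x.sum() / n:.4f}")
```

Both routes agree: the optimizer converges to the same sample proportion that the calculus produced.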
Example

For another example, suppose that we have a random sample X₁, X₂, . . ., Xₙ from a population that we are modelling with an exponential distribution. The probability density function for one random variable is of the form

f(x) = θ^(−1) e^(−x/θ).

The likelihood function is given by the joint probability density function. This is a product of several of these density functions:

L(θ) = Π θ^(−1) e^(−xᵢ/θ) = θ^(−n) e^(−Σxᵢ/θ)

Once again it is helpful to consider the natural logarithm of the likelihood function. Differentiating this will require less work than differentiating the likelihood function:

R(θ) = ln L(θ) = ln[θ^(−n) e^(−Σxᵢ/θ)]

We use our laws of logarithms and obtain:

R(θ) = −n ln θ − Σxᵢ/θ

We differentiate with respect to θ and have:

R′(θ) = −n/θ + Σxᵢ/θ²

Set this derivative equal to zero and we see that:

0 = −n/θ + Σxᵢ/θ².

Multiply both sides by θ² and the result is:

0 = −nθ + Σxᵢ.

Now use algebra to solve for θ:

θ̂ = (1/n)Σxᵢ.

We see from this that the sample mean is what maximizes the likelihood function. The parameter θ that fits our model should simply be the mean of all of our observations.

Connections

There are other types of estimators. One alternate type of estimation is called an unbiased estimator. For this type, we must calculate the expected value of our statistic and determine if it matches a corresponding parameter.
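The exponential MLE above happens to pass this check: since each Xᵢ has mean θ, E[θ̂] = (1/n)ΣE[Xᵢ] = θ, so the estimator is unbiased. Here is a minimal simulation sketch that confirms this empirically; the true θ, the sample size, and the replication count are all arbitrary choices made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 2.5      # true parameter (arbitrary choice for this demonstration)
n = 50           # observations per simulated sample
reps = 100_000   # number of simulated samples

# Draw many exponential samples with mean theta; the MLE of theta for
# each sample is that sample's mean, as derived above.
samples = rng.exponential(scale=theta, size=(reps, n))
mle_estimates = samples.mean(axis=1)

# For an unbiased estimator, the average of the estimates should be
# close to the true parameter value.
print(f"true theta: {theta}, average of the MLEs: {mle_estimates.mean():.4f}")
```

Not every maximum likelihood estimator is unbiased in this way, which is why it is worth comparing the two notions of estimation.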