Point Mass Prior
A blog about R and statistics
22 March 2014
It’s been a while since my last post, which was on using the delta method in R with a specific application to finding the ‘x’ value that corresponds to the maximum/minimum value in a quadratic regression. This post will be about how to do the same thing in a slightly different way. Quadratic regression can be fit using a linear model of the form

$$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \epsilon_i$$

where $$\epsilon_i$$ are independent and identically distributed normal random variables with mean 0 and a variance of $$\sigma^2$$. However, if our concern is the ‘x’ value that provides the minimum/maximum value, and possibly the value of the response at that minimum/maximum, we can reformulate the model as

$$y_i = \theta_1 (x_i - \theta_2)^2 + \theta_3 + \epsilon_i$$

so that the x value that corresponds to the minimum/maximum is represented directly through the parameter $$\theta_2$$. The actual minimum/maximum value is also represented as a parameter in the model as $$\theta_3$$. In this case $$\theta_1$$ can be interpreted as half of the second derivative with respect to x, but that isn’t of as much interest here. Note that we can expand this model out to get the same form as the linear model, so it really is representing the same model, but notice that we no longer have a model that is linear in the parameters.
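To see that the two parameterizations agree, write the linear form as $$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \epsilon_i$$ and expand the reparameterized model:

$$\theta_1 (x_i - \theta_2)^2 + \theta_3 = \theta_1 x_i^2 - 2\theta_1\theta_2 x_i + (\theta_1\theta_2^2 + \theta_3)$$

Matching coefficients gives $$\beta_2 = \theta_1$$, $$\beta_1 = -2\theta_1\theta_2$$, and $$\beta_0 = \theta_1\theta_2^2 + \theta_3$$, so in particular $$\theta_2 = -\beta_1/(2\beta_2)$$, the usual vertex formula.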
To fit this we would need to use something other than `lm`. The natural choice in R is to use `nls`. We’ll look at an example of how to fit this model and get confidence intervals for the quantities of interest. We’ll use the same simulated data as my previous post so we can compare how the delta method and `nls` compare for this problem.
As a reminder, the exact model we fit previously was $$y_i = 10x_i - x_i^2 + \epsilon_i$$ where $$\epsilon_i \sim N(0, 8^2)$$, so to write it using the same form as our nonlinear model we have $$y_i = -(x_i - 5)^2 + 25 + \epsilon_i$$. So the maximum occurs at $$x = 5$$ and produces an output of 25 at that location.
set.seed(500)
n <- 30
x <- runif(n, 0, 10)
y <- -x * (x - 10) + rnorm(n, 0, 8)  # y = 0 + 10x - x^2 + error
Now we fit our model using `nls`. We need to provide starting values for the parameters since it fits using an iterative procedure. I provide some pretty bad starting values here, but it still fits just fine.
o <- nls(y ~ t1 * (x - t2)^2 + t3, start = list(t1 = 1, t2 = 1, t3 = 1))
Now we can look at the output we get from `summary(o)`:
## 
## Formula: y ~ t1 * (x - t2)^2 + t3
## 
## Parameters:
##    Estimate Std. Error t value Pr(>|t|)    
## t1   -1.040      0.239   -4.35  0.00017 ***
## t2    5.190      0.265   19.57  < 2e-16 ***
## t3   25.089      2.661    9.43  4.9e-10 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.71 on 27 degrees of freedom
## 
## Number of iterations to convergence: 8 
## Achieved convergence tolerance: 2.22e-07
We see that the estimated value at which the maximum occurs is 5.1903. If we go back to the delta method post we see that we obtained the same estimate. Another interesting point is that the standard error for this term is the same as we obtained using the delta method. In both cases we get a standard error of 0.2652.
We can easily obtain a confidence interval for this using `confint`:

confint(o)
## Waiting for profiling to be done...
##      2.5%   97.5%
## t1 -1.530 -0.5499
## t2  4.639  5.8815
## t3 19.650 30.5620
Now recall that when we used the delta method we relied on the asymptotic normality of the transformed estimator, so that previous interval, which went from 4.671 to 5.710, was based on a normal distribution assumption. When using `confint` with an nls object it profiles the fit and uses t-based quantiles, so the interval will be a little bit wider. Recall that we have the same estimate and the same standard error as when we used the delta method, so if we wanted we could get the same interval based on asymptotic normality as well. Alternatively, if you use `confint.default` it will use a normal distribution to create your confidence intervals:
##     2.5 %  97.5 %
## t1 -1.508 -0.5719
## t2  4.671  5.7101
## t3 19.874 30.3054
And here we see that we get the same confidence interval as when we used the asymptotic normality argument to get the confidence intervals for the delta method approach.
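As a quick sketch of that last point, the normal-based interval for `t2` can be computed directly from the `nls` estimate and standard error (reusing the simulated data from above):

```r
set.seed(500)
n <- 30
x <- runif(n, 0, 10)
y <- -x * (x - 10) + rnorm(n, 0, 8)

# Fit the reparameterized quadratic
o <- nls(y ~ t1 * (x - t2)^2 + t3, start = list(t1 = 1, t2 = 1, t3 = 1))

# Pull the estimate and standard error for t2 from the summary table
est <- summary(o)$coefficients["t2", "Estimate"]
se  <- summary(o)$coefficients["t2", "Std. Error"]

# Normal-based (Wald) interval -- should match the delta method interval
est + c(-1, 1) * qnorm(0.975) * se
```

This is exactly the interval `confint.default` produces for `t2`, since both are built from the same estimate and standard error with normal quantiles.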
9 February 2013
Somebody recently asked me about the delta method, and specifically the `deltamethod` function in the msm package. I thought I would write about that, and to motivate this we’ll look at an example. The example we’ll consider is a simple case where we fit a quadratic regression to some data. This means our model has the form

$$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \epsilon_i$$

where $$\epsilon_i$$ are independent and identically distributed normal random variables with mean 0 and a variance of $$\sigma^2$$.
To start we’ll generate some data such that we have roots at x=0 and x=10 and the quadratic is such that we have a maximum instead of a minimum.
set.seed(500)
n <- 30
x <- runif(n, 0, 10)
y <- -x * (x - 10) + rnorm(n, 0, 8)  # y = 0 + 10x - x^2 + error
We can plot the data to get a feel for it:
Now it might be that what we’re really interested in is the input value that gives us the maximum value for the response (on average). Let’s call that value $$x_{max}$$. Now if we knew the true parameters for this data we could figure out exactly where that maximum occurs. We know that for a quadratic function the maximum value occurs at $$x = -\beta_1/(2\beta_2)$$. In our specific case we have $$\beta_1 = 10$$ and $$\beta_2 = -1$$, so the maximum occurs at $$x = 5$$. Just eyeballing our plot, it doesn’t look like the quadratic that will be fit will give us a maximum that occurs exactly at $$x = 5$$. Let’s actually fit the quadratic regression and see what we get for the estimated value of $$x_{max}$$, which I will call $$\hat{x}_{max}$$.
# Estimate quadratic regression
o <- lm(y ~ x + I(x^2))
# View the output
summary(o)
## 
## Call:
## lm(formula = y ~ x + I(x^2))
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -14.26  -6.13  -1.49   7.62  13.87 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   -2.923      4.983   -0.59  0.56233    
## x             10.794      2.422    4.46  0.00013 ***
## I(x^2)        -1.040      0.239   -4.35  0.00017 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.71 on 27 degrees of freedom
## Multiple R-squared: 0.424,  Adjusted R-squared: 0.381 
## F-statistic: 9.93 on 2 and 27 DF,  p-value: 0.000585
# Make scatterplot and add the estimated curve
plot(x, y, main = "Scatterplot with estimated regression curve", xlim = c(0, 10))
curve(coef(o)[1] + coef(o)[2] * x + coef(o)[3] * x^2, col = "red", add = TRUE)
# Add a line at the theoretical maximum
abline(v = 5, col = "black")
# Estimate the xmax value
beta2 <- coef(o)["I(x^2)"]
beta1 <- coef(o)["x"]
estmax <- unname(-beta1/(2 * beta2))
# Add a line at the estimated maximum
abline(v = estmax, col = "blue", lty = 2)
legend("topleft", legend = c("True max", "Estimated max", "Estimated curve"),
       col = c("black", "blue", "red"), lty = c(1, 2, 1))
So our estimate of the value where the maximum occurs is 5.1903. This is pretty close, but it would still be nice to have some sort of interval to go along with our estimate. This is where the delta method can help us out. The delta method can be thought of as a way to get an estimated standard error for a transformation of estimated parameter values. In our case we’re interested in applying the function

$$g(\beta_1, \beta_2) = \frac{-\beta_1}{2\beta_2}$$

to our estimated parameters.
To perform the delta method we need to know a little bit of calculus. The method requires taking derivatives of our function of interest. Now this isn’t too bad to do in practice but not everybody that wants to perform an analysis will know how to take derivatives (or at least it might have been a long time since they’ve taken a derivative).
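To make that concrete, here is a sketch of the hand computation for our transformation. The gradient of $$g(\beta_1, \beta_2) = -\beta_1/(2\beta_2)$$ is $$\partial g/\partial\beta_1 = -1/(2\beta_2)$$ and $$\partial g/\partial\beta_2 = \beta_1/(2\beta_2^2)$$, and the delta method standard error is $$\sqrt{\nabla g^T V \nabla g}$$ where $$V$$ is the covariance matrix of the estimates:

```r
set.seed(500)
n <- 30
x <- runif(n, 0, 10)
y <- -x * (x - 10) + rnorm(n, 0, 8)
o <- lm(y ~ x + I(x^2))

b1 <- coef(o)["x"]
b2 <- coef(o)["I(x^2)"]

# Gradient of g(b1, b2) = -b1 / (2 * b2), evaluated at the estimates
grad <- c(-1 / (2 * b2), b1 / (2 * b2^2))

# Covariance matrix of (b1, b2) only
V <- vcov(o)[c("x", "I(x^2)"), c("x", "I(x^2)")]

# Delta method standard error: sqrt(grad' V grad)
se <- drop(sqrt(t(grad) %*% V %*% grad))
se  # should agree with what deltamethod reports below
```

This is the same calculation `deltamethod` performs for us, with the derivatives worked out symbolically instead of by hand.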
Luckily for us, we don’t have to do the delta method by hand, as long as we know the transformation of interest. The `deltamethod` function in the msm package provides a convenient way to get the estimated standard error of the transformation as long as we can provide:
1. The transformation of interest
2. The estimated parameter values
3. The covariance matrix of the estimated parameters
We already know (1), but we have to make sure we write it in the proper syntax for `deltamethod`. We can easily obtain (2) by using `coef` on our fitted model, and we can also easily obtain (3) by using `vcov` on the fitted model.
When writing the syntax for the transformation for the `deltamethod` function, you need to refer to the first parameter as `x1`, the second parameter as `x2`, and so on. So if I wanted to find the standard error for the sum of two parameters I would write that as `~ x1 + x2`. In our case our estimated parameters are from the output of `coef(o)`, so let’s sneak a peek at them to remind ourselves of the output order.
## (Intercept)           x      I(x^2) 
##      -2.923      10.794      -1.040
So in this case when writing our transformation we would refer to $$\beta_1$$ as `x2` and $$\beta_2$$ as `x3`. As a reminder, the transformation we applied was $$-\beta_1/(2\beta_2)$$, so the formula we want is `~ -x2 / (2 * x3)`.
library(msm)
standerr <- deltamethod(~-x2/(2 * x3), coef(o), vcov(o))
standerr
## [1] 0.2652
# Make a confidence interval
(ci <- estmax + c(-1, 1) * qnorm(0.975) * standerr)
## [1] 4.671 5.710
So we see that our confidence interval does contain the true value. We could also do a hypothesis test if we wanted to test against a certain value. Here we’ll test using a null hypothesis that the true $$x_{max} = 5$$.
# If we want to do a hypothesis test of Ho: xmax = 5 vs Ha: xmax != 5
z <- (estmax - 5)/standerr
# Calculate p-value
pval <- 2 * pnorm(-abs(z))
pval
## [1] 0.4729
Our p-value of 0.473 doesn’t allow us to reject the null hypothesis in this situation.
So we can see that it’s fairly easy to implement the delta method in R. Now this isn’t necessarily my favorite way to get intervals for transformations of parameters, but if you’re a frequentist then it can be quite useful.
12 November 2012
John Jackson and Jack Johnson were going head to head in a mayoral race. Ultimately John Jackson obtained 300 votes and Jack Johnson obtained 200 votes. While counting the votes they announced after each vote what the current results were. For example after they counted the first vote they announced something like:
“John Jackson: 1; Jack Johnson: 0”
and then after the second vote
“John Jackson: 1; Jack Johnson: 1”
and so on until they finally counted the last vote and announced
“John Jackson: 300; Jack Johnson: 200”
Not counting the very beginning when the results were “John: 0; Jack: 0” - What is the probability that at some point during the counting the result was a tie?
There is a nice solution to this that I will give in my next post. Try to figure it out in the meantime!
9 October 2012
My hackish way to get mathjax support on my blog
To get Mathjax support the easiest thing to do is to add the following to _include/themes/yourtheme/default.html where “yourtheme” is whichever theme you’re currently using.
I’m sure there is a better way to do this so that you don’t need to add this whenever you change themes but this is working for me for the time being.
However, using Maruku (the default markdown rendering engine with Jekyll), actually using math can be a pain, since it likes to turn underscores into `<em>` tags no matter where they are in the post. So `$$x_i$$` gets converted to `$$x<em>i$$`, which is not what we want.
To have a nice setup for using math in my posts I switched my markdown rendering engine from the default with Jekyll (Maruku) to kramdown. I just installed the kramdown gem
gem install kramdown
# Although you probably need to do...
sudo gem install kramdown
and then I needed to add

markdown: kramdown

to the _config.yml file at the top directory of my repository.
After that, using Mathjax is as simple as surrounding any math in double dollar signs, `$$ insert_math_here $$`, so that `$$x^2$$` gets rendered as $$x^2$$. If your dollar signs are standalone, not inline with other text, like this:
$$a^2 + b^2 = c^2$$
then it will get centered and displayed as “display math”.
Combining this with the fact that I write my posts using R Markdown so I can easily insert code and output into my posts makes the whole process a lot nicer.
I have heard good things about rdiscount as a markdown rendering engine, but I don’t think I’ll use it. It doesn’t appear to support the syntax I described in this post, and I rather like this syntax. kramdown appears to work well enough for my needs, so I think I’ll stick with that for now.
7 October 2012
A pointless exercise in code obfuscation
As my first actual post on this new blog I thought I would do something very silly and pointless. What better way to start off a new adventure than to write some very obfuscated code. The following is a mess and it is meant to be that way. To be honest I wrote this a while ago and I have a slightly decrypted copy that still doesn’t even make sense. Enjoy!
h=character;r=rep;a=b=h(0);p=options()$width%/%2-5;n=" ";j=r(toupper(substring(mode(a),4,4)),sum(r(5:9,2)+1)-3) k=r(5:9,2);k[4:5]=7;k=cumsum(k+1);j[k]=n;m=paste(h(1), h(1));s=c(0,k[-10])+1;j[c(16:17,24:26,32:33,46:47,53:55, 61:64,70:74)]=m;for(i in 1:10)a=c(a,r(m,p),j[s[i]:k[i]]) cat(c(n,a),sep=b)
## 
##  RRRRR
##  RRRRRR
##  RR  RRR
##  RR   RR
##  RR  RRR
##  RRRRR
##  RR  RR
##  RR  RR
##  RR   RR
##  RR   RR