2 Heteroskedasticity

Suppose the noise variance is itself variable, changing with x. For example, the simulation below draws n = 150 points from the model Y = 3 - 2X + ε, where ε is Gaussian with mean 0 and standard deviation 1 + x^2/2, so the noise grows rapidly as we move away from x = 0.

Figure 2: Scatter-plot of n = 150 data points from the above model. (Here X is Gaussian with mean 0 and variance 9.) Grey: true regression line. Dashed: ordinary least squares regression line.


      # Simulate the model: X ~ N(0,9); Y = 3 - 2X + Gaussian noise whose
      # standard deviation (rnorm's third argument) is 1 + x^2/2
      n = 150
      x = rnorm(n, 0, 3)
      y = 3 - 2*x + rnorm(n, 0, sapply(x, function(x) { 1 + 0.5*x^2 }))
      # Plot the data
      plot(x, y)
      # Plot the true regression line
      abline(a = 3, b = -2, col = "grey")
      # Fit by ordinary least squares
      fit.ols = lm(y ~ x)
      # Plot that line
      abline(fit.ols, lty = "dashed")

      





      # Plot the residuals, and the squared residuals, against x side by side
      par(mfrow = c(1, 2))
      plot(x, residuals(fit.ols))
      plot(x, (residuals(fit.ols))^2)
      # Restore the usual one-panel layout
      par(mfrow = c(1, 1))




Figure 3: Residuals (left) and squared residuals (right) of the ordinary least squares regression as a function of x. Note the much greater range of the residuals at large absolute values of x than towards the center; this changing dispersion is a sign of heteroskedasticity.
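
The visual impression from Figure 3 can also be checked numerically. As a quick diagnostic (not part of the notes' own code, and assuming the lmtest package is installed), a Breusch-Pagan test regresses the squared residuals on a variance covariate; since the dispersion here grows symmetrically in x, testing against x^2 is the natural choice, and it should strongly reject constant variance:

      # Hypothetical check, assuming install.packages("lmtest") has been run
      library(lmtest)
      # Breusch-Pagan test of the OLS fit, with the squared residuals
      # regressed on x^2; a tiny p-value signals heteroskedasticity
      bptest(fit.ols, varformula = ~ I(x^2))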



      # Generate more random samples from the same model and the same x values,
      # but different y values
      # Inputs: number of samples to generate (must match length(x))
      # Presumes: x exists and is defined outside this function
      # Outputs: errors in linear regression estimates
      ols.heterosked.example = function(n) {
        y = 3 - 2*x + rnorm(n, 0, sapply(x, function(x) { 1 + 0.5*x^2 }))
        fit.ols = lm(y ~ x)
        # Return the errors
        return(fit.ols$coefficients - c(3, -2))
      }

      # Calculate average-case errors in linear regression estimates (SD of
      # intercept and slope)
      # Inputs: number of samples per replication, number of replications
      #         (defaults to 10,000)
      # Calls: ols.heterosked.example
      # Outputs: standard deviation of intercept and slope
      ols.heterosked.error.stats = function(n, m = 10000) {
        ols.errors.raw = t(replicate(m, ols.heterosked.example(n)))
        # transpose gives us a matrix with named columns
        intercept.sd = sd(ols.errors.raw[, "(Intercept)"])
        slope.sd = sd(ols.errors.raw[, "x"])
        return(list(intercept.sd = intercept.sd, slope.sd = slope.sd))
      }
      
      

Calling, for example, ols.heterosked.error.stats(n=150) summarizes how far the OLS estimates stray from the true coefficients; one such run gave, for the intercept,

      $intercept.sd
      [1] 0.6180515


2.1 Weighted Least Squares as a Solution to Heteroskedasticity

Under heteroskedasticity, ordinary least squares is still unbiased, but it is inefficient: it pays as much attention to the wildly noisy observations far from x = 0 as to the quiet, informative ones near it. Weighted least squares instead minimizes the weighted residual sum of squares, sum_i w_i (y_i - b_0 - b_1 x_i)^2, choosing the weights w_i to shrink where the noise is large. Since the noise level at x in this simulation is 1 + x^2/2, the code below re-fits the model with weights = 1/(1 + 0.5*x^2).
      # Plot the data
      plot(x, y)
      # Plot the true regression line
      abline(a = 3, b = -2, col = "grey")
      # Fit by ordinary least squares
      fit.ols = lm(y ~ x)
      # Plot that line
      abline(fit.ols, lty = "dashed")
      # Fit by weighted least squares, down-weighting the high-noise points
      fit.wls = lm(y ~ x, weights = 1/(1 + 0.5*x^2))
      # Plot the WLS line
      abline(fit.wls, lty = "dotted")

      


Figure 6: Figure 2, plus the weighted least squares regression line (dotted)
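
To connect the lm call back to the weighted criterion above, here is a minimal check (not in the original notes) that solving the weighted normal equations (X'WX)b = X'Wy by hand reproduces the same coefficients:

      # Hand-rolled WLS, for comparison with coef(fit.wls)
      w = 1/(1 + 0.5*x^2)   # the same weights passed to lm above
      X = cbind(1, x)       # design matrix with an intercept column
      W = diag(w)           # n x n diagonal matrix of weights
      # Solve (X'WX) b = X'Wy; this should match coef(fit.wls)
      solve(t(X) %*% W %*% X, t(X) %*% W %*% y)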





      ### As the previous two functions, but with weighted regression
      # Generate random sample from model (with fixed x), fit by weighted
      # least squares
      # Inputs: number of samples (must match length(x))
      # Presumes: x fixed outside function
      # Outputs: errors in parameter estimates
      wls.heterosked.example = function(n) {
        y = 3 - 2*x + rnorm(n, 0, sapply(x, function(x) { 1 + 0.5*x^2 }))
        fit.wls = lm(y ~ x, weights = 1/(1 + 0.5*x^2))
        # Return the errors
        return(fit.wls$coefficients - c(3, -2))
      }

      # Calculate standard errors in parameter estimates over many replications
      # Inputs: number of samples per replication, number of replications
      #         (defaults to 10,000)
      # Calls: wls.heterosked.example
      # Outputs: standard deviation of estimated intercept and slope
      wls.heterosked.error.stats = function(n, m = 10000) {
        wls.errors.raw = t(replicate(m, wls.heterosked.example(n)))
        # transpose gives us a matrix with named columns
        intercept.sd = sd(wls.errors.raw[, "(Intercept)"])
        slope.sd = sd(wls.errors.raw[, "x"])
        return(list(intercept.sd = intercept.sd, slope.sd = slope.sd))
      }
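
As a usage sketch (not in the original notes; exact numbers vary from run to run), the two error-stats functions can be called side by side, and the WLS standard deviations should generally come out noticeably smaller than the OLS ones:

      # Compare the sampling variability of OLS and WLS at the same n
      ols.heterosked.error.stats(n = 150)
      wls.heterosked.error.stats(n = 150)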



2.2 Some Explanations for Weighted Least Squares




