Due by Thursday, November 21, 2019
Answers may be longer than I would deem sufficient on an exam. Some might vary slightly based on points of interest, examples, or personal experience. These suggested answers are designed to give you both the answer and a short explanation of why it is the answer.
In your own words, describe what the “dummy variable trap” means. What precisely is the problem, and what is the standard way to prevent it?
The dummy variable trap is a specific form of perfect multicollinearity, which prevents an OLS regression from making logical sense. If two (or more) variables can be expressed as an exact linear function of one another, we cannot estimate a regression. In terms of dummy variables, we could have a dummy variable for each category an observation $i$ can fall into, such as the four seasons:

- $Spring_i$ (=1 if $i$ is in Spring, =0 if not)
- $Summer_i$ (=1 if $i$ is in Summer, =0 if not)
- $Fall_i$ (=1 if $i$ is in Fall, =0 if not)
- $Winter_i$ (=1 if $i$ is in Winter, =0 if not)

If we were to include all four dummy variables in the regression, each and every observation will have the following property:

$$Spring_i + Summer_i + Fall_i + Winter_i = 1$$

That is, for each and every observation $i$, the sum of the four dummy variable values must always equal 1 (because every observation must be in either Spring, Summer, Fall, or Winter).
Our constant (formerly called the intercept), $\beta_0$, is the same value for every observation, as if we had a variable $X_0$ that always equals 1 for each observation $i$. Now, with the 4 season dummies, we have a perfectly collinear relationship between the constant and the sum of the dummies:

$$1 = X_{0i} = Spring_i + Summer_i + Fall_i + Winter_i$$

In any case, we need to drop one of these variables (either $X_0$ or one of the dummies). If we do this, then we break the collinearity. For example, if we drop $Summer_i$, then $Spring_i + Fall_i + Winter_i \neq 1$ for every observation, since for observations in Summer this sum is 0.
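A quick way to see the trap in R (a minimal sketch with made-up season data; the data frame and variable names here are hypothetical): if we hand `lm()` all four dummies plus the constant, it drops one of them automatically (its coefficient is reported as `NA`) because of the perfect collinearity.

```r
# minimal sketch: 100 hypothetical observations assigned a random season
set.seed(1)
seasons <- sample(c("Spring", "Summer", "Fall", "Winter"), 100, replace = TRUE)
df <- data.frame(
  y      = rnorm(100),
  spring = as.numeric(seasons == "Spring"),
  summer = as.numeric(seasons == "Summer"),
  fall   = as.numeric(seasons == "Fall"),
  winter = as.numeric(seasons == "Winter")
)

# all four dummies plus the constant: R drops one (coefficient shows as NA)
summary(lm(y ~ spring + summer + fall + winter, data = df))

# the standard fix: omit one category yourself (here Summer is the reference category)
summary(lm(y ~ spring + fall + winter, data = df))
```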
In your own words, describe what an interaction term is used for, and give an example. You can use any type of interaction to explain your answer.
Suppose we have a model of:

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2$$

$\hat{\beta_1}$ tells us the marginal effect of $X_1$ on $Y$, holding $X_2$ constant. However, the effect of $X_1$ on $Y$ may be different for different values of $X_2$. We can explore this possibility with an interaction term, adding:

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1 X_2$$

Now the marginal effect on $Y$ of a change in $X_1$ is:

$$\frac{\Delta Y}{\Delta X_1} = \beta_1 + \beta_3 X_2$$

So the effect depends on what $X_2$ is. $\beta_3$, the coefficient on the interaction term, describes the increment to the effect of $X_1$ on $Y$ that depends on the value of $X_2$.
Note $X_1$ and $X_2$ could each be a continuous variable or a dummy variable, so we could have three possible combinations for an interaction effect:

- two continuous variables, e.g. *wage* and *experience*
- two dummy variables, e.g. *male* (=1 if male, =0 if female) and *married* (=1 if married, =0 if unmarried)
- a dummy variable and a continuous variable, e.g. *male* and *experience*
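In R, an interaction is specified in the regression formula with `*` or `:`. A minimal sketch, using the hypothetical variable names from the last example (a data frame `df` with `wage`, `experience`, and a `male` dummy):

```r
# x1 * x2 expands to x1 + x2 + x1:x2 (both main effects plus the interaction)
interaction_reg <- lm(wage ~ experience * male, data = df)
summary(interaction_reg)
```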
In your own words, describe when and why using logged variables can be useful.
Logged variables can help us out for theoretical reasons or data-fitting reasons.
Obviously, if a log function appears to fit nonlinear data better than a linear function (or even a polynomial), then using a log function is a good idea.
Sometimes we need functional forms that behave in a way that would be predicted by theory, such as diminishing returns, where the effect of something is always positive, but always growing smaller (such as the marginal product of labor or marginal utility). A log function always increases at a decreasing rate.
Additionally, logs allow us to talk about percentage changes in variables, rather than unit changes. If we have both $X$ and $Y$ logged, we can talk about the elasticity between $X$ and $Y$: a 1% change in $X$ leads to a $\hat{\beta_1}$% change in $Y$.
In your own words, describe when we would use an F-test, and give some example (null) hypotheses. Describe intuitively and specifically (no need for the formula) what exactly F is trying to test for.
F-tests are used to test joint hypotheses that hypothesize values for multiple parameters. This is as opposed to an ordinary t-test, which only hypothesizes a value for one parameter (e.g. $H_0: \beta_1 = 0$).

We can run F-tests for a few different scenarios, but the most common is checking whether multiple variables are jointly significant. The null hypothesis would be that several parameters are all equal to 0, e.g.: $H_0: \beta_1 = 0; \, \beta_2 = 0$
The formula that calculates the value of the F-statistic essentially measures if the $R^2$ improves by a statistically significant amount when we move from the restricted regression (under the null hypothesis) to the unrestricted regression (where the null hypothesis is false).

For example, for a regression:

$$\widehat{Y_i} = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \beta_3 X_{3i} + \beta_4 X_{4i}$$

the null hypothesis $H_0: \beta_1 = 0; \, \beta_2 = 0$

would produce a restricted regression of:

$$\widehat{Y_i} = \beta_0 + \beta_3 X_{3i} + \beta_4 X_{4i}$$

and compare the $R^2$ of this regression to the $R^2$ of the original (unrestricted) regression. If the $R^2$ improves enough, the F-statistic is large enough, and hence we can reject the null hypothesis and conclude that $X_{1i}$ and $X_{2i}$ are jointly significant (we should not omit them from our regression).
For the following questions, please show all work and explain answers as necessary. You may lose points if you only write the correct answer. You may use `R` to verify your answers, but you are expected to reach the answers in this section “manually.”
Suppose data on many countries’ legal systems (Common Law or Civil Law) and their GDP per capita gives us the following summary statistics:
Legal System | Avg. GDP Growth Rate | Std. dev. | n |
---|---|---|---|
Common Law | 1.84 | 3.55 | 19 |
Civil Law | 4.97 | 4.27 | 141 |
Difference | −3.13 | 1.02 | − |
Using the group means, write a regression equation for a regression of GDP Growth rate on Common Law. Define:

$$\text{Common Law}_i = \begin{cases} 1 & \text{if country } i \text{ has common law}\\ 0 & \text{if country } i \text{ has civil law}\end{cases}$$

$$\widehat{\text{GDP Growth Rate}_i} = 4.97 - 3.13 \, \text{Common Law}_i$$
How do we use the regression to find the average GDP Growth rate for common law countries? For civil law countries? For the difference?
This helps you construct the regression in part (a). The trick here is to recognize what each coefficient describes:

- $\hat{\beta_0} = 4.97$: the average GDP Growth Rate for civil law countries (where $\text{Common Law}_i = 0$)
- $\hat{\beta_0} + \hat{\beta_1} = 4.97 - 3.13 = 1.84$: the average GDP Growth Rate for common law countries (where $\text{Common Law}_i = 1$)
- $\hat{\beta_1} = -3.13$: the difference in average GDP Growth Rates between common law and civil law countries
Looking at the coefficients, does there appear to be a statistically significant difference in average GDP Growth Rates between Civil and Common law countries?
For this, we want to look at the coefficient and standard error for $\beta_1$, which measures the difference in GDP Growth Rate between Common and Civil law countries. The standard error of $\hat{\beta_1}$ is given by the standard error of the difference in means (1.02). We run a t-test:

$$t = \frac{-3.13}{1.02} \approx -3.07$$

This is a fairly large t-statistic (in absolute value), and so the difference is likely to be statistically significant.
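We could also verify this in R (a small sketch of the arithmetic; the degrees of freedom used for the p-value, $n - k - 1 = 160 - 1 - 1 = 158$, are an assumption based on the 19 + 141 = 160 observations):

```r
# t-statistic for the difference in means
t_stat <- -3.13 / 1.02
t_stat

# two-sided p-value (158 degrees of freedom is an assumption, see above)
2 * pt(abs(t_stat), df = 158, lower.tail = FALSE)
```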
Is the estimate on the difference likely to be unbiased? Why or why not?
Whether a country is common law or civil law is likely to be endogenous. There are a host of other variables that are correlated both with a country’s legal system and its GDP growth rate – who it was colonized by, what its history is, geography, etc. This makes the estimate on the difference likely to be biased (omitted variable bias).
Now using the same table above, reconstruct the regression equation if instead of Common Law, we had used:
$$\text{Civil Law}_i = \begin{cases} 1 & \text{if country } i \text{ has civil law}\\ 0 & \text{if country } i \text{ has common law}\end{cases}$$

$$\widehat{\text{GDP Growth Rate}_i} = 1.84 + 3.13 \, \text{Civil Law}_i$$
Suppose a real estate agent collects data on houses that have sold in a particular neighborhood over the past year, with the following variables:
Variable | Description |
---|---|
$Price_h$ | price of house $h$ (in thousands of dollars) |
$Bedrooms_h$ | number of bedrooms in house $h$ |
$Baths_h$ | number of bathrooms in house $h$ |
$Pool_h$ | =1 if house $h$ has a pool, =0 if it does not |
$View_h$ | =1 if house $h$ has a nice view, =0 if it does not |
Suppose she runs the following regression:

$$\widehat{Price_h} = 119.20 + 29.76 \, Bedrooms_h + 24.09 \, View_h + 14.06 \, (Bedrooms_h \times View_h)$$
What does each coefficient mean?

- $\hat{\beta_0} = 119.20$: the predicted price (in thousands) of a house with no bedrooms and no nice view
- $\hat{\beta_1} = 29.76$: for houses without a nice view, each additional bedroom adds 29.76 (thousand dollars) to the predicted price
- $\hat{\beta_2} = 24.09$: the price premium for a nice view, for a house with no bedrooms
- $\hat{\beta_3} = 14.06$: the additional amount each bedroom adds to the predicted price when the house has a nice view
**Write out two separate regression equations, one for houses *with* a nice view, and one for homes *without* a nice view. Explain each coefficient in each regression.**
For houses without a nice view ($View_h = 0$):

$$\begin{aligned} \widehat{Price_h} &= 119.20 + 29.76 \, Bedrooms_h + 24.09(0) + 14.06 \, Bedrooms_h(0)\\ &= 119.20 + 29.76 \, Bedrooms_h \end{aligned}$$
Houses without a nice view sell for $119.20 thousand for no bedrooms, and gain $29.76 thousand for every additional bedroom.
For houses with a nice view ($View_h = 1$):

$$\begin{aligned} \widehat{Price_h} &= 119.20 + 29.76 \, Bedrooms_h + 24.09(1) + 14.06 \, Bedrooms_h(1)\\ &= (119.20 + 24.09) + (29.76 + 14.06) \, Bedrooms_h\\ &= 143.29 + 43.82 \, Bedrooms_h \end{aligned}$$
Houses with a nice view sell for $143.29 thousand for no bedrooms, and gain $43.82 thousand for every additional bedroom.
Suppose she runs the following regression:
$$\widehat{Price_h} = 189.20 + 42.40 \, Pool_h + 12.10 \, View_h + 12.09 \, (Pool_h \times View_h)$$
What does each coefficient mean?
Find the expected price for:
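The specific cases asked for are not listed above; assuming they are the four pool/view combinations, a quick sketch of the expected prices implied by the coefficients:

```r
# expected price (thousands of $) from
# Price = 189.20 + 42.40*Pool + 12.10*View + 12.09*(Pool*View)
expected_price <- function(pool, view) {
  189.20 + 42.40 * pool + 12.10 * view + 12.09 * (pool * view)
}

expected_price(pool = 0, view = 0) # no pool, no view: 189.20
expected_price(pool = 1, view = 0) # pool, no view:    231.60
expected_price(pool = 0, view = 1) # view, no pool:    201.30
expected_price(pool = 1, view = 1) # pool and view:    255.79
```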
Suppose she runs the following regression:

$$\widehat{Price_h} = 87.90 + 53.94 \, Bedrooms_h + 15.29 \, Baths_h + 16.19 \, (Bedrooms_h \times Baths_h)$$
What is the marginal effect of adding an additional bedroom if the house has 1 bathroom? 2 bathrooms? 3 bathrooms?
What is the marginal effect of adding an additional bathroom if the house has 1 bedroom? 2 bedrooms? 3 bedrooms?
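Written answers for these two parts are not shown above; from the coefficients, the marginal effect of a bedroom is $53.94 + 16.19 \, Baths_h$ and the marginal effect of a bathroom is $15.29 + 16.19 \, Bedrooms_h$. A quick sketch of the arithmetic:

```r
# marginal effect of an additional bedroom, for 1, 2, and 3 bathrooms
53.94 + 16.19 * c(1, 2, 3) # 70.13, 86.32, 102.51 (thousands of $)

# marginal effect of an additional bathroom, for 1, 2, and 3 bedrooms
15.29 + 16.19 * c(1, 2, 3) # 31.48, 47.67, 63.86 (thousands of $)
```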
Suppose we want to examine the change in average global temperature over time. We have data on the deviation in temperature from pre-industrial times (in Celsius), and the year.
Suppose we estimate the following simple model relating deviation in temperature to year:
$$\widehat{Temperature_i} = -10.46 + 0.006 \, Year_i$$
Interpret the coefficient on Year (i.e. $\hat{\beta_1}$)
Every (additional) year, temperature increases by 0.006 degrees.
Predict the (deviation in) temperature for the year 1900 and for the year 2000.
In 1900: $\widehat{Temperature}_{1900} = -10.46 + 0.006(1900) = -10.46 + 11.4 = 0.94$ degrees

In 2000: $\widehat{Temperature}_{2000} = -10.46 + 0.006(2000) = -10.46 + 12 = 1.54$ degrees
Suppose we believe temperature deviations are increasing at an increasing rate, and introduce a quadratic term and estimate the following regression model:
$$\widehat{Temperature_i} = 155.68 - 0.116 \, Year_i + 0.000044 \, Year_i^2$$
What is the marginal effect on (deviation in) global temperature of one additional year elapsing?
The marginal effect is measured by the first derivative of the regression equation with respect to Year. But you can just remember the resulting formula and plug in the parameters:
$$\frac{dY}{dX} = \beta_1 + 2\beta_2 X$$

$$\frac{d \, Temperature}{d \, Year} = -0.116 + 2(0.000044) \, Year = -0.116 + 0.000088 \, Year$$
Predict the marginal effect on temperature of one more year elapsing starting in 1900, and in 2000.
For 1900:
$$\frac{d \, Temperature}{d \, Year} = -0.116 + 0.000088(1900) = -0.116 + 0.1672 = 0.0512 \text{ degrees}$$
For 2000:
$$\frac{d \, Temperature}{d \, Year} = -0.116 + 0.000088(2000) = -0.116 + 0.176 = 0.06 \text{ degrees}$$
Our quadratic function is a U-shape. According to the model, at what year was temperature (deviation) at its minimum?
We can set the derivative equal to 0, or you can just remember the formula and plug in the parameters:
$$\begin{aligned} \frac{dY}{dX} &= \beta_1 + 2\beta_2 X\\ 0 &= \beta_1 + 2\beta_2 X^*\\ -\beta_1 &= 2\beta_2 X^*\\ -\frac{1}{2} \times \frac{\beta_1}{\beta_2} &= X^*\\ -\frac{1}{2} \times \frac{(-0.116)}{(0.000044)} &= Year^*\\ -\frac{1}{2} \times (-2636) &\approx Year^*\\ 1318 &\approx Year^* \end{aligned}$$

So, according to the model, the temperature deviation was at its minimum around the year 1318.
Suppose we want to examine the effect of cell phone use while driving on traffic fatalities. While we cannot measure the amount of cell phone activity while driving, we do have a good proxy variable, the number of cell phone subscriptions (in 1000s) in a state, along with traffic fatalities in that state.
Suppose we estimate the following simple regression:
$$\widehat{fatalities_i} = 123.98 + 0.091 \, \text{cell plans}_i$$

Interpret the coefficient on cell plans (i.e. $\hat{\beta_1}$)
For every additional (thousand) cell phone plans, traffic fatalities increase by 0.091.
Now suppose we estimate the regression using a linear-log model:
$$\widehat{fatalities_i} = -3557.08 + 515.81 \, \ln(\text{cell plans}_i)$$

Interpret the coefficient on ln(cell plans) (i.e. $\hat{\beta_1}$)

A 1% increase in cell phone plans will increase traffic fatalities by $\frac{515.81}{100} = 5.1581$.
Now suppose we estimate the regression using a log-linear model:
$$\widehat{\ln(fatalities_i)} = 5.43 + 0.0001 \, \text{cell plans}_i$$

Interpret the coefficient on cell plans (i.e. $\hat{\beta_1}$)

For every additional (thousand) cell phone plans, traffic fatalities will increase by $0.0001 \times 100\% = 0.01\%$.
Now suppose we estimate the regression using a log-log model:
$$\widehat{\ln(fatalities_i)} = -0.89 + 0.85 \, \ln(\text{cell plans}_i)$$

Interpret the coefficient on cell plans (i.e. $\hat{\beta_1}$)
This is an elasticity. A 1% increase in cell phone plans will cause a 0.85% increase in traffic fatalities.
Suppose we include several other variables into our regression and want to determine which variable(s) have the largest effects, a State’s cell plans, population, or amount of miles driven. Suppose we decide to standardize the data to compare units, and we get:
$$\widehat{fatalities_i} = 4.35 + 0.002 \, \text{cell plans}^{std}_i - 0.00007 \, \text{population}^{std}_i + 0.019 \, \text{miles driven}^{std}_i$$
Interpret the coefficients on cell plans, population, and miles driven. Which has the largest effect on fatalities?
Each coefficient tells us the predicted change in fatalities from a one standard deviation increase in that variable: 0.002 for cell plans, −0.00007 for population, and 0.019 for miles driven. Miles driven has by far the largest effect on fatalities.
Suppose we wanted to make the claim that it is only miles driven, and neither population nor cell phones determine traffic fatalities. Write (i) the null hypothesis for this claim and (ii) the estimated restricted regression equation.
$$H_0: \beta_1 = \beta_2 = 0$$

Restricted Regression: $$\widehat{fatalities_i} = 4.35 + 0.019 \, \text{miles driven}^{std}_i$$
Suppose the $R^2$ on the original regression from (e) was 0.9221, and the $R^2$ from the restricted regression is 0.9062. With 50 observations, calculate the F-statistic.

Note $q = 2$ as we are hypothesizing values for 2 parameters, and $k = 3$ as we have 3 variables in our unrestricted regression (cell plans, population, and miles driven).

$$F_{q, \, n-k-1} = \frac{\left( \dfrac{R^2_u - R^2_r}{q} \right)}{\left( \dfrac{1 - R^2_u}{n - k - 1} \right)}$$

$$F_{2, \, 46} = \frac{\left( \dfrac{0.9221 - 0.9062}{2} \right)}{\left( \dfrac{1 - 0.9221}{50 - 3 - 1} \right)} = \frac{\dfrac{0.0159}{2}}{\dfrac{0.0779}{46}} = \frac{0.00795}{0.00169} \approx 4.70$$

The number here does not mean anything literally. The purpose of this exercise is to get you to try to understand exactly how F is calculated and how to think about it. Essentially, we are seeing if the $R^2$ on the unrestricted model has statistically significantly increased from the $R^2$ on the restricted model. The larger the F-statistic is, the larger the increase. We can also check statistical significance by calculating the associated p-value.
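We could also compute the F-statistic itself in R from the two $R^2$ values (a small sketch of the arithmetic above):

```r
# F-statistic from the restricted and unrestricted R-squared values
r2_u <- 0.9221  # unrestricted R^2
r2_r <- 0.9062  # restricted R^2
q    <- 2       # number of restrictions
n    <- 50      # observations
k    <- 3       # regressors in the unrestricted model

F_stat <- ((r2_u - r2_r) / q) / ((1 - r2_u) / (n - k - 1))
F_stat # about 4.70
```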
# FYI: We could estimate the p-value in R if we wanted
# - similar to calculating p-value for a t-statistic
# - syntax is:
# pf(F.value, df1, df2, lower.tail = FALSE)
# - we plug in our calculated F-statistic (above) for F.value
# - df1 and df2 are degrees of freedom from numerator and denominator
# - lower.tail=FALSE implies we are looking at probability to the RIGHT
# of F-value on F-distribution, p(f>F) that f is higher than our F
pf(4.70, 2,46, lower.tail=FALSE)
## [1] 0.01389011
Answer the following questions using `R`. When necessary, please write answers in the same document (knitted `Rmd` to `html` or `pdf`, typed `.doc(x)`, or handwritten) as your answers to the above questions. Be sure to include (email or print an `.R` file, or show in your knitted `markdown`) your code and the outputs of your code with the rest of your answers.
Lead is toxic, particularly for young children, and for this reason government regulations severely restrict the amount of lead in our environment. In the early part of the 20th century, the underground water pipes in many U.S. cities contained lead, and lead from these pipes leached into drinking water. This exercise will have you investigate the effect of these lead pipes on infant mortality. This dataset contains data on:
Variable | Description |
---|---|
`infrate` | infant mortality rate (deaths per 100 in population) |
`lead` | =1 if city has lead water pipes, =0 if it did not have lead pipes |
`ph` | water pH |
and several demographic variables for 172 U.S. cities in 1900.
Part A
Using `R` to examine the data, find the average infant mortality rate for cities with lead pipes and for cities without lead pipes. Calculate the difference, and run a t-test to determine if this difference is statistically significant.
## ── Attaching packages ──────────────────────────────── tidyverse 1.2.1 ──
## ✔ ggplot2 3.2.0 ✔ purrr 0.3.3
## ✔ tibble 2.1.3 ✔ dplyr 0.8.3
## ✔ tidyr 1.0.0 ✔ stringr 1.4.0
## ✔ readr 1.3.1 ✔ forcats 0.4.0
## ── Conflicts ─────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## Warning: Missing column names filled in: 'X1' [1]
## Parsed with column specification:
## cols(
## X1 = col_double(),
## year = col_double(),
## city = col_character(),
## state = col_character(),
## age = col_double(),
## hardness = col_double(),
## ph = col_double(),
## infrate = col_double(),
## typhoid_rate = col_double(),
## np_tub_rate = col_double(),
## mom_rate = col_double(),
## population = col_double(),
## precipitation = col_double(),
## temperature = col_double(),
## lead = col_double(),
## foreign_share = col_double()
## )
## [1] 0.02208973
Cities with lead pipes have an infant mortality rate of 0.40, and cities without lead pipes have an infant mortality rate of 0.38. So the difference is 0.02.
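The code chunks themselves are not shown above; a minimal sketch that would produce these numbers (the data frame name `lead` is taken from the regression calls below):

```r
# group means of the infant mortality rate by lead (0/1)
lead %>%
  group_by(lead) %>%
  summarize(mean_infrate = mean(infrate))

# difference in means (matches the 0.0221 printed above)
0.4032576 - 0.3811679

# t-test for the difference in group means
t.test(infrate ~ lead, data = lead)
```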
##
## Welch Two Sample t-test
##
## data: infrate by lead
## t = -0.90387, df = 109.29, p-value = 0.3681
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.07052551 0.02634606
## sample estimates:
## mean in group 0 mean in group 1
## 0.3811679 0.4032576
We get a t-statistic of −0.90 and a p-value of 0.3681, so the difference is not statistically significant.
Run a regression of `infrate` on `lead`, and write down the estimated regression equation. Use the regression coefficients to find:
##
## Call:
## lm(formula = infrate ~ lead, data = lead)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.27141 -0.10643 -0.01238 0.07528 0.44121
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.38117 0.02042 18.669 <2e-16 ***
## lead 0.02209 0.02475 0.892 0.373
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.1514 on 170 degrees of freedom
## Multiple R-squared: 0.004662, Adjusted R-squared: -0.001193
## F-statistic: 0.7963 on 1 and 170 DF, p-value: 0.3735
$$\widehat{Infrate_i} = 0.38 + 0.02 \, Lead_i$$
Does the pH of the water matter? Include `ph` in your regression from part B. Write down the estimated regression equation, and interpret each coefficient (note there is no interaction effect here). What happens to the estimate on `lead`?
##
## Call:
## lm(formula = infrate ~ lead + ph, data = lead)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.27074 -0.09140 -0.01517 0.07909 0.34222
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1.17817 0.10676 11.035 < 2e-16 ***
## lead 0.04993 0.02177 2.294 0.023 *
## ph -0.11143 0.01472 -7.570 2.31e-12 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.1312 on 169 degrees of freedom
## Multiple R-squared: 0.2567, Adjusted R-squared: 0.2479
## F-statistic: 29.18 on 2 and 169 DF, p-value: 1.299e-11
$$\widehat{Infrate_i} = 1.18 + 0.05 \, Lead_i - 0.11 \, pH_i$$

Holding pH constant, cities with lead pipes have an infant mortality rate 0.05 higher than cities without lead pipes; holding the type of pipes constant, each additional unit of pH is associated with an infant mortality rate 0.11 lower.

The estimate on `lead` doubled (from 0.02 to 0.05) and became significant at the 5% level.
The amount of lead leached from lead pipes normally depends on the chemistry of the water running through the pipes: the more acidic the water (lower pH), the more lead is leached. Create an interaction term between lead and pH, and run a regression of `infrate` on `lead`, `ph`, and your interaction term. Write down the estimated regression equation. Is this interaction significant?
##
## Call:
## lm(formula = infrate ~ lead + ph + lead:ph, data = lead)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.27492 -0.09502 -0.00266 0.07965 0.35139
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.91890 0.17447 5.267 4.2e-07 ***
## lead 0.46180 0.22122 2.087 0.03835 *
## ph -0.07518 0.02427 -3.098 0.00229 **
## lead:ph -0.05686 0.03040 -1.871 0.06312 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.1303 on 168 degrees of freedom
## Multiple R-squared: 0.2719, Adjusted R-squared: 0.2589
## F-statistic: 20.91 on 3 and 168 DF, p-value: 1.467e-11
$$\widehat{Infrate_i} = 0.92 + 0.46 \, Lead_i - 0.08 \, pH_i - 0.06 \, (Lead_i \times pH_i)$$
We see that this interaction is just barely insignificant at the 5% level, with a p-value of 0.06 (it would be significant at the 10% level).
What we actually have are two different regression lines. Visualize this with a scatterplot between `infrate` (Y) and `ph` (X) by `lead`.
lead_scatter <- ggplot(data = lead)+
  aes(x = ph,
      y = infrate,
      color = as.factor(lead))+ # making lead a factor makes color discrete rather than continuous!
  geom_point()+
  geom_smooth(method = "lm")+ # color in the global aes gives a separate fitted line for each group
  # now I'm just making it pretty
  # changing color
  scale_color_viridis_d("Pipes",
                        labels = c("0" = "Not Lead", "1" = "Lead"))+ # changing labels for colors
  labs(x = "pH",
       y = "Infant Mortality Rate")+
  theme_classic(base_family = "Fira Sans Condensed",
                base_size = 20)
lead_scatter
Do the two regression lines have the same intercept? The same slope? Use the original regression in part D to test these possibilities.
$\beta_1$ (on `lead`) measures the difference in intercept between the lead & no-lead regression lines. So we would want to test:

$$H_0: \beta_1 = 0$$
$$H_a: \beta_1 \neq 0$$

The `R` output tells us $\hat{\beta_1}$ is 0.46 with a standard error of 0.22, so the t-statistic for this test is 2.09 with a p-value of 0.04, so there is a statistically significant difference at the 5% level.
$\beta_3$ (on the interaction term) measures the difference in slope between the lead & no-lead regression lines. So we would want to test:

$$H_0: \beta_3 = 0$$
$$H_a: \beta_3 \neq 0$$

The output tells us $\hat{\beta_3}$ is −0.06 with a standard error of 0.03, so the t-statistic for this test is −1.87 with a p-value of 0.06, so there is not a statistically significant difference at the 5% level.

Therefore, statistically, the two lines have different intercepts but the same slope.
Take your regression equation from part D and rewrite it as two separate regression equations (one for no lead and one for lead). Interpret the coefficients for each.
For no lead ($Lead = 0$):

$$\begin{aligned} \widehat{Infrate} &= 0.92 + 0.46 \, Lead - 0.08 \, pH - 0.06 \, (Lead \times pH)\\ &= 0.92 + 0.46(0) - 0.08 \, pH - 0.06 \, ((0) \times pH)\\ &= 0.92 - 0.08 \, pH \end{aligned}$$
For lead ($Lead = 1$):

$$\begin{aligned} \widehat{Infrate} &= 0.92 + 0.46 \, Lead - 0.08 \, pH - 0.06 \, (Lead \times pH)\\ &= 0.92 + 0.46(1) - 0.08 \, pH - 0.06 \, ((1) \times pH)\\ &= (0.92 + 0.46) + (-0.08 - 0.06) \, pH\\ &= 1.38 - 0.14 \, pH \end{aligned}$$

At a (hypothetical) water pH of 0, cities without lead pipes have an infant mortality rate of 0.92 ($\hat{\beta_0}$ from reg 1) vs. 1.38 for cities with lead pipes ($\hat{\beta_0}$ from reg 2). For every additional unit of pH, the infant mortality rate of a city without lead pipes falls by 0.08 ($\hat{\beta_1}$ from reg 1), vs. a fall of 0.14 for a city with lead pipes.

So again, we can see that the difference in infant mortality rates between cities with and without lead pipes at a water pH of 0 is 1.38 − 0.92 = 0.46 ($\hat{\beta_1}$, the coefficient on Lead from the regression in part D), and that the infant mortality rate of cities with lead pipes falls by 0.14 − 0.08 = 0.06 more with each unit of pH than that of cities without lead ($\hat{\beta_3}$, the coefficient on the interaction from the regression in part D).
Double check your calculations in G are correct by running the regression in D twice, once for cities without lead pipes and once for cities with lead pipes.1
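The code chunk is not shown; a sketch consistent with the `data = .` in the Call lines below (filter the data, then pipe it into `lm()`):

```r
# regression for cities without lead pipes only
lead %>%
  filter(lead == 0) %>%
  lm(infrate ~ lead + ph + lead:ph, data = .) %>%
  summary()

# regression for cities with lead pipes only
lead %>%
  filter(lead == 1) %>%
  lm(infrate ~ lead + ph + lead:ph, data = .) %>%
  summary()
```

The `NA` rows for `lead` and `lead:ph` in the output appear because `lead` does not vary within each subsample, so those coefficients cannot be estimated.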
##
## Call:
## lm(formula = infrate ~ lead + ph + lead:ph, data = .)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.23779 -0.09733 -0.01506 0.07053 0.35139
##
## Coefficients: (2 not defined because of singularities)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.91890 0.18543 4.955 7.77e-06 ***
## lead NA NA NA NA
## ph -0.07518 0.02579 -2.915 0.00521 **
## lead:ph NA NA NA NA
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.1385 on 53 degrees of freedom
## Multiple R-squared: 0.1381, Adjusted R-squared: 0.1219
## F-statistic: 8.495 on 1 and 53 DF, p-value: 0.005206
##
## Call:
## lm(formula = infrate ~ lead + ph + lead:ph, data = .)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.27492 -0.08881 0.00664 0.08059 0.31646
##
## Coefficients: (2 not defined because of singularities)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1.38070 0.13189 10.47 < 2e-16 ***
## lead NA NA NA NA
## ph -0.13204 0.01775 -7.44 1.97e-11 ***
## lead:ph NA NA NA NA
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.1263 on 115 degrees of freedom
## Multiple R-squared: 0.325, Adjusted R-squared: 0.3191
## F-statistic: 55.36 on 1 and 115 DF, p-value: 1.967e-11
Use `huxtable` to make a nice output table of all of your regressions from parts B, C, and D.
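The code chunk is not shown; a sketch using `huxreg()` in the style of the table code later in this document (the object names `lead_reg1`, `lead_reg2`, and `lead_reg3` for the regressions from parts B, C, and D are assumptions):

```r
library(huxtable)

huxreg(lead_reg1, lead_reg2, lead_reg3,
       coefs = c("Constant" = "(Intercept)",
                 "Lead Pipes" = "lead",
                 "pH" = "ph",
                 "Lead * pH" = "lead:ph"),
       statistics = c("N" = "nobs",
                      "R-Squared" = "r.squared",
                      "SER" = "sigma"),
       number_format = 2)
```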
##
## Attaching package: 'huxtable'
## The following object is masked from 'package:dplyr':
##
## add_rownames
## The following object is masked from 'package:purrr':
##
## every
## The following object is masked from 'package:ggplot2':
##
## theme_grey
| | (1) | (2) | (3) |
|---|---|---|---|
| Constant | 0.38 *** | 1.18 *** | 0.92 *** |
| | (0.02) | (0.11) | (0.17) |
| Lead Pipes | 0.02 | 0.05 * | 0.46 * |
| | (0.02) | (0.02) | (0.22) |
| pH | | -0.11 *** | -0.08 ** |
| | | (0.01) | (0.02) |
| Lead * pH | | | -0.06 |
| | | | (0.03) |
| N | 172 | 172 | 172 |
| R-Squared | 0.00 | 0.26 | 0.27 |
| SER | 0.15 | 0.13 | 0.13 |

*** p < 0.001; ** p < 0.01; * p < 0.05.
Let’s look at economic freedom and GDP per capita using some data I sourced from Gapminder2, Freedom House3 and Fraser Institute Data4 and cleaned up for you, with the following variables:
Variable | Description |
---|---|
`Country` | Name of country |
`ISO` | Code of country (good for plotting) |
`econ_freedom` | Economic Freedom Index score (2016) from 1 (least) to 10 (most free) |
`pol_freedom` | Political freedom index score (2018) from 1 (least) to 10 (most free) |
`gdp_pc` | GDP per capita (2018 USD) |
`continent` | Continent of country |
Does economic freedom affect GDP per capita? Create a scatterplot of `gdp_pc` (Y) against `econ_freedom` (X). Does the effect appear to be linear or nonlinear?
## Parsed with column specification:
## cols(
## Country = col_character(),
## ISO = col_character(),
## econ_freedom = col_double(),
## gdp_pc = col_double(),
## continent = col_character(),
## pol_freedom = col_double()
## )
# scatterplot
freedom_plot<-ggplot(data = freedom)+
aes(x = econ_freedom,
y = gdp_pc)+
geom_point(aes(color = continent))+
scale_y_continuous(labels=scales::dollar)+
labs(x = "Economic Freedom Score (0-10)",
y = "GDP per Capita")+
theme_classic(base_family = "Fira Sans Condensed",
base_size=20)
freedom_plot
The effect appears to be nonlinear.
Run a simple regression of `gdp_pc` on `econ_freedom`. Write out the estimated regression equation. What is the marginal effect of `econ_freedom` on `gdp_pc`?
##
## Call:
## lm(formula = gdp_pc ~ econ_freedom, data = freedom)
##
## Residuals:
## Min 1Q Median 3Q Max
## -24612 -10511 -3707 9727 65562
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -86400 13362 -6.466 2.85e-09 ***
## econ_freedom 14704 1935 7.599 1.06e-11 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 15880 on 110 degrees of freedom
## Multiple R-squared: 0.3442, Adjusted R-squared: 0.3383
## F-statistic: 57.74 on 1 and 110 DF, p-value: 1.063e-11
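The written answer is not shown above; reading it off the output (rounded to whole dollars):

$$\widehat{gdp\_pc_i} = -86400 + 14704 \, econ\_freedom_i$$

so each additional point on the economic freedom index is associated with about $14,704 more GDP per capita.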
Let’s try a quadratic model. Run a quadratic regression of `gdp_pc` on `econ_freedom`. Write out the estimated regression equation.
##
## Call:
## lm(formula = gdp_pc ~ econ_freedom + I(econ_freedom^2), data = freedom)
##
## Residuals:
## Min 1Q Median 3Q Max
## -30174 -8366 -647 2980 65158
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 280416 79511 3.527 0.000617 ***
## econ_freedom -96618 23908 -4.041 9.93e-05 ***
## I(econ_freedom^2) 8327 1783 4.669 8.67e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 14560 on 109 degrees of freedom
## Multiple R-squared: 0.4535, Adjusted R-squared: 0.4435
## F-statistic: 45.23 on 2 and 109 DF, p-value: 4.986e-15
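Again, the written equation is not shown; from the output above (rounded):

$$\widehat{gdp\_pc_i} = 280416 - 96618 \, econ\_freedom_i + 8327 \, econ\_freedom_i^2$$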
Add the quadratic regression to your scatterplot.
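The plotting chunk is not shown; one way to layer a quadratic fit onto the earlier scatterplot (a sketch reusing the `freedom_plot` object created in part A):

```r
# add a quadratic fit on top of the existing scatterplot
freedom_plot +
  geom_smooth(method = "lm", formula = y ~ x + I(x^2), color = "red")
```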
What is the marginal effect of `econ_freedom` on `gdp_pc`?
The marginal effect can be found by taking the derivative of the regression with respect to `econ_freedom` (or recalling the rule):

$$\frac{dY}{dX} = \beta_1 + 2\beta_2 X$$

$$\frac{d \, gdp\_pc}{d \, econ\_freedom} = -96618 + 2(8327) \, econ\_freedom = -96618 + 16654 \, econ\_freedom$$
# if you want to calculate it in R
library(broom)
freedom_reg2_tidy<-tidy(freedom_reg2)
freedom_beta_1<-freedom_reg2_tidy %>%
filter(term == "econ_freedom") %>%
pull(estimate)
freedom_beta_2<-freedom_reg2_tidy %>%
filter(term == "I(econ_freedom^2)") %>%
pull(estimate)
# marginal effect at a given score: freedom_beta_1 + 2 * freedom_beta_2 * score
# (plug in an econ_freedom value for "score")
As a quadratic model, this relationship should predict an `econ_freedom` score where `gdp_pc` is at a minimum. What is that minimum Economic Freedom score, and what is the associated GDP per capita?
We can set the derivative equal to 0, or you can just remember the formula and plug in the parameters:
$$\begin{aligned} \frac{dY}{dX} &= \beta_1 + 2\beta_2 X\\ 0 &= \beta_1 + 2\beta_2 X^*\\ -\beta_1 &= 2\beta_2 X^*\\ -\frac{1}{2} \times \frac{\beta_1}{\beta_2} &= econ\_freedom^*\\ -\frac{1}{2} \times \frac{(-96618)}{(8327)} &= econ\_freedom^*\\ -\frac{1}{2} \times (-11.603) &\approx econ\_freedom^*\\ 5.801 &\approx econ\_freedom^* \end{aligned}$$
## [1] 5.801793
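The second half of the question (the GDP per capita associated with that score) is not answered above; a sketch using the objects defined in the code earlier (`freedom_reg2`, `freedom_beta_1`, `freedom_beta_2`):

```r
# econ_freedom score at the minimum of the parabola
min_score <- -freedom_beta_1 / (2 * freedom_beta_2)
min_score

# predicted GDP per capita at that score
predict(freedom_reg2, newdata = data.frame(econ_freedom = min_score))
```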
Run a cubic model to see if we should keep going up in polynomials. Write out the estimated regression equation. Should we add a cubic term?
##
## Call:
## lm(formula = gdp_pc ~ econ_freedom + I(econ_freedom^2) + I(econ_freedom^3),
## data = freedom)
##
## Residuals:
## Min 1Q Median 3Q Max
## -30290 -8451 -493 3356 64636
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 531003.4 470172.4 1.129 0.261
## econ_freedom -211572.5 213908.4 -0.989 0.325
## I(econ_freedom^2) 25674.4 32127.4 0.799 0.426
## I(econ_freedom^3) -862.2 1594.2 -0.541 0.590
##
## Residual standard error: 14610 on 108 degrees of freedom
## Multiple R-squared: 0.455, Adjusted R-squared: 0.4399
## F-statistic: 30.05 on 3 and 108 DF, p-value: 3.318e-14
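The written equation for the cubic model is not shown; from the output above (rounded):

$$\widehat{gdp\_pc_i} = 531003 - 211572 \, econ\_freedom_i + 25674 \, econ\_freedom_i^2 - 862 \, econ\_freedom_i^3$$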
There’s no good theoretical reason why we should expect economic freedom to “change direction” twice - go down, then up, then down again - in its effect on GDP.
Statistically, we can see that $\hat{\beta_3}$ on `I(econ_freedom^3)` is not significant (p-value is 0.590), so we should not include the cubic term.
Another way we can test for non-linearity is to run an F-test on all non-linear variables - i.e. the quadratic term and the cubic term ($\beta_2$ and $\beta_3$), and test against the null hypothesis that:

$$H_0: \beta_2 = \beta_3 = 0$$

Run this joint hypothesis test. What can you conclude?
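The code chunk is not shown; a sketch using `car::linearHypothesis()` (the object name `freedom_reg_cubic` for the cubic regression from part F is an assumption):

```r
library(car)

# jointly test that the quadratic and cubic coefficients are both zero
linearHypothesis(freedom_reg_cubic,
                 c("I(econ_freedom^2) = 0", "I(econ_freedom^3) = 0"))
```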
## Loading required package: carData
##
## Attaching package: 'car'
## The following object is masked from 'package:dplyr':
##
## recode
## The following object is masked from 'package:purrr':
##
## some
| Res.Df | RSS | Df | Sum of Sq | F | Pr(>F) |
|---|---|---|---|---|---|
| 110 | 2.77e+10 | | | | |
| 108 | 2.31e+10 | 2 | 4.69e+09 | 11 | 4.58e-05 |
The null hypothesis is that the polynomial terms (quadratic and cubic) jointly do not matter (and the relationship is therefore linear). We have sufficient evidence to reject that hypothesis (the p-value is very small). Thus, the relationship is in fact not linear.
Instead of a polynomial model, try out a logarithmic model. It is hard to interpret percent changes on an index, but it is easy to understand percent changes in GDP per capita, so run a log-linear regression. Write out the estimated regression equation. What is the marginal effect of `econ_freedom`?
##
## Call:
## lm(formula = log(gdp_pc) ~ econ_freedom, data = freedom)
##
## Residuals:
## Min 1Q Median 3Q Max
## -3.1046 -0.9507 -0.0533 0.9021 3.3551
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -0.2953 1.0323 -0.286 0.775
## econ_freedom 1.2889 0.1495 8.621 5.53e-14 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.227 on 110 degrees of freedom
## Multiple R-squared: 0.4032, Adjusted R-squared: 0.3978
## F-statistic: 74.32 on 1 and 110 DF, p-value: 5.525e-14
From the output, the estimated regression equation is:

$$\widehat{\ln(gdp\_pc_i)} = -0.30 + 1.29 \, econ\_freedom_i$$

For every 1 point increase on the economic freedom index, a country’s GDP per capita increases by about $1.2889 \times 100\% \approx 128.9\%$.
Make a scatterplot of your log-linear model with a regression line.
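The plotting chunk is not shown; a sketch in the style of the earlier scatterplot code:

```r
# scatterplot of log GDP per capita against economic freedom, with a linear fit
ggplot(data = freedom)+
  aes(x = econ_freedom,
      y = log(gdp_pc))+
  geom_point(aes(color = continent))+
  geom_smooth(method = "lm")+
  labs(x = "Economic Freedom Score (0-10)",
       y = "Log GDP per Capita")+
  theme_classic(base_family = "Fira Sans Condensed",
                base_size = 20)
```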
Put all of your results together in a regression output table with `huxtable` from your answers in questions B, C, G, and H.
huxreg("GDP per Capita" = freedom_reg1,
"GDP per Capita" = freedom_reg2,
"Log(GDP per Capita)" = freedom_reg3,
coefs = c("Constant" = "(Intercept)",
"Economic Freedom Score (0-10)" = "econ_freedom",
"Economic Freedom Squared" = "I(econ_freedom^2)",
"Economic Freedom Cubed" = "I(econ_freedom^3)"),
statistics = c("N" = "nobs",
"R-Squared" = "r.squared",
"SER" = "sigma"),
number_format = 2)
| | GDP per Capita | GDP per Capita | Log(GDP per Capita) |
|---|---|---|---|
| Constant | -86400.07 *** | 280416.48 *** | 531003.39 |
| | (13361.77) | (79511.27) | (470172.38) |
| Economic Freedom Score (0-10) | 14704.30 *** | -96618.52 *** | -211572.48 |
| | (1935.13) | (23908.06) | (213908.44) |
| Economic Freedom Squared | | 8326.61 *** | 25674.41 |
| | | (1783.32) | (32127.38) |
| Economic Freedom Cubed | | | -862.15 |
| | | | (1594.20) |
| N | 112 | 112 | 112 |
| R-Squared | 0.34 | 0.45 | 0.45 |
| SER | 15881.92 | 14564.43 | 14611.94 |

*** p < 0.001; ** p < 0.01; * p < 0.05.