How to Use Regression Analysis Effectively

A Guide to Avoiding the Common Pitfalls of Regression Modeling

So, you want to use regression analysis in your paper? While statistical modeling can lend great authority to the conclusions you draw, it is also easy to use incorrectly.

The worst-case scenario occurs when you think you’ve done everything right and therefore reach a strong conclusion based on an improperly conceived model. This guide presents a series of suggestions and considerations to take into account before you decide to use regression analysis in your paper.

The best regression model is based on a strong theoretical foundation that demonstrates not just that A and B are related, but why A and B are related.

How to Use Regression

Before you start, ask yourself two important questions: is your research question a good fit for regression analysis? And, do you have access to good data?

1. Is Your Research Question a Good Fit for Regression Analysis?

This depends on many different factors. Are you trying to explain something that is primarily described by numerical values? This is a key question to ask yourself before you decide to use regression. Although there are ways to model non-numerical outcomes (e.g., logistic regression for dichotomous yes/no or probabilistic outcomes), these methods are more complicated, and you will need a much deeper understanding of the underlying principles of regression to use them effectively.

Before you start, consider whether or not your dependent variable is numerical. Some examples:

  • Number of years a politician serves in the Senate
  • Life expectancy
  • Lifetime earnings
  • Age at birth of first child

At the same time, you need to make sure that there is sufficient variation in your dependent variable and that this variation follows a roughly normal distribution.

For example, you would have a problem if you tried to predict the likelihood of someone being elected president, because almost no one is elected president. As a result, there is virtually no variation in the dependent variable.
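As a quick illustration, here is a minimal sketch in Python (assuming numpy and scipy are available, with made-up data) of how you might check whether an outcome has enough variation to model:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # A continuous outcome with healthy, roughly normal variation:
    life_expectancy = rng.normal(78, 6, 1000)  # hypothetical sample
    print(f"life expectancy: std = {life_expectancy.std():.2f}, "
          f"skew = {stats.skew(life_expectancy):.2f}")

    # A degenerate outcome: "elected president" is almost always 0,
    # so there is virtually nothing for a regression to explain.
    elected = np.zeros(1000)
    elected[:2] = 1
    print(f"elected president: std = {elected.std():.3f} (virtually no variation)")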

2. Do You Have Access to Good Data?

Before you can conduct any type of analysis, you need a good data set. Not all data sets are easily suited to regression analysis without considerable manipulation.

Some things to consider before you decide to use regression:

  • Are most of your independent variables numerical in nature? The best data set for regression will have variables that are primarily described by numbers varying on a continuous scale. If, on the other hand, most of your variables are categorical, you might consider a different method of analysis (e.g., a Chi-squared test; see the brief sketch after this list).
  • Are there enough cases (n) in your data set? Particularly if you think you might use multiple regression, where multiple independent variables are used to predict a single dependent variable, you need to have a sufficient number of cases in your sample to obtain significant results. A general rule of thumb is that you need at least 20 cases per independent variable in your model. So if your model includes 5 independent variables, you need a minimum of 100 cases.
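If your data turn out to be mostly categorical, a Chi-squared test of independence is one alternative. Here is a minimal sketch in Python (assuming scipy is available, with hypothetical counts):

    from scipy.stats import chi2_contingency

    # Hypothetical contingency table: two groups (rows) by
    # three categorical responses (columns).
    table = [[30, 45, 25],
             [35, 30, 35]]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")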

Keep in mind that your independent variables need to meet the same criteria for normality and variability as your dependent variable.

 

Once you decide to proceed with a regression model in your analysis, there are three key concepts to keep in mind as you design your model to avoid making an easily preventable mistake that could send your conclusions way off track.

  • Parsimony
  • Internal Validity
  • Multicollinearity

Each is described in more detail below.

Parsimony

In statistics, the principle of parsimony holds that when a model with more variables offers only slightly more explanatory value, the simpler model with fewer independent variables should be preferred. In other words, one should not add variables to a model that do not increase its ability to explain something.

Only add variables to a model if they significantly increase the ability of the model to explain something.

If you add too many variables to your model, you can unwittingly introduce major problems to your analysis.

In the extreme case, consider that your R² value will never decrease as you add new variables: so if you examine R² alone, you can be duped into thinking that you have a great model simply by dumping in more and more predictor variables.

There are two good ways to address this problem: use an Adjusted R² to compare models with different numbers of predictors, and use stepwise regression to analyze the explanatory impact of each variable as it is added to the model.

  • Adjusted R² takes into consideration the number of variables used in the model, and only increases when a newly added variable explains more than would be expected by chance alone. So although a model with 10 variables might have a very high R², its Adjusted R² could actually be much lower than that of a model with fewer variables. Selecting your model based on Adjusted R² helps you choose a more parsimonious model that is less likely to have other problems (e.g., see multicollinearity below); the sketch after this list illustrates the difference.
  • Stepwise Regression is a computational method of assessing the additional explanatory value of each variable as variables are added to the model in different orders. It can be used to parse superfluous variables out of a model; however, it needs to be used carefully and in concert with theoretical guidance to avoid overfitting your data.
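To make the contrast concrete, here is a minimal sketch in Python (assuming numpy and statsmodels are available, and using simulated data) in which R² creeps upward as junk predictors are added while Adjusted R² does not:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    x1 = rng.normal(size=n)              # one genuinely predictive variable
    noise = rng.normal(size=(n, 5))      # five pure-noise "predictors"
    y = 2.0 * x1 + rng.normal(size=n)

    # Model A: just the meaningful predictor.
    fit_a = sm.OLS(y, sm.add_constant(x1)).fit()

    # Model B: the same predictor plus the five noise variables.
    fit_b = sm.OLS(y, sm.add_constant(np.column_stack([x1, noise]))).fit()

    # R² never decreases as predictors are added, but Adjusted R²
    # penalizes the extra variables that explain nothing.
    print(f"Model A: R2 = {fit_a.rsquared:.4f}, adj R2 = {fit_a.rsquared_adj:.4f}")
    print(f"Model B: R2 = {fit_b.rsquared:.4f}, adj R2 = {fit_b.rsquared_adj:.4f}")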

A good rule of thumb as you consider different models: always have a good reason to add a predictor variable to your model, and if you can’t come up with a good theoretical explanation as to why A influences B, leave A out!

Internal Validity

Internal validity is the degree to which one factor can be said to cause another factor based on three basic criteria:

  1. Temporal precedence, i.e., the “cause” precedes the “effect.”
  2. Covariation, i.e., the “cause” and “effect” are demonstrably related.
  3. Nonspuriousness, i.e., the observed covariation cannot be explained by a plausible alternative, such as a confounding variable.

In many cases, internal validity becomes an issue in the form of a “chicken and egg” problem.

For example, let’s say you are considering the relationship between obesity and depression (a common example). If you want to include depression as an independent variable to explain obesity in your model, you first need to consider the question:

Does depression lead to obesity, or does obesity lead to depression?

If you have no clear theoretical guidance showing that, in fact, depression usually precedes obesity (temporal precedence), you could introduce a significant problem into your model if the relationship actually runs the other way, with depression being the result of obesity.

Therefore, as you craft your model it is important to have a theoretical basis for the inclusion of each variable.

Multicollinearity

Multicollinearity occurs when the independent variables in a multiple regression model are highly correlated with one another. This can be a problem in several ways:

  • It reduces the parsimony of your model when two predictors are highly similar (e.g., two different variables that effectively measure the same thing);
  • It can lead to erratic changes in the coefficients (the measured effects) of predictor variables;
  • As a result, it can be difficult to interpret the results of a model with high multicollinearity among predictors; in the extreme, it becomes impossible to discern the individual effect of each regressor.

The clearest example of highly multicollinear variables is any set of variables that effectively measure the same thing. One way to demonstrate this is to imagine converting categorical data into a series of binary (dummy) variables.

Any variables that effectively measure the same concept are likely to have high collinearity.

For example, let’s say that we have a variable measuring memory where respondents are able to choose very good, average, or poor as a response.

One way to use this data in a regression model would be to convert the data into three dichotomous (yes/no) variables indicating a person’s response.

However, if you then include all three of these dichotomous variables in your model alongside an intercept, you will have a big problem: they become perfectly multicollinear. This is because anyone who indicated that they have a very good memory, by default, also indicated that they do not have an average or poor memory. All three variables measure the same thing: a person’s memory. The standard remedy is to drop one category and treat it as the reference level, as the sketch below shows.
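Here is a minimal sketch in Python (assuming numpy and pandas are available, with made-up survey responses) of the trap and the standard fix:

    import numpy as np
    import pandas as pd

    # Hypothetical survey responses on self-rated memory.
    memory = pd.Series(["very good", "average", "poor",
                        "average", "very good", "poor"])

    # One dummy per category, plus an intercept column of ones.
    dummies = pd.get_dummies(memory).astype(float)
    X_trap = np.column_stack([np.ones(len(memory)), dummies])

    # The dummies sum to 1 in every row, duplicating the intercept,
    # so the design matrix is rank-deficient (perfect multicollinearity).
    print(np.linalg.matrix_rank(X_trap), "of", X_trap.shape[1], "columns are independent")

    # Standard remedy: drop one category as the reference level.
    reduced = pd.get_dummies(memory, drop_first=True).astype(float)
    X_ok = np.column_stack([np.ones(len(memory)), reduced])
    print(np.linalg.matrix_rank(X_ok), "of", X_ok.shape[1], "columns are independent")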

Another common example can be found in the use of height and weight variables. Although the two variables measure different things, broadly speaking they can both be said to measure a person’s body size, and they will almost always be highly correlated.

As a result, if both variables are included as predictors in a model, it can be difficult to discern the effect that each variable has individually on the outcome (measured by the coefficient).

Thus, as you build your model, you need to be aware of the potentially confounding impact of using highly similar predictor variables. In an ideal model, the independent variables will have little or no correlation with each other, but a high correlation with the dependent variable. A common diagnostic for this is the variance inflation factor (VIF), sketched below.
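Here is a minimal sketch in Python (assuming numpy, pandas, and statsmodels are available, with simulated height/weight data) of a VIF check; a common rule of thumb treats values above roughly 5–10 as a warning sign:

    import numpy as np
    import pandas as pd
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(1)
    n = 300
    height = rng.normal(170, 10, n)
    weight = 0.9 * height + rng.normal(0, 3, n)  # strongly driven by height
    age = rng.normal(40, 12, n)                  # unrelated, for contrast

    X = pd.DataFrame({"const": 1.0, "height": height,
                      "weight": weight, "age": age})

    # VIF = 1 / (1 - R²) from regressing each predictor on the others;
    # height and weight should show far higher values than age.
    for i in range(1, X.shape[1]):  # skip the constant column
        print(f"{X.columns[i]:>6}: VIF = {variance_inflation_factor(X.values, i):.1f}")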

 

Conclusion: Use Regression Effectively by Keeping it Simple

Regression analysis can be a powerful explanatory tool and a highly persuasive way of demonstrating relationships between complex phenomena, but it is also easy to misuse if you are not an expert statistician.

If you decide to use regression analysis, you shouldn’t ask it to do too much: don’t force your data to explain something that you otherwise can’t explain!

Moreover, regression should only be used where it is appropriate and where there is sufficient quantity and quality of data to give the analysis meaning beyond your sample. If you can’t generalize beyond your sample, you really haven’t explained anything at all.

Lastly, always keep in mind that the best regression model is based on a strong theoretical foundation that demonstrates not just that A and B are related, but why A and B are related.

If you keep all of these things in mind, you will be on your way to crafting a powerful and persuasive argument.
