Logistic regression

Description

This procedure allows you to analyse the relationship between one dichotomous dependent variable and one or more independent variables.

Logistic regression is a technique for analyzing problems in which there are one or more independent variables that determine an outcome. The outcome is measured with a dichotomous variable (in which there are only two possible outcomes).

In logistic regression, the dependent variable is binary or dichotomous, i.e. it only contains data coded as 1 (TRUE, success, pregnant, etc.) or 0 (FALSE, failure, non-pregnant, etc.).

The goal of logistic regression is to find the best-fitting (yet biologically reasonable) model to describe the relationship between the dichotomous characteristic of interest (dependent variable = response or outcome variable) and a set of independent (predictor or explanatory) variables. Logistic regression generates the coefficients (and their standard errors and significance levels) of a formula to predict a logit transformation of the probability of presence of the characteristic of interest:

logit(p) = b0 + b1X1 + b2X2 + ... + bkXk

where p is the probability of presence of the characteristic of interest. The logit transformation is defined as the logged odds:

odds = p / (1 − p)

and

logit(p) = ln(p / (1 − p))

Rather than choosing parameters that minimize the sum of squared errors (like in ordinary regression), estimation in logistic regression chooses parameters that maximize the likelihood of observing the sample values.
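As an illustration of such a maximum-likelihood fit, the following is a minimal sketch in Python using the statsmodels package. The data, the variable names AGE and SMOKING, and the coefficient values are hypothetical and serve only as an example.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical example data: age in years, smoking (0/1) and a binary outcome
    rng = np.random.default_rng(0)
    data = pd.DataFrame({
        "AGE": rng.integers(20, 70, size=200),
        "SMOKING": rng.integers(0, 2, size=200),
    })
    true_logit = -9.0 + 0.25 * data["AGE"] + 1.0 * data["SMOKING"]
    data["OUTCOME"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

    # Maximum-likelihood estimation of the logistic regression coefficients
    X = sm.add_constant(data[["AGE", "SMOKING"]])
    model = sm.Logit(data["OUTCOME"], X).fit(disp=0)
    print(model.params)    # coefficients b0, b1, b2
    print(model.bse)       # standard errors
    print(model.pvalues)   # significance levels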

Required input

  • Dependent variable

    The variable whose values you want to predict. The dependent variable must be binary or dichotomous, and should only contain data coded as 0 or 1.

  • Independent variables

    Select the different variables that you expect to influence the dependent variable.

  • Optionally select a filter to include a subset of cases.

Options

Method: select the way independent variables are entered into the model.

  • Enter: enter all variables into the model in one single step, without checking their significance
  • Forward: enter significant variables sequentially
  • Backward: first enter all variables into the model and then remove the non-significant variables sequentially
  • Stepwise: enter significant variables sequentially; after entering a variable into the model, check and possibly remove variables that have become non-significant.

Enter variable if P<

A variable is entered into the model if its associated significance level is less than this P-value.

Remove variable if P>

A variable is removed from the model if its associated significance level is greater than this P-value.
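To make the selection methods concrete, the following is a minimal sketch of a forward selection loop in Python with statsmodels, using the "Enter variable if P<" threshold described above. It only illustrates the general idea, not necessarily the procedure's actual implementation, and the function name forward_select is hypothetical.

    import statsmodels.api as sm

    def forward_select(X, y, p_enter=0.05):
        """Greedy forward selection: repeatedly add the candidate variable with
        the smallest P-value, as long as that P-value is below p_enter."""
        selected, remaining = [], list(X.columns)
        while remaining:
            pvalues = {}
            for var in remaining:
                fit = sm.Logit(y, sm.add_constant(X[selected + [var]])).fit(disp=0)
                pvalues[var] = fit.pvalues[var]
            best = min(pvalues, key=pvalues.get)
            if pvalues[best] < p_enter:
                selected.append(best)
                remaining.remove(best)
            else:
                break
        return selected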

Classification table cutoff value: a value between 0 and 1 which will be used as a cutoff value for a classification table. The classification table is a method to evaluate the logistic regression model. In this table the observed values for the dependent outcome and the predicted values (at the selected cut-off value) are cross-classified.

Results

Sample size and cases with negative and positive outcome

First, the program reports the sample size and the number and proportion of cases with a negative (Y=0) or positive (Y=1) outcome.

Overall model fit

The null model −2 Log Likelihood is given by −2 * ln(L0) where L0 is the likelihood of obtaining the observations if the independent variables had no effect on the outcome.

The full model −2 Log Likelihood is given by −2 * ln(L) where L is the likelihood of obtaining the observations with all independent variables incorporated in the model.

The difference between these two yields a Chi-squared statistic, which is a measure of how well the independent variables, taken together, predict the outcome or dependent variable.

If the P-value for the overall model fit statistic is less than the conventional 0.05 then there is evidence that at least one of the independent variables contributes to the prediction of the outcome.

Cox & Snell R² and Nagelkerke R² are other goodness-of-fit measures known as pseudo R-squared statistics. Note that Cox & Snell's pseudo R-squared has a maximum value that is not 1. Nagelkerke R² adjusts Cox & Snell's so that the range of possible values extends to 1.
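Given a fitted statsmodels logistic regression result (such as the model object in the earlier sketch), these overall fit statistics could be reproduced along the following lines. This is a sketch under the assumption that statsmodels is used; the procedure's own output may be computed and presented differently.

    import numpy as np
    from scipy.stats import chi2

    def overall_fit(model):
        """Overall model fit statistics for a fitted statsmodels Logit result."""
        ll_full, ll_null = model.llf, model.llnull   # ln(L) and ln(L0)
        n, k = model.nobs, model.df_model            # observations, independent variables
        lr_chi2 = (-2 * ll_null) - (-2 * ll_full)    # difference of the two -2 Log Likelihoods
        p_value = chi2.sf(lr_chi2, df=k)
        cox_snell = 1 - np.exp(2 * (ll_null - ll_full) / n)
        nagelkerke = cox_snell / (1 - np.exp(2 * ll_null / n))
        return lr_chi2, p_value, cox_snell, nagelkerke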

Regression coefficients

The regression coefficients are the coefficients b0, b1, b2, ... bk of the regression equation:

logit(p) = b0 + b1X1 + b2X2 + ... + bkXk

An independent variable with a regression coefficient not significantly different from 0 (P>0.05) can be removed from the regression model (press function key F7 to repeat the logistic regression procedure). If P<0.05 then the variable contributes significantly to the prediction of the outcome variable.

The logistic regression coefficients show the change (an increase when bi>0, a decrease when bi<0) in the predicted logged odds of having the characteristic of interest for a one-unit change in the corresponding independent variable.

When the independent variables Xa and Xb are dichotomous variables (e.g. Smoking, Sex) then the influence of these variables on the dependent variable can simply be compared by comparing their regression coefficients ba and bb.

The Wald statistic is the square of the regression coefficient divided by its standard error: (b/SE)².
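As a small worked illustration, the Wald statistic and its P-value for a single coefficient could be computed as follows; the coefficient and standard error are arbitrary example numbers.

    from scipy.stats import chi2

    b, se = 0.972, 0.40              # hypothetical coefficient and its standard error
    wald = (b / se) ** 2             # Wald statistic
    p_value = chi2.sf(wald, df=1)    # compared with a Chi-squared distribution, 1 df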

Odds ratios with 95% CI

By taking the exponential of both sides of the regression equation as given above, the equation can be rewritten as:

odds = p / (1 − p) = e^(b0 + b1X1 + b2X2 + ... + bkXk) = e^(b0) × e^(b1X1) × e^(b2X2) × ... × e^(bkXk)

It is clear that when a variable Xi increases by 1 unit, with all other factors remaining unchanged, the odds will increase by a factor e^(bi):

odds(Xi + 1) / odds(Xi) = e^(bi(Xi + 1)) / e^(biXi) = e^(bi)

This factor e^(bi) is the odds ratio (O.R.) for the independent variable Xi and it gives the relative amount by which the odds of the outcome increase (O.R. greater than 1) or decrease (O.R. less than 1) when the value of that independent variable is increased by 1 unit.

E.g. The variable SMOKING is coded as 0 (= no smoking) and 1 (= smoking), and the odds ratio for this variable is 2.64. This means that in the model the odds for a positive outcome in cases that do smoke are 2.64 times higher than in cases that do not smoke.
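Numerically, the odds ratio and its 95% confidence interval are obtained by exponentiating the coefficient and its confidence limits, as in this small sketch with arbitrary example numbers:

    import numpy as np

    b, se = 0.972, 0.40               # hypothetical coefficient and standard error
    odds_ratio = np.exp(b)            # e^(bi)
    ci_low = np.exp(b - 1.96 * se)    # lower 95% confidence limit
    ci_high = np.exp(b + 1.96 * se)   # upper 95% confidence limit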

Interpretation of the fitted equation

When the logistic regression equation is for example:

logit(p) = −8.986 + 0.251 x AGE + 0.972 x SMOKING

then for a 40-year-old case who smokes, logit(p) = −8.986 + 0.251 x 40 + 0.972 x 1 = 2.026. Logit(p) can be back-transformed to p by the following formula:

p = e^(logit(p)) / (1 + e^(logit(p))) = 1 / (1 + e^(−logit(p)))

Alternatively, you can use the Logit table. For logit(p)=2.026 the probability p of having a positive outcome equals 0.88.
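The same back-transformation can also be done directly, as in this short sketch for the example above:

    import numpy as np

    # Fitted equation from the example: logit(p) = -8.986 + 0.251 x AGE + 0.972 x SMOKING
    age, smoking = 40, 1
    logit_p = -8.986 + 0.251 * age + 0.972 * smoking   # = 2.026
    p = 1 / (1 + np.exp(-logit_p))                      # back-transformed probability, approx. 0.88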

Hosmer & Lemeshow test

The Hosmer-Lemeshow test is a statistical test for goodness of fit for the logistic regression model. The data are divided into approximately ten groups defined by increasing order of estimated risk. The observed and expected number of cases in each group is calculated and a Chi-squared statistic is calculated as follows:

Chi-squared = Σ (g = 1 to G) (Og − Eg)² / [Eg (1 − Eg / ng)]

with Og, Eg and ng the observed events, expected events and number of observations for the gth risk decile group, and G the number of groups. The test statistic follows a Chi-squared distribution with G−2 degrees of freedom.

A large Chi-squared value (with a small P-value, less than 0.05) indicates poor fit, while small Chi-squared values (with a larger P-value, closer to 1) indicate a good logistic regression model fit.

The Contingency Table for Hosmer and Lemeshow Test table shows the details of the test with observed and expected number of cases in each group.
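A minimal sketch of this calculation in Python is shown below; it groups the cases into ten approximately equal-sized risk deciles, which illustrates the general idea of the test but is not necessarily identical to how the procedure forms the groups.

    import numpy as np
    from scipy.stats import chi2

    def hosmer_lemeshow(y, p, n_groups=10):
        """Hosmer-Lemeshow test: y are 0/1 outcomes, p are predicted probabilities."""
        order = np.argsort(p)
        y, p = np.asarray(y)[order], np.asarray(p)[order]
        groups = np.array_split(np.arange(len(y)), n_groups)   # risk deciles
        statistic = 0.0
        for idx in groups:
            n_g = len(idx)
            o_g = y[idx].sum()      # observed events in group g
            e_g = p[idx].sum()      # expected events in group g
            statistic += (o_g - e_g) ** 2 / (e_g * (1 - e_g / n_g))
        return statistic, chi2.sf(statistic, df=n_groups - 2)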

Classification table

The classification table is another method to evaluate the logistic regression model. In this table the observed values for the dependent outcome and the predicted values (at a user defined cut-off value, for example p=0.50) are cross-classified.
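For example, with observed outcomes and predicted probabilities from a fitted model, such a classification table could be built with pandas as sketched below; the data are arbitrary example values.

    import numpy as np
    import pandas as pd

    y = np.array([0, 0, 1, 1, 1, 0, 1, 0])                  # observed outcomes
    p = np.array([0.2, 0.6, 0.8, 0.4, 0.9, 0.1, 0.7, 0.3])  # predicted probabilities

    cutoff = 0.50
    predicted = (p >= cutoff).astype(int)
    table = pd.crosstab(pd.Series(y, name="Observed"),
                        pd.Series(predicted, name="Predicted"))
    print(table)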

ROC curve analysis

Another method to evaluate the logistic regression model makes use of ROC curve analysis. In this analysis, the power of the model's predicted values to discriminate between positive and negative cases is quantified by the Area under the ROC curve (AUC). The AUC, sometimes referred to as the c-statistic (or concordance index), is a value that varies from 0.5 (discriminating power not better than chance) to 1.0 (perfect discriminating power).
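For instance, the AUC for the model's predicted probabilities could be computed with scikit-learn as in this sketch, reusing the arbitrary example data from the classification table above:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    y = np.array([0, 0, 1, 1, 1, 0, 1, 0])                  # observed outcomes
    p = np.array([0.2, 0.6, 0.8, 0.4, 0.9, 0.1, 0.7, 0.3])  # predicted probabilities

    auc = roc_auc_score(y, p)   # 0.5 = no better than chance, 1.0 = perfect discrimination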
