# Logistic regression

## Description

This procedure allows you to analyse the relationship between one dichotomous dependent variable and one or more independent variables.

Logistic regression is a technique for analyzing problems in which there are one or more independent variables that determine an outcome. The outcome is measured with a dichotomous variable (in which there are only two possible outcomes).

In logistic regression, the dependent variable is binary or dichotomous, i.e. it only contains data coded as 1 (TRUE, success, pregnant, etc.) or 0 (FALSE, failure, non-pregnant, etc.).

The goal of logistic regression is to find the best fitting (yet biologically reasonable) model to describe the relationship between the dichotomous characteristic of interest (dependent variable = response or outcome variable) and a set of independent (predictor or explanatory) variables. Logistic regression generates the coefficients (and their standard errors and significance levels) of a formula to predict a *logit transformation* of the probability of presence of the characteristic of interest:

logit(p) = b_{0} + b_{1}X_{1} + b_{2}X_{2} + ... + b_{k}X_{k}

where p is the probability of presence of the characteristic of interest. The logit transformation is defined as the logged odds:

odds = p / (1 − p)

and

logit(p) = ln( p / (1 − p) )

Rather than choosing parameters that minimize the sum of squared errors (like in ordinary regression), estimation in logistic regression chooses parameters that maximize the likelihood of observing the sample values.
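As an illustration of this maximum-likelihood principle, the following Python sketch minimizes −2 ln(L) directly with a general-purpose optimizer. The data, variable names and the use of scipy are hypothetical assumptions for the example and do not reflect the program's own solver.

```python
import numpy as np
from scipy.optimize import minimize

def neg2_log_likelihood(b, X, y):
    """-2 * ln(L) for coefficients b, design matrix X (first column = 1 for the intercept) and 0/1 outcome y."""
    logit = X @ b                       # b0 + b1*X1 + ... + bk*Xk for every case
    p = 1.0 / (1.0 + np.exp(-logit))    # predicted probability of a positive outcome
    eps = 1e-12                         # guard against log(0)
    return -2.0 * np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Hypothetical example data: one predictor plus an intercept column of ones.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x)))).astype(float)
X = np.column_stack([np.ones_like(x), x])

fit = minimize(neg2_log_likelihood, x0=np.zeros(X.shape[1]), args=(X, y), method="BFGS")
print("estimated coefficients:", fit.x)   # should be close to the true values (0.5, 1.2)
```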

## Required input

- Dependent variable
The variable whose values you want to predict. The dependent variable must be binary or dichotomous, and should only contain data coded as 0 or 1.

- Independent variables
Select the different variables that you expect to influence the dependent variable.

- Optionally select a filter to include a subset of cases.

### Options

Method: select the way independent variables are entered into the model.

- Enter: enter all variables in the model in one single step, without checking their significance
- Forward: enter significant variables sequentially (a forward-selection sketch is given at the end of this section)
- Backward: first enter all variables into the model and next remove the non-significant variables sequentially
- Stepwise: enter significant variables sequentially; after entering a variable in the model, check and possibly remove variables that became non-significant.

Enter variable if P<: a variable is entered into the model if its associated significance level is less than this P-value.

Remove variable if P>: a variable is removed from the model if its associated significance level is greater than this P-value.

Classification table cutoff value: a value between 0 and 1 which will be used as a cutoff value for a classification table. The classification table is a method to evaluate the logistic regression model. In this table the observed values for the dependent outcome and the predicted values (at the selected cut-off value) are cross-classified.
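The Forward method and the "Enter variable if P<" threshold described above can be illustrated with a small Python sketch: variables are added one at a time as long as the best remaining candidate's P-value is below the entry threshold. The data frame, column names and the use of the candidate's Wald P-value are illustrative assumptions; the program's exact entry and removal tests may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_select(data, outcome, candidates, p_enter=0.05):
    """Illustrative forward selection: add variables while the best candidate has P < p_enter."""
    selected, remaining = [], list(candidates)
    while remaining:
        pvals = {}
        for var in remaining:
            X = sm.add_constant(data[selected + [var]])
            fit = sm.Logit(data[outcome], X).fit(disp=0)
            pvals[var] = fit.pvalues[var]          # Wald P-value of the candidate variable
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:                 # no candidate meets "Enter variable if P<"
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical data frame with one informative and one uninformative predictor.
rng = np.random.default_rng(2)
df = pd.DataFrame({"AGE": rng.uniform(20, 70, 300), "NOISE": rng.normal(size=300)})
df["OUTCOME"] = (rng.random(300) < 1 / (1 + np.exp(-(-10 + 0.25 * df["AGE"])))).astype(int)
print(forward_select(df, "OUTCOME", ["AGE", "NOISE"]))
```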

## Results

### Sample size and cases with negative and positive outcome

First, the program reports the sample size and the number and proportion of cases with a negative (Y=0) and a positive (Y=1) outcome.

### Overall model fit

The *null model* −2 Log Likelihood is given by −2 * ln(L_{0}) where L_{0} is the likelihood of obtaining the observations if the independent variables had no effect on the outcome.

The *full model* −2 Log Likelihood is given by −2 * ln(L) where L is the likelihood of obtaining the observations with all independent variables incorporated in the model.

The difference between these two yields a Chi-squared statistic, which is a measure of how well the independent variables predict the outcome or dependent variable.

If the P-value for the overall model fit statistic is less than the conventional 0.05 then there is evidence that at least one of the independent variables contributes to the prediction of the outcome.

Cox & Snell R^{2} and Nagelkerke R^{2} are other goodness of fit measures known as pseudo R-squareds. Note that Cox & Snell's pseudo R-squared has a maximum value that is not 1. Nagelkerke R^{2} adjusts Cox & Snell's so that the range of possible values extends to 1.
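A short Python sketch of these overall model fit statistics, using a statsmodels Logit fit on hypothetical data (the variable names AGE and SMOKING mirror the worked example further below); the pseudo R-squared formulas follow the definitions given above.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

# Hypothetical data: two predictors and a 0/1 outcome.
rng = np.random.default_rng(1)
age = rng.uniform(20, 70, 300)
smoking = rng.integers(0, 2, 300)
true_logit = -9.0 + 0.25 * age + 1.0 * smoking
y = (rng.random(300) < 1 / (1 + np.exp(-true_logit))).astype(int)

X = sm.add_constant(np.column_stack([age, smoking]))
result = sm.Logit(y, X).fit(disp=0)

n = len(y)
null_m2ll = -2 * result.llnull                       # null model -2 Log Likelihood
full_m2ll = -2 * result.llf                          # full model -2 Log Likelihood
chi_sq = null_m2ll - full_m2ll                       # overall model fit Chi-squared
p_value = chi2.sf(chi_sq, df=X.shape[1] - 1)         # df = number of independent variables

cox_snell = 1 - np.exp((full_m2ll - null_m2ll) / n)  # 1 - (L0/L)^(2/n)
nagelkerke = cox_snell / (1 - np.exp(2 * result.llnull / n))
print(f"Chi-squared = {chi_sq:.2f}, P = {p_value:.4f}")
print(f"Cox & Snell R2 = {cox_snell:.3f}, Nagelkerke R2 = {nagelkerke:.3f}")
```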

### Regression coefficients

The regression coefficients are the coefficients b_{0}, b_{1}, b_{2}, ... b_{k} of the regression equation:

logit(p) = b_{0} + b_{1}X_{1} + b_{2}X_{2} + ... + b_{k}X_{k}

An independent variable with a regression coefficient not significantly different from 0 (P>0.05) can be removed from the regression model (press function key F7 to repeat the logistic regression procedure). If P<0.05 then the variable contributes significantly to the prediction of the outcome variable.

The logistic regression coefficients show the change (increase when b_{i}>0, decrease when b_{i}<0) in the predicted logged odds of having the characteristic of interest for a one-unit change in the independent variable.

When the independent variables X_{a} and X_{b} are dichotomous variables (e.g. Smoking, Sex) then the influence of these variables on the dependent variable can simply be compared by comparing their regression coefficients b_{a} and b_{b}.

The Wald statistic is the square of the ratio of the regression coefficient to its standard error: (b/SE)^{2}.
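A small sketch of this calculation, with a hypothetical coefficient and standard error; the Wald statistic is compared against a Chi-squared distribution with 1 degree of freedom.

```python
from scipy.stats import chi2

b, se = 0.972, 0.345            # hypothetical coefficient and standard error
wald = (b / se) ** 2            # Wald statistic = (b/SE)^2
p_value = chi2.sf(wald, df=1)   # P-value from a Chi-squared distribution with 1 df
print(f"Wald = {wald:.2f}, P = {p_value:.4f}")
```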

### Odds ratios with 95% CI

By taking the exponential of both sides of the regression equation as given above, the equation can be rewritten as:

odds = p / (1 − p) = e^{b_{0} + b_{1}X_{1} + b_{2}X_{2} + ... + b_{k}X_{k}}

It is clear that when a variable X_{i} increases by 1 unit, with all other factors remaining unchanged, then the odds will increase by a factor e^{b_{i}}.

This factor e^{b_{i}} is the odds ratio (O.R.) for the independent variable X_{i} and it gives the *relative* amount by which the odds of the outcome increase (O.R. greater than 1) or decrease (O.R. less than 1) when the value of the independent variable is increased by 1 unit.

E.g. The variable SMOKING is coded as 0 (= no smoking) and 1 (= smoking), and the odds ratio for this variable is 2.64. This means that in the model the odds for a positive outcome in cases that do smoke are 2.64 times higher than in cases that do not smoke.
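A small Python sketch of the odds ratio and its 95% confidence interval from a coefficient and its standard error. The coefficient 0.972 matches the SMOKING coefficient in the worked example below (e^{0.972} is about 2.64); the standard error is a hypothetical value.

```python
import numpy as np

b, se = 0.972, 0.345             # coefficient (from the worked example) and a hypothetical SE
odds_ratio = np.exp(b)           # e^b, about 2.64
ci_low = np.exp(b - 1.96 * se)   # lower limit of the 95% CI
ci_high = np.exp(b + 1.96 * se)  # upper limit of the 95% CI
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```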

### Interpretation of the fitted equation

When the logistic regression equation is for example:

logit(p) = −8.986 + 0.251 x AGE + 0.972 x SMOKING

then for 40-year-old cases who smoke, logit(p) equals −8.986 + 0.251 x 40 + 0.972 x 1 = 2.026. Logit(p) can be back-transformed to p by the following formula:

p = 1 / (1 + e^{−logit(p)})

Alternatively, you can use the Logit table. For logit(p)=2.026 the probability p of having a positive outcome equals 0.88.
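A short sketch of this worked example, evaluating the fitted equation for a 40-year-old smoker and back-transforming logit(p) to a probability:

```python
import numpy as np

age, smoking = 40, 1
logit_p = -8.986 + 0.251 * age + 0.972 * smoking   # = 2.026
p = 1 / (1 + np.exp(-logit_p))                      # back-transformation, about 0.88
print(f"logit(p) = {logit_p:.3f}, p = {p:.2f}")
```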

### Hosmer & Lemeshow test

The Hosmer-Lemeshow test is a statistical test of goodness of fit for the logistic regression model. The data are divided into approximately ten groups defined by increasing order of estimated risk. The observed and expected number of cases in each group is calculated and a Chi-squared statistic is calculated as follows:

Chi-squared_{HL} = Σ_{g=1}^{G} (O_{g} − E_{g})^{2} / ( E_{g} (1 − E_{g} / n_{g}) )

with O_{g}, E_{g} and n_{g} the observed events, expected events and number of observations for the g^{th} risk decile group, and G the number of groups. The test statistic follows a Chi-squared distribution with G−2 degrees of freedom.

A large value of Chi-squared (with a small P-value, < 0.05) indicates poor fit, and small Chi-squared values (with a P-value closer to 1) indicate a good logistic regression model fit.

The **Contingency Table for Hosmer and Lemeshow Test** table shows the details of the test with observed and expected number of cases in each group.
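A minimal sketch of this calculation, assuming arrays of observed 0/1 outcomes and model-predicted probabilities. The grouping here uses ten equal-sized groups of cases ordered by predicted risk, which may differ slightly from the program's grouping when there are ties.

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow Chi-squared and P-value for 0/1 outcomes y and predicted probabilities p."""
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    stat = 0.0
    for g in np.array_split(np.arange(len(y)), groups):   # risk groups of (nearly) equal size
        n_g = len(g)
        o_g = y[g].sum()                                   # observed events in the group
        e_g = p[g].sum()                                   # expected events in the group
        stat += (o_g - e_g) ** 2 / (e_g * (1 - e_g / n_g))
    return stat, chi2.sf(stat, df=groups - 2)              # Chi-squared with G-2 degrees of freedom

# Hypothetical outcomes and predicted probabilities.
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, 300)
y = (rng.random(300) < p).astype(int)
stat, p_value = hosmer_lemeshow(y, p)
print(f"Chi-squared = {stat:.2f}, P = {p_value:.3f}")
```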

### Classification table

The classification table is another method to evaluate the logistic regression model. In this table the observed values for the dependent outcome and the predicted values (at a user defined cut-off value, for example p=0.50) are cross-classified.
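A small sketch of such a cross-classification at a cutoff of 0.50, with hypothetical observed outcomes and predicted probabilities:

```python
import numpy as np

y = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])                          # observed outcomes
p = np.array([0.2, 0.6, 0.8, 0.4, 0.9, 0.1, 0.7, 0.3, 0.55, 0.95])    # predicted probabilities
cutoff = 0.50
predicted = (p >= cutoff).astype(int)

table = np.zeros((2, 2), dtype=int)
for obs, pred in zip(y, predicted):
    table[obs, pred] += 1                          # rows: observed 0/1, columns: predicted 0/1
percent_correct = 100 * np.trace(table) / table.sum()
print(table)
print(f"correctly classified: {percent_correct:.0f}%")
```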

### ROC curve analysis

Another method to evaluate the logistic regression model makes use of ROC curve analysis. In this analysis, the power of the model's predicted values to discriminate between positive and negative cases is quantified by the Area under the ROC curve (AUC). The AUC, sometimes referred to as the c-statistic (or concordance index), is a value that varies from 0.5 (discriminating power not better than chance) to 1.0 (perfect discriminating power).
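As an illustration, the area under the ROC curve can be computed from the observed outcomes and the model's predicted probabilities. The scikit-learn function used here is one possible implementation, not necessarily the one used by the program.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])                          # observed outcomes
p = np.array([0.2, 0.6, 0.8, 0.4, 0.9, 0.1, 0.7, 0.3, 0.55, 0.95])    # predicted probabilities
auc = roc_auc_score(y, p)   # 0.5 = no better than chance, 1.0 = perfect discrimination
print(f"AUC = {auc:.2f}")
```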
