A linear regression model assumes that the mean outcome, y, conditional on observable predictors, X, is linear in parameters b, written E(y|X) = Xb, in which case the parameters b can be solved for directly. Nonlinear regression models can be difficult to estimate and fragile: the optimizer may converge to an apparent solution that is in fact incorrect. A generalized linear model is a robust and easily estimated alternative that assumes instead that some function of the conditional expectation is linear in b, written E(y|X) = g(Xb), where g() is the inverse link function.
For example, a log link assumes that the log of E(y|X) is Xb, so that E(y|X) is exp(Xb), with the exponential function being the inverse link function. A generalized linear model with a log link is well suited to nonnegative outcomes, since the conditional expectation exp(Xb) is always positive (though individual outcomes can be zero or even negative). The generalized linear model framework encompasses many familiar models, including Poisson, logit, fractional logit, and probit models.
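As a concrete sketch of the log-link case, the snippet below simulates Poisson data whose conditional mean is exp(Xb) and recovers b by Newton-Raphson on the Poisson log-likelihood. The simulated coefficients (0.5 and 1.0), the sample size, and the hand-rolled fitting loop are all illustrative assumptions, not a recommendation over an established GLM routine.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])   # intercept plus one predictor
b_true = np.array([0.5, 1.0])          # assumed "true" parameters for the simulation
mu = np.exp(X @ b_true)                # inverse link: E(y|X) = exp(Xb)
y = rng.poisson(mu)

# Newton-Raphson for the Poisson log-likelihood with a log link.
# Gradient: X'(y - mu); Hessian (negated): X' diag(mu) X.
b = np.zeros(2)
for _ in range(25):
    m = np.exp(X @ b)
    grad = X.T @ (y - m)
    hess = X.T @ (X * m[:, None])
    b = b + np.linalg.solve(hess, grad)

print(b)  # estimates should land close to b_true
```

In practice one would use a library implementation (for example, statsmodels' GLM with a log link), which handles convergence checks and standard errors; the loop above only illustrates that the log-link model is fit by maximizing a well-behaved likelihood rather than by a fragile nonlinear search.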