
Online vs offline learning?

Online learning means that you update the model as the data comes in; offline learning means that you train on a static dataset.

In offline learning, all the data are stored and can be accessed repeatedly; batch learning is always offline. In online learning, each case is discarded after it is processed and the weights are updated; online training is always incremental.

With offline learning, you can compute the objective function for any fixed set of weights, so you can see whether you are making progress in training. You can compute a minimum of the objective function to any desired precision. You can use a variety of algorithms for avoiding bad local minima, such as multiple random initializations or global optimization. You can compute the error function on a validation set and use early stopping, or choose among different networks, to improve generalization. You can use cross-validation and bootstrapping to estimate generalization error. You can compute prediction and confidence intervals (error bars).

With online learning you can do none of these things, because the data are not stored: you cannot compute the objective function on the training set, or the error function on a validation set, for a fixed set of weights. Hence online learning is generally more difficult and less reliable than offline learning.

Offline incremental learning does not have these problems of online learning, which is why it is important to distinguish between the concepts of online and incremental learning. Some of the theoretical difficulties of online learning are alleviated when the assumptions of stochastic learning hold (see below) and training is assumed to proceed indefinitely.
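To make the distinction concrete, here is a minimal sketch in Python with NumPy, using a hypothetical toy regression problem (the data, learning rates, and epoch counts are all made up for illustration). It contrasts a batch loop, which can re-evaluate its objective on the stored data at every step, with an online loop that sees each case once and then discards it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 2x + 1 plus noise (hypothetical example).
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.1 * rng.standard_normal(200)


def predict(w, b, X):
    return w * X[:, 0] + b


def mse(w, b, X, y):
    return np.mean((predict(w, b, X) - y) ** 2)


# --- Offline (batch) learning: the full dataset is stored, so the
# objective can be computed for any fixed weights, at any time.
w, b, lr = 0.0, 0.0, 0.1
for epoch in range(100):
    err = predict(w, b, X) - y
    w -= lr * 2 * np.mean(err * X[:, 0])
    b -= lr * 2 * np.mean(err)
    # Progress is measurable because all data remain accessible:
    loss = mse(w, b, X, y)
print(f"offline: w={w:.3f}, b={b:.3f}, final MSE={loss:.4f}")

# --- Online learning: each case updates the weights once and is
# then discarded, so the training objective for a fixed set of
# weights cannot be recomputed afterwards.
w, b, lr = 0.0, 0.0, 0.05
for x_i, y_i in zip(X[:, 0], y):  # stream: one case at a time
    err = (w * x_i + b) - y_i
    w -= lr * 2 * err * x_i
    b -= lr * 2 * err
    # x_i, y_i are gone after this iteration; there is no stored
    # dataset against which to plot a loss curve, run early
    # stopping, or estimate generalization error.
print(f"online:  w={w:.3f}, b={b:.3f}")
```

Note that the offline loop could equally well apply incremental (per-case) updates while still keeping the dataset stored; that would be offline incremental learning, and the diagnostics above would all still be available.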
