

Time Series - Neural Network Models

When dealing with neural networks, the model-finding process is very similar to that for ARIMA models. The three phases,

  • model selection,
  • parameter estimation, and
  • performance checking,


can also be distinguished, but the terminology is usually quite different. Moreover, the heuristics guiding the model selection process are not as detailed as for ARIMA models; this is partly due to the non-linear mapping produced by neural networks. A thorough analysis of the time series, involving at least a trend analysis, a check for seasonal patterns, and a check of the autocorrelation, is definitely advisable, because it reveals the relevant properties of the data.

When using a standard feed-forward neural network, the number of layers and units has to be selected; in addition, various properties of the training algorithm may be altered. In general, the goal is to find small models: they have fewer degrees of freedom and therefore require less training data to obtain reliable results, whereas large neural networks need huge amounts of data. The hidden layers should be small enough to allow generalization, yet large enough to produce the required mapping.
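To make the three phases concrete, the following Python sketch compares hidden-layer sizes for a feed-forward network applied to time windows (the windowing idea is explained in the next section). It assumes scikit-learn's MLPRegressor, a synthetic noisy sine series, and an arbitrary window size of 5; it is one possible procedure, not a prescribed one:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    series = np.sin(np.linspace(0.0, 30.0, 500)) + 0.1 * rng.standard_normal(500)

    # Turn the series into (time window, next value) pairs; window size 5 is arbitrary.
    w = 5
    X = np.array([series[t - w:t] for t in range(w, len(series))])
    y = series[w:]

    # Keep the temporal order: train on the first part, validate on the rest.
    split = int(0.8 * len(X))
    X_train, X_val = X[:split], X[split:]
    y_train, y_val = y[:split], y[split:]

    # Model selection: try increasingly large hidden layers and prefer the
    # smallest network whose validation error no longer improves noticeably.
    for hidden in (2, 4, 8, 16):
        net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=2000, random_state=0)
        net.fit(X_train, y_train)                         # parameter estimation
        mse = np.mean((net.predict(X_val) - y_val) ** 2)  # performance checking
        print(f"{hidden:2d} hidden units: validation MSE = {mse:.5f}")

Preferring the smallest network with acceptable validation error directly reflects the rule stated above: fewer degrees of freedom, less training data required.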

Window Networks:

Since standard neural networks were not developed for handling sequences of inputs, either the input has to be pre-processed or the model has to be adapted to temporal tasks. Pre-processing is the easier of the two strategies: it turns a sequence of time-series elements into a single input, which can, for instance, be achieved by sliding a so-called "time window" over the sequence. The following figure demonstrates how this works:

[Figure: Window network]

The input consists of the preceding sequence elements, and the neural network is trained to forecast the next sequence element. (Annotation: this resembles an AR model from the class of ARIMA models, with the window size equal to the order of the AR model.) Using a time window has the advantage that standard neural networks, and therefore standard neural network modeling tools, can be used.
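As a minimal sketch of this windowing step (assuming a univariate series stored as a NumPy array; the helper name make_windows is purely illustrative), the pre-processing can be written in Python as follows:

    import numpy as np

    def make_windows(series, window_size):
        """Slide a time window over the series; each input row holds the
        preceding elements, the target is the next sequence element."""
        X = [series[t - window_size:t] for t in range(window_size, len(series))]
        y = series[window_size:]
        return np.array(X), np.array(y)

    # A window size of 3 corresponds to the order of an AR(3) model.
    series = np.sin(np.linspace(0.0, 12.0, 200))
    X, y = make_windows(series, window_size=3)
    print(X.shape, y.shape)   # (197, 3) (197,)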

Time Delay Neural Networks (TDNN):

A large number of sophisticated neural network models tailored to processing sequences of inputs have been developed. They are all equipped with some kind of "memory" that keeps information over time; neural networks without such a memory "forget" each input after mapping it to the output. In time delay networks, the connections have time delays of different lengths. Such a time delay postpones the forwarding of a unit's activation to another unit. When multiple time delays are used, these networks can be trained to deal with a sequence of past time-series elements: at each time step, a single sequence element is fed into the input, but the network's forecast takes preceding sequence elements into account. While several time delays at the input form a time window, another set of time delays at the hidden layer level duplicates this effect. In the networks described by Wan, the weights of delayed connections are not trained separately, which keeps the number of degrees of freedom low. The resulting time delay networks are powerful tools for handling sequential input.
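To illustrate the idea of shared delay weights, here is a minimal NumPy sketch of a single time delay unit (the tanh activation and the name tdnn_unit are illustrative assumptions): the same weight vector is applied at every time step, so the delayed connections are not trained separately.

    import numpy as np

    def tdnn_unit(x, weights, bias=0.0):
        """One unit with shared time delay weights: at every time step t the
        unit sees x[t], x[t-1], ..., x[t-d+1], and the *same* weight vector
        is applied at each t, so the delayed connections share parameters
        (this is equivalent to a 1-D convolution, i.e. an FIR filter)."""
        d = len(weights)                  # number of taps = maximum delay + 1
        out = np.empty(len(x) - d + 1)
        for t in range(len(out)):
            window = x[t:t + d][::-1]     # most recent element first
            out[t] = np.tanh(np.dot(weights, window) + bias)
        return out

    x = np.sin(np.linspace(0.0, 6.0, 50))
    h = tdnn_unit(x, weights=np.array([0.5, 0.3, 0.2]))
    print(h.shape)   # (48,): each activation summarizes 3 consecutive inputs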

Recurrent Networks:

While window networks and time delay networks are non-recurrent nets, networks with feedback loops belong to the group of recurrent networks. Here, unit activations are not only delayed while being fed forward through the network, but are also delayed and fed back to preceding layers. In this way, information can cycle within the network, which, at least in theory, allows an unlimited number of past activations to be taken into account.
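A minimal sketch of such a feedback loop, assuming a simple recurrent (Elman-style) architecture with a scalar input series, tanh hidden units, and random untrained weights (all names are illustrative):

    import numpy as np

    def recurrent_forward(inputs, W_in, W_rec, W_out):
        """Forward pass of a simple recurrent network: the hidden activation
        is delayed by one step and fed back to the hidden layer, so
        information from all past inputs can, in principle, keep cycling."""
        h = np.zeros(W_rec.shape[0])            # hidden state: the network's "memory"
        outputs = []
        for x in inputs:
            h = np.tanh(W_in * x + W_rec @ h)   # feedback: new h depends on old h
            outputs.append(W_out @ h)           # forecast at this time step
        return np.array(outputs)

    rng = np.random.default_rng(1)
    n_hidden = 4
    W_in = rng.standard_normal(n_hidden)                      # input -> hidden
    W_rec = 0.5 * rng.standard_normal((n_hidden, n_hidden))   # hidden -> hidden feedback
    W_out = rng.standard_normal(n_hidden)                     # hidden -> output
    y = recurrent_forward(np.sin(np.linspace(0.0, 6.0, 30)), W_in, W_rec, W_out)
    print(y.shape)   # (30,): each output is influenced by the entire past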