Autoregressive modeling is typically considered a supervised learning task, but with a unique characteristic: the data essentially supervises itself.
In autoregressive models, we use previous values in a sequence to predict the next value.

For example, if we have a time series [1, 2, 3, 4], we might use:

  • [1] to predict [2]
  • [1, 2] to predict [3]
  • [1, 2, 3] to predict [4]
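A minimal sketch of how these pairs can be constructed in code (the function name `make_pairs` is illustrative, not from a specific library):

```python
def make_pairs(series):
    """Build (input, target) pairs: each prefix of the sequence
    serves as the input, and the following value is the target."""
    return [(series[:i], series[i]) for i in range(1, len(series))]

pairs = make_pairs([1, 2, 3, 4])
# pairs == [([1], 2), ([1, 2], 3), ([1, 2, 3], 4)]
```

Note that no labels were supplied from outside: every target was read directly off the sequence itself.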

This creates natural input-output pairs where:

  • The inputs are the previous values
  • The target (output) is the next value in the sequence

So while we don’t need external labels like in traditional supervised learning, the data inherently provides its own supervision through this sequential structure. The model learns to map from past values to future values, making it fundamentally a supervised learning problem.
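To make the "map from past values to future values" concrete, here is a minimal sketch of fitting an AR(1) model by least squares; the function name `ar1_fit` and the toy doubling series are illustrative assumptions, not part of any particular library:

```python
def ar1_fit(series):
    """Least-squares fit of x_t ≈ a * x_{t-1}.
    This is ordinary supervised regression, except the targets
    are simply the shifted version of the input series."""
    xs = series[:-1]  # inputs: previous values
    ys = series[1:]   # targets: next values
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

a = ar1_fit([1.0, 2.0, 4.0, 8.0])  # a doubling series
# a == 2.0: the model learned "next = 2 * previous" from the data alone
```

The fit uses nothing but the series and its one-step-shifted copy, which is exactly the self-supervision described above.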

However, because the labels come from the data itself rather than from external annotation, many researchers describe it as “self-supervised learning” – a form of supervised learning in which the supervision signal is derived from the data itself.

Categorized in: Short answers

Last Update: 19/01/2025