Foundational Time Series Model Multi Series

POST /v2/forecast
Example request:

curl --request POST \
  --url https://api.nixtla.io/v2/forecast \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "series": {
    "y": [
      123
    ],
    "sizes": [
      123
    ],
    "X_future": [
      [
        123
      ]
    ],
    "X": [
      [
        123
      ]
    ]
  },
  "freq": "<string>",
  "h": 123,
  "model": "timegpt-1",
  "clean_ex_first": true,
  "level": [
    50
  ],
  "finetune_steps": 0,
  "finetune_loss": "default",
  "finetune_depth": 1,
  "finetuned_model_id": "<string>",
  "feature_contributions": false
}
'

Example response:

{
  "input_tokens": 1,
  "output_tokens": 1,
  "finetune_tokens": 1,
  "mean": [
    123
  ],
  "intervals": {},
  "weights_x": [
    123
  ],
  "feature_contributions": [
    [
      123
    ]
  ]
}
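
For reference, here is the same call from Python using the requests library. This is a minimal sketch: the values are illustrative placeholders, and leaving out X and X_future when no exogenous variables are used is an assumption rather than something stated on this page.

import requests

API_URL = "https://api.nixtla.io/v2/forecast"
TOKEN = "<token>"  # your API key

payload = {
    "series": {
        # Concatenated target values and per-series lengths (see the series field below).
        "y": [10.0, 12.0, 11.5, 20.0, 21.0, 19.5],
        "sizes": [3, 3],
    },
    "freq": "D",        # daily observations
    "h": 2,             # forecast two steps ahead
    "model": "timegpt-1",
    "level": [80, 90],  # request 80% and 90% prediction intervals
}

resp = requests.post(
    API_URL,
    json=payload,  # also sets Content-Type: application/json
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
forecast = resp.json()
print(forecast["mean"])       # point forecasts
print(forecast["intervals"])  # prediction intervals for the requested levels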

Authorizations

Authorization
string
header
required

HTTPBearer authentication. Pass your API key in the Authorization header as Bearer <token>.

Body

application/json
series
SeriesWithFutureExogenous · object
required
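
The series object has no field-level description on this page. Judging from the example request body above, y appears to hold the concatenated target values of all series, sizes the length of each series, and X / X_future the historical and future values of the exogenous features. Under those assumptions (including the orientation of the 2-D exogenous arrays, which is not documented here), a two-series payload with one exogenous feature might be assembled like this:

# Two series, one exogenous feature; field semantics as assumed above.
series_a = [10.0, 12.0, 11.5]
series_b = [20.0, 21.0, 19.5]
exog_hist_a, exog_hist_b = [0.1, 0.2, 0.3], [1.0, 1.1, 1.2]
exog_future_a, exog_future_b = [0.4, 0.5], [1.3, 1.4]  # covers h = 2 steps

series_payload = {
    "y": series_a + series_b,                     # concatenated targets
    "sizes": [len(series_a), len(series_b)],      # length of each series
    "X": [exog_hist_a + exog_hist_b],             # historical exogenous values (orientation assumed)
    "X_future": [exog_future_a + exog_future_b],  # exogenous values over the horizon (orientation assumed)
}
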
freq
string
required

The frequency of the data represented as a string. 'D' for daily, 'M' for monthly, 'H' for hourly, and 'W' for weekly frequencies are available.

h
integer
required

The forecasting horizon. This represents the number of time steps into the future that the forecast should predict.

model
any
default:timegpt-1

Model to use, as a string. Common options are (but not restricted to) timegpt-1 and timegpt-1-long-horizon; the full set of options varies by account. Contact [email protected] for more information. We recommend timegpt-1-long-horizon when forecasting more than one seasonal period ahead, given the frequency of your data.

clean_ex_first
boolean
default:true

A boolean flag that indicates whether the API should preprocess (clean) the exogenous signal before applying the large time model. If True, the exogenous signal is cleaned; if False, the exogenous variables are applied after the large time model.
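
As an illustration of how these fields interact: forecasting 48 steps of hourly data covers two daily seasonal periods, so the long-horizon model is the recommended choice. The snippet below is a sketch with placeholder settings.

request_core = {
    "freq": "H",                        # hourly observations
    "h": 48,                            # predict 48 steps (two days) ahead
    "model": "timegpt-1-long-horizon",  # recommended when h spans more than one seasonal period
    "clean_ex_first": True,             # preprocess exogenous signals before forecasting
}
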

level
(integer | number)[] | null

A list of values representing the prediction intervals. Each value is a percentage that indicates the level of certainty for the corresponding prediction interval. For example, [80, 90] defines 80% and 90% prediction intervals.

Minimum array length: 1
Required range: 0 <= x < 100
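
For example, 80% and 90% intervals can be requested as shown below. The key layout of the intervals object in the response is not documented on this page, so inspect it before relying on specific keys.

payload = {
    "series": {"y": [10.0, 12.0, 11.5], "sizes": [3]},
    "freq": "D",
    "h": 2,
    "level": [80, 90],  # 80% and 90% prediction intervals
}
# In the response, forecast["intervals"] carries the bounds for each requested level.
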
finetune_steps
integer
default:0

The number of tuning steps used to train the large time model on the data. Set this value to 0 for zero-shot inference, i.e., to make predictions without any further model tuning.

Required range: x >= 0
finetune_loss
enum<string>
default:default

The loss function used to fine-tune the model on the data. It is only used when finetune_steps is larger than 0. The default option is a robust loss function that is less sensitive to outliers.

Available options:
default,
mae,
mse,
rmse,
mape,
smape,
poisson
finetune_depth
enum<integer>
default:1

The depth of fine-tuning, on a scale from 1 to 5, where 1 fine-tunes only a small part of the model and 5 fine-tunes the entire model. Defaults to 1.

Available options:
1,
2,
3,
4,
5
finetuned_model_id
string | null

ID of a previously fine-tuned model to use for the forecast.
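
Taken together, the fine-tuning fields might be combined as in the sketch below; the values are illustrative rather than recommended settings, and the fine-tuned model ID is a placeholder.

payload = {
    "series": {"y": [10.0, 12.0, 11.5], "sizes": [3]},
    "freq": "D",
    "h": 2,
    "finetune_steps": 10,    # > 0 enables fine-tuning (0 keeps zero-shot inference)
    "finetune_loss": "mae",  # only applied because finetune_steps > 0
    "finetune_depth": 2,     # 1 = light fine-tuning, 5 = fine-tune the entire model
    # Alternatively, reuse a model fine-tuned in an earlier request (placeholder ID):
    # "finetuned_model_id": "my-finetuned-model-id",
}
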

feature_contributions
boolean
default:false

Whether to compute the contribution of each exogenous feature to the forecast.
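
A request that asks for contributions could look like the following sketch. The note that weights_x and feature_contributions come back null when no exogenous features are used is an assumption, not something stated on this page.

payload = {
    "series": {
        "y": [10.0, 12.0, 11.5],
        "sizes": [3],
        "X": [[0.1, 0.2, 0.3]],    # one exogenous feature (orientation assumed)
        "X_future": [[0.4, 0.5]],  # exogenous values for the 2-step horizon
    },
    "freq": "D",
    "h": 2,
    "feature_contributions": True,  # ask for per-feature contributions
}
# In the response, forecast["feature_contributions"] holds the contributions and
# forecast["weights_x"] the exogenous feature weights; both are assumed to be null
# when no exogenous features are supplied.
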

Response

Successful Response

input_tokens
integer
required

Number of input tokens consumed by the request.

Required range: x >= 0
output_tokens
integer
required

Number of output tokens produced by the request.

Required range: x >= 0
finetune_tokens
integer
required

Number of tokens used to fine-tune the model for this request.

Required range: x >= 0
mean
number[]
required

The mean (point) forecasts for the requested horizon.

intervals
Intervals · object

The prediction intervals for each level requested in level.

weights_x
number[] | null

Weights of the exogenous features, when exogenous variables are used.

feature_contributions
number[][] | null

Contributions of the exogenous features to the forecast, returned when feature_contributions is set to true in the request.
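
Because the request describes multiple series through a flat y plus sizes, a natural reading is that the flat mean array concatenates h forecast values per series, in input order. Under that assumption (not confirmed on this page), the response can be split back per series:

def split_mean_by_series(mean, n_series, h):
    # Assumes the response concatenates h forecast values per series,
    # in the same order as the input series.
    return [mean[i * h:(i + 1) * h] for i in range(n_series)]

# Example: two series, horizon of 2 (illustrative values).
mean = [11.7, 11.9, 20.3, 20.1]
print(split_mean_by_series(mean, n_series=2, h=2))
# -> [[11.7, 11.9], [20.3, 20.1]]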