Perform Cross Validation for multiple series
Authentication: HTTPBearer. Requests must carry the API key as a bearer token.
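Below is a minimal sketch of how such an authenticated request could look from Python with the requests library. The endpoint URL and the JSON field names are illustrative assumptions, not values taken from this reference.

```python
import requests

# Hypothetical endpoint URL, shown only to illustrate HTTP bearer authentication.
URL = "https://api.nixtla.io/cross_validation"

headers = {
    "Authorization": "Bearer <YOUR_API_KEY>",  # HTTPBearer: the API key travels as a bearer token
    "Content-Type": "application/json",
}

# Field names here are assumed for illustration; see the parameter descriptions below.
payload = {"freq": "D", "h": 7, "n_windows": 5}

response = requests.post(URL, headers=headers, json=payload)
response.raise_for_status()
print(response.json())
```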
The frequency of the data, represented as a string. Available frequencies are 'D' (daily), 'M' (monthly), 'H' (hourly), and 'W' (weekly).
Number of windows to evaluate.
The forecasting horizon. This represents the number of time steps into the future that the forecast should predict.
Model to use, specified as a string. Common options are (but are not restricted to) timegpt-1 and timegpt-1-long-horizon; the full set of available options varies by user. Contact [email protected] for more information. We recommend timegpt-1-long-horizon if you want to forecast more than one seasonal period given the frequency of your data.
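As a concrete example, the sketch below exercises the parameters described so far (frequency, horizon, number of windows, and model) through the nixtla Python SDK's NixtlaClient.cross_validation. The SDK call, its parameter names, and the synthetic data are assumptions made for illustration; they mirror the fields described here but may differ across SDK versions.

```python
import numpy as np
import pandas as pd
from nixtla import NixtlaClient  # assumes the nixtla Python SDK is installed

client = NixtlaClient(api_key="<YOUR_API_KEY>")

# Two synthetic daily series in long format: unique_id, ds (timestamp), y (target).
rng = np.random.default_rng(0)
dates = pd.date_range("2024-01-01", periods=200, freq="D")
df = pd.concat(
    pd.DataFrame(
        {
            "unique_id": uid,
            "ds": dates,
            "y": 10 + np.sin(np.arange(200) * 2 * np.pi / 7) + rng.normal(0, 0.3, 200),
        }
    )
    for uid in ("series_a", "series_b")
)

cv_df = client.cross_validation(
    df=df,
    freq="D",           # daily data
    h=14,               # forecast 14 steps ahead in every window
    n_windows=4,        # evaluate 4 cross-validation windows
    model="timegpt-1",  # consider "timegpt-1-long-horizon" when h spans more than one seasonal period
)
print(cv_df.head())
```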
A boolean flag that indicates whether the API should preprocess (clean) the exogenous signal before applying the large time model. If True, the exogenous signal is cleaned; if False, the exogenous variables are applied after the large time model.
A list of values representing the prediction intervals. Each value is a percentage that indicates the level of certainty for the corresponding prediction interval, and must satisfy 10 <= x < 100. For example, [80, 90] defines 80% and 90% prediction intervals.
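For example, requesting 80% and 90% intervals could look like the following, continuing from the client and df of the previous sketch. The interval column names shown are those produced by the assumed nixtla SDK and may differ.

```python
# Request 80% and 90% prediction intervals for each cross-validation window.
cv_df = client.cross_validation(
    df=df,
    freq="D",
    h=14,
    n_windows=4,
    level=[80, 90],       # prediction interval levels
    clean_ex_first=True,  # let the API clean exogenous signals first (a no-op here: df has no exogenous columns)
)

# Interval bounds come back as extra columns, e.g. "TimeGPT-lo-90" and "TimeGPT-hi-90".
print([c for c in cv_df.columns if "-lo-" in c or "-hi-" in c])
```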
The number of tuning steps used to train the large time model on the data (x >= 0). Set this value to 0 for zero-shot inference, i.e., to make predictions without any further model tuning.
The loss used to train the large time model on the data. Select from ['default', 'mae', 'mse', 'rmse', 'mape', 'smape', 'poisson']. It is only used if finetune_steps is larger than 0. The default is a robust loss function that is less sensitive to outliers.
The depth of the finetuning, on a scale from 1 to 5: 1 means light finetuning and 5 means the entire model is finetuned. Allowed values are 1, 2, 3, 4, and 5. By default, the value is set to 1.
ID of a previously finetuned model.
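The finetuning parameters compose as in the following sketch, which continues from the client and df defined above; the call and parameter names again assume the nixtla Python SDK.

```python
# Light finetuning inside the cross-validation loop (continues from the sketch above).
cv_finetuned = client.cross_validation(
    df=df,
    freq="D",
    h=14,
    n_windows=4,
    finetune_steps=30,    # > 0 enables finetuning; 0 means zero-shot inference
    finetune_loss="mae",  # loss used during finetuning; "default" is the robust option
    finetune_depth=2,     # 1 = light finetuning, 5 = the entire model is finetuned
    # finetuned_model_id="<model-id>",  # optionally resume from a previously finetuned model (assumed parameter)
)
print(cv_finetuned.head())
```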
Step size between each cross validation window (x > 0). If None, it will be equal to the forecasting horizon.
Zero-based indices (x >= 0) of the exogenous features to treat as historical.
Whether to finetune the model in each window. If False, the model is only finetuned on the first window. Only used if finetune_steps > 0.
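A final sketch ties the window-related controls together: step_size spaces the rolling windows, hist_exog_list marks exogenous features as history-only, and refit limits finetuning to the first window. As before, the SDK call and its parameter names are assumptions; note that the assumed SDK takes exogenous column names where the raw API field above uses zero-based indices.

```python
# Add a toy exogenous column and control how the rolling windows are laid out
# (continues from the earlier sketches).
df_exog = df.copy()
df_exog["promo"] = (df_exog["ds"].dt.dayofweek >= 5).astype(int)  # weekend flag as an exogenous feature

cv_windows = client.cross_validation(
    df=df_exog,
    freq="D",
    h=14,
    n_windows=4,
    step_size=7,               # start a new window every 7 steps instead of every h steps
    hist_exog_list=["promo"],  # treat "promo" as historical-only; the raw API field uses zero-based indices instead
    finetune_steps=30,
    refit=False,               # finetune on the first window only and reuse that model for the rest
)
print(cv_windows.head())
```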