autots.models package

Submodules

autots.models.arch module

Arch Models from arch package.

class autots.models.arch.ARCH(name: str = 'ARCH', frequency: str = 'infer', prediction_interval: float = 0.9, mean: str = 'Constant', lags: int = 2, vol: str = 'GARCH', p: int = 1, o: int = 0, q: int = 1, power: float = 2.0, dist: str = 'normal', rescale: bool = False, maxiter: int = 200, simulations: int = 1000, regression_type: str | None = None, return_result_windows: bool = False, holiday_country: str = 'US', random_seed: int = 2022, verbose: int = 0, n_jobs: int | None = None, **kwargs)

Bases: ModelObject

ARCH model family from the arch package. See the arch package docs for argument details. Not to be confused with the Linux distro.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – type of regression (None, ‘User’, or ‘Holiday’)

  • n_jobs (int) – passed to joblib for multiprocessing. Set to none for context manager.

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generate forecast data immediately following dates of index supplied to .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts
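A minimal usage sketch (requires the arch package; the synthetic data and argument values are illustrative, not from the AutoTS docs):

import numpy as np
import pandas as pd
from autots.models.arch import ARCH

idx = pd.date_range("2022-01-01", periods=200, freq="D")
df = pd.DataFrame({"series_1": np.random.randn(200).cumsum(),
                   "series_2": np.random.randn(200).cumsum()}, index=idx)

model = ARCH(frequency="infer", prediction_interval=0.9, p=1, q=1)
model.fit(df)
prediction = model.predict(forecast_length=14)
prediction.forecast          # point forecasts, one column per series
prediction.upper_forecast    # upper bound at the prediction_interval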

autots.models.base module

Base model information

@author: Colin

class autots.models.base.ModelObject(name: str = 'Uninitiated Model Name', frequency: str = 'infer', prediction_interval: float = 0.9, regression_type: str | None = None, fit_runtime=datetime.timedelta(0), holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, n_jobs: int = -1)

Bases: object

Generic class for holding forecasting models.

Models should all have methods:

.fit(df, future_regressor=[]) - taking a DataFrame with DatetimeIndex and n columns of n timeseries

.predict(forecast_length=int, future_regressor=[], just_point_forecast=False)

.get_new_params() - return a dictionary of weighted random selected parameters
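A minimal subclass sketch of this contract (the last-value logic, class name, and attribute names are assumptions for illustration, not AutoTS internals):

import numpy as np
import pandas as pd
from autots.models.base import ModelObject, PredictionObject

class MyLastValueModel(ModelObject):
    """Sketch only: repeats the last observed row forward."""

    def __init__(self, name="MyLastValueModel", frequency="infer",
                 prediction_interval=0.9, **kwargs):
        super().__init__(name=name, frequency=frequency,
                         prediction_interval=prediction_interval)

    def fit(self, df, future_regressor=None):
        self.basic_profile(df)              # documented requirement for create_forecast_index
        self.last_values = df.iloc[-1]
        return self

    def predict(self, forecast_length: int, future_regressor=None,
                just_point_forecast=False):
        index = self.create_forecast_index(forecast_length)
        forecast = pd.DataFrame(
            np.tile(self.last_values.values, (forecast_length, 1)),
            index=index, columns=self.last_values.index,
        )
        if just_point_forecast:
            return forecast
        return PredictionObject(
            model_name=self.name, forecast_length=forecast_length,
            forecast_index=index, forecast_columns=forecast.columns,
            forecast=forecast, lower_forecast=forecast, upper_forecast=forecast,
            prediction_interval=self.prediction_interval,
        )

    def get_new_params(self, method: str = "random"):
        return {}    # no tunable parameters in this sketch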

Parameters:
  • name (str) – Model Name

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • n_jobs (int) – used by some models that parallelize to multiple cores

basic_profile(df)

Capture basic training details.

create_forecast_index(forecast_length: int, last_date=None)

Generate a pd.DatetimeIndex appropriate for a new forecast.

Warning

Requires ModelObject.basic_profile() to be called as part of .fit()

fit_data(df, future_regressor=None)
get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

static time()
class autots.models.base.PredictionObject(model_name: str = 'Uninitiated', forecast_length: int = 0, forecast_index=nan, forecast_columns=nan, lower_forecast=nan, forecast=nan, upper_forecast=nan, prediction_interval: float = 0.9, predict_runtime=datetime.timedelta(0), fit_runtime=datetime.timedelta(0), model_parameters={}, transformation_parameters={}, transformation_runtime=datetime.timedelta(0), per_series_metrics=nan, per_timestamp=nan, avg_metrics=nan, avg_metrics_weighted=nan, full_mae_error=None, model=None, transformer=None)

Bases: object

Generic class for holding forecast information.

model_name
model_parameters
transformation_parameters
forecast
upper_forecast
lower_forecast
long_form_results()

return complete results in long form

total_runtime()

return runtime for all model components in seconds

plot()
evaluate()
apply_constraints()
apply_constraints(constraints=None, df_train=None, constraint_method=None, constraint_regularization=None, upper_constraint=None, lower_constraint=None, bounds=True)

Use constraint thresholds to adjust outputs by limit.

Example

apply_constraints(
    constraints=[
        {  # don't exceed historic max
            "constraint_method": "quantile",
            "constraint_value": 1.0,
            "constraint_direction": "upper",
            "constraint_regularization": 1.0,
            "bounds": True,
        },
        {  # don't exceed 2% decline by end of forecast horizon
            "constraint_method": "slope",
            "constraint_value": {"slope": -0.02, "window": 28, "window_agg": "min", "threshold": -0.01},
            "constraint_direction": "lower",
            "constraint_regularization": 0.9,
            "bounds": False,
        },
        {  # don't exceed 2% growth by end of forecast horizon
            "constraint_method": "slope",
            "constraint_value": {"slope": 0.02, "window": 10, "window_agg": "max", "threshold": 0.01},
            "constraint_direction": "upper",
            "constraint_regularization": 0.9,
            "bounds": False,
        },
        {  # don't go below the last 10 values - 10%
            "constraint_method": "last_window",
            "constraint_value": {"window": 10, "threshold": -0.1},
            "constraint_direction": "lower",
            "constraint_regularization": 1.0,
            "bounds": False,
        },
        {  # don't go below zero
            "constraint_method": "absolute",
            "constraint_value": 0,  # can also be an array or Series
            "constraint_direction": "lower",
            "constraint_regularization": 1.0,
            "bounds": True,
        },
        {  # don't go below historic min - 1 st dev
            "constraint_method": "stdev_min",
            "constraint_value": 1.0,
            "constraint_direction": "lower",
            "constraint_regularization": 1.0,
            "bounds": True,
        },
        {  # don't go above historic mean + 3 st devs, soft limit
            "constraint_method": "stdev",
            "constraint_value": 3.0,
            "constraint_direction": "upper",
            "constraint_regularization": 0.5,
            "bounds": True,
        },
        {  # round decimals to 2 places
            "constraint_method": "round",
            "constraint_value": 2,
        },
        {  # apply dampening (gradually flatten out forecast)
            "constraint_method": "dampening",
            "constraint_value": 0.98,
        },
    ]
)

Parameters:
  • constraint_method (str) – one of:
    stdev_min - threshold is min and max of historic data +/- constraint * st dev of data
    stdev - threshold is the mean of historic data +/- constraint * st dev of data
    absolute - input is array of length series containing the threshold's final value for each
    quantile - constraint is the quantile of historic data to use as threshold

  • constraint_regularization (float) – 0 to 1 where 0 means no constraint, 1 is hard threshold cutoff, and in between is penalty term

  • upper_constraint (float) – or array, depending on method, None if unused

  • lower_constraint (float) – or array, depending on method, None if unused

  • bounds (bool) – if True, apply to upper/lower forecast, otherwise False applies only to forecast

  • df_train (pd.DataFrame) – required for quantile/stdev methods to find threshold values

Returns:

self

evaluate(actual, series_weights: dict | None = None, df_train=None, per_timestamp_errors: bool = False, full_mae_error: bool = True, scaler=None, cumsum_A=None, diff_A=None, last_of_array=None, column_names=None, custom_metric=None)

Evaluate prediction against test actuals. Fills out attributes of base object.

This fails with pd.NA values supplied.

Parameters:
  • actual (pd.DataFrame) – dataframe of actual values of (forecast length * n series)

  • series_weights (dict) – key = column/series_id, value = weight

  • df_train (pd.DataFrame) – historical values of series, wide; used for setting the scaler for SPL, and necessary for MADE and Contour if forecast_length == 1. If None, actuals are used instead (suboptimal).

  • per_timestamp_errors (bool) – whether to calculate and return per timestamp direction errors

  • custom_metric (callable) – a function to generate a custom metric. Expects func(A, F, df_train, prediction_interval) where the first three are wide-style 2d np arrays.

Returns:

per_series_metrics (pandas.DataFrame): contains a column for each series containing accuracy metrics

per_timestamp (pandas.DataFrame): smape accuracy for each timestamp, avg of all series

avg_metrics (pandas.Series): average values of accuracy across all input series

avg_metrics_weighted (pandas.Series): average values of accuracy across all input series, weighted by series_weight if given

full_mae_errors (numpy.array): abs(actual - forecast)

scaler (numpy.array): precomputed scaler for efficiency, avg value of series in order of columns
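A hedged usage sketch (prediction is a PredictionObject from an earlier .predict() call; actual_df and train_df are placeholder wide DataFrames of held-out actuals and history):

prediction.evaluate(actual_df, df_train=train_df)
prediction.avg_metrics           # pd.Series of average metric values
prediction.per_series_metrics    # pd.DataFrame, one column of metrics per series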

extract_ensemble_runtimes()

Return a dataframe of final runtimes per model for standard ensembles.

long_form_results(id_name='SeriesID', value_name='Value', interval_name='PredictionInterval', update_datetime_name=None, datetime_column=None)

Export forecasts (including upper and lower) as single ‘long’ format output

Parameters:
  • id_name (str) – name of column containing ids

  • value_name (str) – name of column containing numeric values

  • interval_name (str) – name of column telling you what is upper/lower

  • datetime_column (str) – if None, is index, otherwise, name of column for datetime

  • update_datetime_name (str) – if not None, adds column with current timestamp and this name

Returns:

pd.DataFrame
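A hedged usage sketch (prediction as in the earlier examples):

long_df = prediction.long_form_results(id_name="SeriesID", value_name="Value",
                                        interval_name="PredictionInterval")
# rows are stacked across series, datetimes, and point/upper/lower forecasts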

plot(df_wide=None, series: str | None = None, remove_zeroes: bool = False, interpolate: str | None = None, start_date: str = 'auto', alpha=0.3, facecolor='black', loc='upper right', title=None, title_substring=None, vline=None, colors=None, include_bounds=True, **kwargs)

Generate an example plot of one series. Does not handle non-numeric forecasts.

Parameters:
  • df_wide (str) – historic data for plotting actuals

  • series (str) – column name of series to plot. Random if None.

  • ax – matplotlib axes to pass through to pd.plot()

  • remove_zeroes (bool) – if True, don’t plot any zeroes

  • interpolate (str) – if not None, a method to pass to pandas interpolate

  • start_date (str) – Y-m-d string or Timestamp to remove all data before

  • vline (datetime) – datetime of dashed vertical line to plot

  • colors (dict) – colors mapping dictionary col: color

  • alpha (float) – intensity of bound interval shading

  • title (str) – title

  • title_substring (str) – additional title details to pass to existing, moves series name to axis

  • include_bounds (bool) – if True, shows region of upper and lower forecasts

  • **kwargs – passed to pd.DataFrame.plot()

plot_df(df_wide=None, series: str | None = None, remove_zeroes: bool = False, interpolate: str | None = None, start_date: str | None = None)
plot_ensemble_runtimes(xlim_right=None)

Plot ensemble runtimes by model type.

plot_grid(df_wide=None, start_date='auto', interpolate=None, remove_zeroes=False, figsize=(24, 18), title='AutoTS Forecasts', cols=None, series=None, colors=None, include_bounds=True)

Plots multiple series in a grid, if present. Mostly identical args to the single plot function.

total_runtime()

Combine runtimes.

autots.models.base.apply_constraints(forecast, lower_forecast, upper_forecast, constraints=None, df_train=None, constraint_method=None, constraint_regularization=None, upper_constraint=None, lower_constraint=None, bounds=True)

Use constraint thresholds to adjust outputs by limit.

Parameters:
  • forecast (pd.DataFrame) – forecast df, wide style

  • lower_forecast (pd.DataFrame) – lower bound forecast df. If bounds is False, the upper and lower forecast dataframes are unused and can be empty.

  • upper_forecast (pd.DataFrame) – upper bound forecast df

  • constraints (list) – list of dictionaries of constraints to apply. Keys: "constraint_method" (same as the old args below), "constraint_regularization", "constraint_value", "constraint_direction" (upper/lower), "bounds"

  • df_train (pd.DataFrame) – required for quantile/stdev methods to find threshold values

  • old args – the following parameters are the older, individual-argument interface:

  • constraint_method (str) – one of:
    stdev_min - threshold is min and max of historic data +/- constraint * st dev of data
    stdev - threshold is the mean of historic data +/- constraint * st dev of data
    absolute - input is array of length series containing the threshold's final value for each
    quantile - constraint is the quantile of historic data to use as threshold
    last_window - certain percentage above and below the last n data values
    slope - cannot exceed a certain growth rate from last historical value

  • constraint_regularization (float) – 0 to 1 where 0 means no constraint, 1 is hard threshold cutoff, and in between is penalty term

  • upper_constraint (float) – or array, depending on method, None if unused

  • lower_constraint (float) – or array, depending on method, None if unused

  • bounds (bool) – if True, apply to upper/lower forecast, otherwise False applies only to forecast

Returns:

forecast, lower, upper (pd.DataFrame)
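A hedged call sketch (forecast, lower_forecast, upper_forecast, and df_train are placeholder wide DataFrames from a prior model run; the constraint values are illustrative and follow the keys documented above):

from autots.models.base import apply_constraints

forecast, lower_forecast, upper_forecast = apply_constraints(
    forecast, lower_forecast, upper_forecast,
    constraints=[
        {"constraint_method": "quantile", "constraint_value": 1.0,
         "constraint_direction": "upper", "constraint_regularization": 1.0, "bounds": True},
        {"constraint_method": "absolute", "constraint_value": 0,
         "constraint_direction": "lower", "constraint_regularization": 1.0, "bounds": True},
    ],
    df_train=df_train,
)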

autots.models.base.calculate_peak_density(model, data, group_col='Model', y_col='TotalRuntimeSeconds')
autots.models.base.create_forecast_index(frequency, forecast_length, train_last_date, last_date=None)
autots.models.base.create_seaborn_palette_from_cmap(cmap_name='gist_rainbow', n=10)
autots.models.base.extract_single_series_from_horz(series, model_name, model_parameters)
autots.models.base.extract_single_transformer(series, model_name, model_parameters, transformation_params)
autots.models.base.plot_distributions(runtimes_data, group_col='Model', y_col='TotalRuntimeSeconds', xlim=None, xlim_right=None, title_suffix='')

autots.models.basics module

Naives and Others Requiring No Additional Packages Beyond Numpy and Pandas

class autots.models.basics.AverageValueNaive(name: str = 'AverageValueNaive', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, method: str = 'median', window: int | None = None, **kwargs)

Bases: ModelObject

Naive forecasting predicting a dataframe of the series’ median values

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Returns dict of new parameters for parameter tuning

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.basics.BallTreeMultivariateMotif(frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, n_jobs: int = 1, window: int = 5, point_method: str = 'mean', distance_metric: str = 'canberra', k: int = 10, sample_fraction=None, **kwargs)

Bases: ModelObject

Forecasts using a nearest neighbors type model adapted for probabilistic time series. Many of these motifs will struggle when the forecast_length is large and history is short.

Parameters:
  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • n_jobs (int) – how many parallel processes to run

  • random_seed (int) – used in selecting windows if max_windows is less than total available

  • window (int) – length of forecast history to match on

  • point_method (str) – how to summarize the nearest neighbors to generate the point forecast: “weighted_mean”, “mean”, “median”, “midhinge”

  • distance_metric (str) – all valid values for scipy cdist

  • k (int) – number of closest neighbors to consider

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Returns dict of new parameters for parameter tuning

get_params()

Return dict of current parameters

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.basics.BasicLinearModel(name: str = 'BasicLinearModel', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2024, verbose: int = 0, regression_type: str | None = None, datepart_method: str = 'common_fourier', changepoint_spacing: int | None = None, changepoint_distance_end: int | None = None, lambda_: float = 0.01, trend_phi: float | None = None, holiday_countries_used: bool = True, **kwargs)

Bases: ModelObject

Ridge regression of seasonal + trend changepoint + constant + regressor. Like a minimal version of Prophet or Cassandra.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – “User” or None. If used, will be as covariate. The ratio of num_series:num_regressor_series will largely determine the impact

  • window (int) – length of forecast history to match on

  • point_method (str) – how to summarize the nearest neighbors to generate the point forecast: “weighted_mean”, “mean”, “median”, “midhinge”

  • distance_metric (str) – all valid values for scipy cdist + “nan_euclidean” from sklearn

  • include_differenced (bool) – True to have the distance metric result be an average of the distance on absolute values as well as differenced values

  • k (int) – number of closest neighbors to consider

  • stride_size (int) – how many obs to skip between each new window. Higher numbers will reduce the number of matching windows and make the model faster.
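A minimal usage sketch (arguments illustrative; df is a wide DatetimeIndex DataFrame as in the ARCH sketch earlier):

from autots.models.basics import BasicLinearModel

model = BasicLinearModel(datepart_method="common_fourier", changepoint_spacing=60, lambda_=0.01)
model.fit(df)
prediction = model.predict(forecast_length=30)
prediction.forecast.head()   # seasonal + trend changepoint + constant (+ regressor) fit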

base_scaler(df)
coefficient_summary(df)

Used in profiler.

create_x(df, future_regressor=None, holiday_country='US', holiday_countries_used=True)
fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Returns dict of new parameters for parameter tuning

get_params()

Return dict of current parameters

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

process_components()
return_components(df)
scale_data(df)
to_origin_space(df, trans_method='forecast', components=False, bounds=False)

Take transformed outputs back to original feature space.

class autots.models.basics.ConstantNaive(name: str = 'ConstantNaive', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, constant: float = 0, **kwargs)

Bases: ModelObject

Naive forecasting predicting a dataframe of a constant value (zeroes by default)

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • constant (float) – value to fill with

fit(df, future_regressor=None)

Train algorithm given data supplied

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Returns dict of new parameters for parameter tuning

get_params()

Return dict of current parameters

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.basics.FFT(name: str = 'FFT', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2023, verbose: int = 0, n_harmonics: int = 10, detrend: str = 'linear', **kwargs)

Bases: ModelObject

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:
  • df (pandas.DataFrame) – Datetime Indexed

  • regressor (numpy.Array) – additional regressor

get_new_params(method: str = 'random')

Returns dict of new parameters for parameter tuning

get_params()

Return dict of current parameters

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.basics.KalmanStateSpace(name: str = 'KalmanStateSpace', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, state_transition=[[1, 1], [0, 1]], process_noise=[[0.1, 0.0], [0.0, 0.01]], observation_model=[[1, 0]], observation_noise: float = 1.0, em_iter: int = 10, model_name: str = 'undefined', forecast_length: int | None = None, subset=None, **kwargs)

Bases: ModelObject

Forecast using a state space model solved by a Kalman Filter.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • subset (int) – if not None, forecasts in chunks of this size. Reduces memory at the expense of compute time.
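A hedged configuration sketch: the default matrices form a local linear trend (level plus slope) state space; the interpretation comments are assumptions, and df is a placeholder wide DatetimeIndex DataFrame as in the ARCH sketch earlier.

from autots.models.basics import KalmanStateSpace

model = KalmanStateSpace(
    state_transition=[[1, 1], [0, 1]],      # level_t = level + slope; slope follows a random walk
    process_noise=[[0.1, 0.0], [0.0, 0.01]],
    observation_model=[[1, 0]],             # only the level is observed
    observation_noise=1.0,
    em_iter=10,
)
model.fit(df)
prediction = model.predict(forecast_length=14)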

cost_function(param, df)
fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

fit_data(df, future_regressor=None)
get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int | None = None, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

tune_observational_noise(df)
class autots.models.basics.LastValueNaive(name: str = 'LastValueNaive', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, **kwargs)

Bases: ModelObject

Naive forecasting predicting a dataframe of the last series value

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

fit(df, future_regressor=None)

Train algorithm given data supplied

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Returns dict of new parameters for parameter tuning

get_params()

Return dict of current parameters

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.basics.MetricMotif(name: str = 'MetricMotif', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, regression_type: str | None = None, comparison_transformation: dict | None = None, combination_transformation: dict | None = None, window: int = 5, point_method: str = 'mean', distance_metric: str = 'mae', k: int = 10, **kwargs)

Bases: ModelObject

Forecasts using a nearest neighbors type model adapted for probabilistic time series. This version is fully vectorized, using basic metrics for distance comparison.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • window (int) – length of forecast history to match on

  • point_method (str) – how to summarize the nearest neighbors to generate the point forecast: “weighted_mean”, “mean”, “median”, “midhinge”

  • distance_metric (str) – mae, mqae, mse

  • k (int) – number of closest neighbors to consider

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:
  • df (pandas.DataFrame) – Datetime Indexed

  • regressor (numpy.Array) – additional regressor

get_new_params(method: str = 'random')

Returns dict of new parameters for parameter tuning

get_params()

Return dict of current parameters

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.basics.Motif(name: str = 'Motif', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, n_jobs: int = 1, window: int = 5, point_method: str = 'weighted_mean', distance_metric: str = 'minkowski', k: int = 10, max_windows: int = 5000, multivariate: bool = False, return_result_windows: bool = False, **kwargs)

Bases: ModelObject

Forecasts using a nearest neighbors type model adapted for probabilistic time series.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • n_jobs (int) – how many parallel processes to run

  • random_seed (int) – used in selecting windows if max_windows is less than total available

  • window (int) – length of forecast history to match on

  • point_method (str) – how to summarize the nearest neighbors to generate the point forecast: “weighted_mean”, “mean”, “median”, “midhinge”

  • distance_metric (str) – all valid values for scipy cdist

  • k (int) – number of closest neighbors to consider

  • max_windows (int) – max number of windows to consider (a speed/accuracy tradeoff)

  • multivariate (bool) – if True, utilizes matches from all provided series for each series forecast. Else just own history of series.

  • return_result_windows (bool) – if True, result windows (all motifs gathered for forecast) will be saved in dict to result_windows attribute
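A minimal usage sketch (arguments illustrative; df as in the ARCH sketch earlier):

from autots.models.basics import Motif

model = Motif(window=10, k=15, distance_metric="minkowski",
              point_method="weighted_mean", multivariate=True, max_windows=5000)
model.fit(df)
prediction = model.predict(forecast_length=14)
prediction.lower_forecast   # bounds derived from the spread of matched windows (see return_result_windows)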

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Returns dict of new parameters for parameter tuning

get_params()

Return dict of current parameters

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.basics.MotifSimulation(name: str = 'MotifSimulation', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, phrase_len: str = '5', comparison: str = 'magnitude_pct_change_sign', shared: bool = False, distance_metric: str = 'l2', max_motifs: float = 50, recency_weighting: float = 0.1, cutoff_threshold: float = 0.9, cutoff_minimum: int = 20, point_method: str = 'median', n_jobs: int = -1, verbose: int = 1, **kwargs)

Bases: ModelObject

More dark magic created by the evil mastermind of this project. Basically a highly-customized KNN

Warning: if you are forecasting many steps (large forecast_length), and interested in probabilistic upper/lower forecasts, then set recency_weighting <= 0, and have a larger cutoff_minimum

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • phrase_len (int) – length of motif vectors to compare as samples

  • comparison (str) – method to process data before comparison, ‘magnitude’ is original data

  • shared (bool) – whether to compare motifs across all series together, or separately

  • distance_metric (str) – passed through to sklearn pairwise_distances

  • max_motifs (float) – number of motifs to compare per series. If less than 1, used as a percentage of the length of the training data

  • recency_weighting (float) – amount of additional weight given to the value of more recent data.

  • cutoff_threshold (float) – lowest value of distance metric to allow into forecast

  • cutoff_minimum (int) – minimum number of motif vectors to include in forecast.

  • point_method (str) – summarization method to choose forecast on, ‘sample’, ‘mean’, ‘sign_biased_mean’, ‘median’

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.basics.NVAR(name: str = 'NVAR', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, k: int = 1, ridge_param: float = 2.5e-06, warmup_pts: int = 1, seed_pts: int = 1, seed_weighted: str | None = None, batch_size: int = 5, batch_method: str = 'input_order', **kwargs)

Bases: ModelObject

Nonlinear Variable Autoregression or ‘Next-Generation Reservoir Computing’

based on https://github.com/quantinfo/ng-rc-paper-code/ Gauthier, D.J., Bollt, E., Griffith, A. et al. Next generation reservoir computing. Nat Commun 12, 5564 (2021). https://doi.org/10.1038/s41467-021-25801-2 with adjustments to make it probabilistic and to scale better

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • k (int) – the AR order (keep this small, larger is slow and usually pointless)

  • ridge_param (float) – standard lambda for ridge regression

  • warmup_pts (int) – in reality, passing 1 here (no warmup) is fine

  • batch_size (int) – NVAR scales exponentially; to scale linearly, series are split into batches of size n

  • batch_method (str) – method for collecting series to make batches

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Returns dict of new parameters for parameter tuning

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.basics.SeasonalNaive(name: str = 'SeasonalNaive', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, lag_1: int = 7, lag_2: int | None = None, method: str = 'lastvalue', **kwargs)

Bases: ModelObject

Naive forecasting predicting a dataframe with seasonal (lag) forecasts.

Concerto No. 2 in G minor, Op. 8, RV 315

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • method (str) – Either ‘LastValue’ (use last value of lag n) or ‘Mean’ (avg of all lag n)

  • lag_1 (int) – The lag of the seasonality, should be an int > 1.

  • lag_2 (int) – Optional second lag of seasonality which is averaged with first lag to produce forecast.
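A minimal usage sketch (the weekly and near-yearly lags are illustrative choices for daily data; df as in the ARCH sketch earlier):

from autots.models.basics import SeasonalNaive

model = SeasonalNaive(lag_1=7, lag_2=364, method="lastvalue")
model.fit(df)
prediction = model.predict(forecast_length=14)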

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast: bool = False)

Generate forecast data immediately following dates of index supplied to .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.basics.SeasonalityMotif(name: str = 'SeasonalityMotif', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, regression_type: str | None = None, window: int = 5, point_method: str = 'mean', distance_metric: str = 'mae', k: int = 10, datepart_method: str = 'common_fourier', independent: bool = False, **kwargs)

Bases: ModelObject

Forecasts using a nearest neighbors type model adapted for probabilistic time series. This version is fully vectorized, using basic metrics for distance comparison.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • window (int) – length of forecast history to match on

  • point_method (str) – how to summarize the nearest neighbors to generate the point forecast: “weighted_mean”, “mean”, “median”, “midhinge”

  • distance_metric (str) – mae, mqae, mse

  • k (int) – number of closest neighbors to consider

  • independent (bool) – if True, each time step is separate. This is the one motif that can then handle large forecast_length to short historical data.

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:
  • df (pandas.DataFrame) – Datetime Indexed

  • regressor (numpy.Array) – additional regressor

get_new_params(method: str = 'random')

Returns dict of new parameters for parameter tuning

get_params()

Return dict of current parameters

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.basics.SectionalMotif(name: str = 'SectionalMotif', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, regression_type: str | None = None, window: int = 5, point_method: str = 'weighted_mean', distance_metric: str = 'nan_euclidean', include_differenced: bool = False, k: int = 10, stride_size: int = 1, fillna: str = 'SimpleSeasonalityMotifImputer', comparison_transformation: dict | None = None, combination_transformation: dict | None = None, **kwargs)

Bases: ModelObject

Forecasts using a nearest neighbors type model adapted for probabilistic time series. This version takes the distance metric average for all series at once.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – “User” or None. If used, will be as covariate. The ratio of num_series:num_regressor_series will largely determine the impact

  • window (int) – length of forecast history to match on

  • point_method (str) – how to summarize the nearest neighbors to generate the point forecast: “weighted_mean”, “mean”, “median”, “midhinge”

  • distance_metric (str) – all valid values for scipy cdist + “nan_euclidean” from sklearn

  • include_differenced (bool) – True to have the distance metric result be an average of the distance on absolute values as well as differenced values

  • k (int) – number of closest neighbors to consider

  • stride_size (int) – how many obs to skip between each new window. Higher numbers will reduce the number of matching windows and make the model faster.

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:
  • df (pandas.DataFrame) – Datetime Indexed

  • regressor (numpy.Array) – additional regressor

get_new_params(method: str = 'random')

Returns dict of new parameters for parameter tuning

get_params()

Return dict of current parameters

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.basics.TVVAR(name: str = 'TVVAR', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, regression_type: str | None = None, datepart_method: str = 'common_fourier', changepoint_spacing: int | None = None, changepoint_distance_end: int | None = None, lambda_: float = 0.01, phi: float | None = None, max_cycles: int = 2000, trend_phi: float | None = None, var_dampening: float | None = None, lags: list | None = None, rolling_means: list | None = None, apply_pca: bool = False, pca_n_components: float = 0.95, threshold_method: str = 'std', threshold_value: float | None = None, base_scaled: bool = True, x_scaled: bool = False, var_preprocessing: dict = False, var_postprocessing: dict = False, mode: str = 'additive', holiday_countries_used: bool = True, **kwargs)

Bases: BasicLinearModel

Time Varying VAR

Notes

var_preprocessing will fail with many options (anything that scales/shifts the space). x_scaled=True seems to fail often when base_scaled=False and VAR components are used.

apply_beta_threshold(beta=None)
create_VAR_features(df)
empty_scaler(df)
fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Returns dict of new parameters for parameter tuning

get_params()

Return dict of current parameters

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

process_components()

Return components. Does not account for dampening.

autots.models.basics.ZeroesNaive

alias of ConstantNaive

autots.models.basics.looped_motif(Xa, Xb, name, r_arr=None, window=10, distance_metric='minkowski', k=10, point_method='mean', prediction_interval=0.9, return_result_windows=False)

inner function for Motif model.

autots.models.basics.predict_reservoir(df, forecast_length, prediction_interval=None, warmup_pts=1, k=2, ridge_param=2.5e-06, seed_pts: int = 1, seed_weighted: str | None = None)

Nonlinear Variable Autoregression or ‘Next-Generation Reservoir Computing’

based on https://github.com/quantinfo/ng-rc-paper-code/ Gauthier, D.J., Bollt, E., Griffith, A. et al. Next generation reservoir computing. Nat Commun 12, 5564 (2021). https://doi.org/10.1038/s41467-021-25801-2 with adjustments to make it probabilistic

This is very slow and memory hungry when the number of series/dimensions gets big (i.e. > 50). It is already effectively parallelized by linpack, it seems. It is very sensitive to error in the most recent data point! The seed_pts and seed_weighted args can help address that.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • k (int) – the AR order (keep this small, larger is slow and usually pointless)

  • ridge_param (float) – standard lambda for ridge regression

  • warmup_pts (int) – in reality, passing 1 here (no warmup) is fine

  • seed_pts (int) – number of back steps to use to simulate the future; if > 10, also increases the search space of the probabilistic forecast

  • seed_weighted (str) – how to summarize most recent points if seed_pts > 1

autots.models.cassandra module

Cassandra Model. Created on Tue Sep 13 19:45:57 2022

@author: Colin with assistance from @crgillespie22

class autots.models.cassandra.BayesianMultiOutputRegression(gaussian_prior_mean=0, alpha=1.0, wishart_prior_scale=1.0, wishart_dof_excess=0)

Bases: object

Bayesian Linear Regression, conjugate prior update.

Parameters:
  • gaussian_prior_mean (float) – mean of prior, a small positive value can encourage positive coefs which make better component plots

  • alpha (float) – prior scale of gaussian covariance, effectively a regularization term

  • wishart_dof_excess (int) – Larger values make the prior more peaked around the scale matrix.

  • wishart_prior_scale (float) – A larger value means a smaller prior variance on the noise covariance, while a smaller value means more prior uncertainty about it.

fit(X, Y)
predict(X, return_std=False)
sample_posterior(n_samples=1)
class autots.models.cassandra.Cassandra(preprocessing_transformation: dict | None = None, scaling: str = 'BaseScaler', past_impacts_intervention: str | None = None, seasonalities: dict = ['common_fourier'], ar_lags: list | None = None, ar_interaction_seasonality: dict | None = None, anomaly_detector_params: dict | None = None, anomaly_intervention: str | None = None, holiday_detector_params: dict | None = None, holiday_countries: dict | None = None, holiday_countries_used: bool = True, multivariate_feature: str | None = None, multivariate_transformation: str | None = None, regressor_transformation: dict | None = None, regressors_used: bool = True, linear_model: dict | None = None, randomwalk_n: int | None = None, trend_window: int = 30, trend_standin: str | None = None, trend_anomaly_detector_params: dict | None = None, trend_transformation: dict = {}, trend_model: dict = {'Model': 'LastValueNaive', 'ModelParameters': {}}, trend_phi: float | None = None, constraint: dict | None = None, x_scaler: bool = False, max_colinearity: float = 0.998, max_multicolinearity: float = 0.001, frequency: str = 'infer', prediction_interval: float = 0.9, random_seed: int = 2022, verbose: int = 0, n_jobs: int = 'auto', forecast_length: int = 30, **kwargs)

Bases: ModelObject

Explainable decomposition-based forecasting with advanced trend modeling and preprocessing.

Tunc etiam fatis aperit Cassandra futuris ora, dei iussu non umquam credita Teucris. Nos delubra deum miseri, quibus ultimus esset ille dies, festa velamus fronde per urbem. -Aeneid 2.246-2.249

In general, all time series data inputs (df, regressors, impacts) should be wide style data in a pd.DataFrame with:

  • an index that is a pd.DatetimeIndex

  • one column per time series, with a uniquely identifiable column name

Impacts get confusing. A past impact of 0.05 would mean an outside, unforecastable force caused/added 5% of the value at this time. Accordingly, that 5% will be removed before forecasting, then added back on after. Impacts can also be negative values. A future impact of 5% would mean an outside force adds 5% above the original forecast. Future impacts can be used to model product goals or temporary anomalies which can't or shouldn't be modeled by forecasting and whose relative effect is known. Compare this with regressors, which are essentially the model estimating the relative impact given the raw size or presence of an outside effect.
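A hedged illustration of past_impacts (the wide-frame shape follows the data conventions above; the date, the 5% figure, and the multiplicative removal/add-back reading are assumptions for intuition, not documented internals):

import pandas as pd
from autots.models.cassandra import Cassandra

# df is a wide DatetimeIndex DataFrame as described above, assumed to cover 2022-06-01
past_impacts = pd.DataFrame(0.0, index=df.index, columns=df.columns)
past_impacts.loc["2022-06-01"] = 0.05   # an outside force added roughly 5% of that day's value

model = Cassandra(forecast_length=30)
model.fit(df, past_impacts=past_impacts)   # the 5% is removed before fitting, added back after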

  • Warn about remove_excess_anomalies from the holiday detector if relying on anomaly prediction

  • Linear components are always model elements, but trend is actuals (history) and model (future)

  • Running predict updates some internal attributes used in plotting and other figures; generally expect plotting functions to use the latest predict

  • Seasonalities are hard-coded in days, so 7 will always = weekly even if the data isn't daily

  • For slope analysis and zero crossings, a slope of 0 evaluates as a positive sign (>= 0); an exactly 0 slope is rare in real world data

  • Does not currently follow the regression_type='User' (and fails if no regressor) pattern of other models

  • For component decomposition, scale will be inaccurate unless 'BaseScaler' is used, but regardless this won't affect the final forecast

Parameters:

pass

fit()
predict()
holiday_detector.dates_to_holidays()
create_forecast_index()

after .fit, can be used to create index of prediction

plot_forecast()
plot_components()
plot_trend()
get_new_params()
return_components()
.anomaly_detector.anomalies
.anomaly_detector.scores
.holiday_count
.holidays
Type:

series flags, holiday detector only

.params
.keep_cols, .keep_cols_idx
.x_array
.predict_x_array
.trend_train
.predicted_trend
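A hedged end-to-end sketch using the methods listed above (arguments and the df placeholder are illustrative, not a documented recipe):

from autots.models.cassandra import Cassandra

model = Cassandra(
    seasonalities=["common_fourier"],
    trend_model={"Model": "LastValueNaive", "ModelParameters": {}},
    forecast_length=30,
)
model.fit(df)                                    # wide DatetimeIndex DataFrame
prediction = model.predict(forecast_length=30)
model.plot_forecast(prediction, series=df.columns[0])
model.plot_components(prediction, series=df.columns[0])
components = model.return_components()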
analyze_trend(slope, index)
auto_fit(df, validation_method)
base_scaler(df)
compare_actual_components()
create_t(DTindex)
cross_validate(df, validation_method)
feature_importance()
fit(df, future_regressor=None, regressor_per_series=None, flag_regressors=None, categorical_groups=None, past_impacts=None)
fit_data(df, forecast_length=None, future_regressor=None, regressor_per_series=None, flag_regressors=None, future_impacts=None, regressor_forecast_model=None, regressor_forecast_model_params=None, regressor_forecast_transformations=None, include_history=False, past_impacts=None)
get_new_params(method='fast')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

next_fit()
plot_components(prediction=None, series=None, figsize=(16, 9), to_origin_space=True, title=None, start_date=None)

Plot breakdown of linear model components.

Parameters:
  • prediction – the forecast object

  • series (str) – name of series to plot, if desired

  • figsize (tuple) – figure size

  • to_origin_space (bool) – setting to False can make the graph look right (due to preprocessing transformers) but at the wrong scale; especially useful if “AlignLastValue” and other transformers are present

  • title (str) – title

  • start_date (str) – slice point for start date, can make some high frequency components easier to see with a shorter window

plot_forecast(prediction, actuals=None, series=None, start_date=None, anomaly_color='darkslateblue', holiday_color='darkgreen', trend_anomaly_color='slategray', point_size=12.0)

Plot a forecast time series.

Parameters:
  • prediction (model prediction object, required) –

  • actuals (pd.DataFrame) – wide style df of known data, if available

  • series (str) – name of time series column to plot

  • start_date (str or Timestamp) – point at which to begin X axis

  • anomaly_color (str) – name of anomaly point color

  • holiday_color (str) – name of holiday point color

  • trend_anomaly_color (str) – name of trend anomaly point color

  • point_size (str) – point size for all anomalies

plot_things()
plot_trend(series=None, vline=None, colors=['#d4f74f', '#82ab5a', '#ff6c05', '#c12600'], title=None, start_date=None, **kwargs)
predict(forecast_length=None, include_history=False, future_regressor=None, regressor_per_series=None, flag_regressors=None, future_impacts=None, new_df=None, regressor_forecast_model=None, regressor_forecast_model_params=None, regressor_forecast_transformations=None, include_organic=False, df=None, past_impacts=None)

Generate a forecast.

future_regressor and regressor_per_series should only include new future values; history is already stored. They should match on forecast_length and the index of the forecasts.

Parameters:
  • forecast_length (int) – steps ahead to predict, or None

  • include_history (bool) – include past predictions if True

  • regressor args – all the same regressor args as .fit, but here as their future forecast versions

  • future_impacts (pd.DataFrame) – like past impacts but for the forecast ahead

  • new_df (pd.DataFrame) – or df, equivalent to fit_data update

predict_new_product()
process_components(to_origin_space=True)

Scale and standardize component outputs.

return_components(to_origin_space=True, include_impacts=False)

Return additive elements of forecast, linear and trend. If impacts included, it is a multiplicative term.

Parameters:
  • to_origin_space (bool) –

  • include_impacts (bool) –

rolling_trend(trend_residuals, t)
scale_data(df)
to_origin_space(df, trans_method='forecast', components=False, bounds=False)

Take transformed outputs back to original feature space.

treatment_causal_impact(df, intervention_dates)
trend_analysis()
autots.models.cassandra.clean_regressor(in_d, prefix='regr_')
autots.models.cassandra.cost_function_dwae(params, X, y)
autots.models.cassandra.cost_function_l1(params, X, y)
autots.models.cassandra.cost_function_l1_positive(params, X, y)
autots.models.cassandra.cost_function_l2(params, X, y)
autots.models.cassandra.cost_function_quantile(params, X, y, q=0.9)
autots.models.cassandra.create_t(ds)
autots.models.cassandra.fit_linear_model(x, y, params=None)
autots.models.cassandra.lstsq_minimize(X, y, maxiter=15000, cost_function='l1', method=None)

Any cost function version of lin reg.

autots.models.cassandra.lstsq_solve(X, y, lamb=1, identity_matrix=None)

autots.models.dnn module

Neural Nets.

class autots.models.dnn.ElasticNetwork(size: int = 256, l1: float = 0.01, l2: float = 0.02, feature_subsample_rate: float | None = None, optimizer: str = 'adam', loss: str = 'mse', epochs: int = 20, batch_size: int = 32, activation: str = 'relu', verbose: int = 1, random_seed: int = 2024)

Bases: object

fit(X, y)
predict(X)
class autots.models.dnn.KerasRNN(rnn_type: str = 'LSTM', kernel_initializer: str = 'lecun_uniform', hidden_layer_sizes: tuple = (32, 32, 32), optimizer: str = 'adam', loss: str = 'huber', epochs: int = 50, batch_size: int = 32, shape=1, verbose: int = 1, random_seed: int = 2020)

Bases: object

Wrapper for Tensorflow Keras based RNN.

Parameters:
  • rnn_type (str) – Keras cell type ‘GRU’ or default ‘LSTM’

  • kernel_initializer (str) – passed to first keras LSTM or GRU layer

  • hidden_layer_sizes (tuple) – of len 1 or 3 passed to first keras LSTM or GRU layers

  • optimizer (str) – Passed to keras model.compile

  • loss (str) – Passed to keras model.compile

  • epochs (int) – Passed to keras model.fit

  • batch_size (int) – Passed to keras model.fit

  • verbose (int) – 0, 1 or 2. Passed to keras model.fit

  • random_seed (int) – passed to tf.random.set_seed()

fit(X, Y)

Train the model on dataframes of X and Y.

predict(X)

Predict on dataframe of X.

class autots.models.dnn.Transformer(head_size=256, num_heads=4, ff_dim=4, num_transformer_blocks=4, mlp_units=[128], mlp_dropout=0.4, dropout=0.25, optimizer: str = 'adam', loss: str = 'huber', epochs: int = 50, batch_size: int = 32, verbose: int = 1, random_seed: int = 2020)

Bases: object

Wrapper for Tensorflow Keras based Transformer.

based on: https://keras.io/examples/timeseries/timeseries_transformer_classification/

Parameters:
  • optimizer (str) – Passed to keras model.compile

  • loss (str) – Passed to keras model.compile

  • epochs (int) – Passed to keras model.fit

  • batch_size (int) – Passed to keras model.fit

  • verbose (int) – 0, 1 or 2. Passed to keras model.fit

  • random_seed (int) – passed to tf.random.set_seed()

fit(X, Y)

Train the model on dataframes of X and Y.

predict(X)

Predict on dataframe of X.

autots.models.dnn.transformer_build_model(input_shape, output_shape, head_size, num_heads, ff_dim, num_transformer_blocks, mlp_units, dropout=0, mlp_dropout=0)
autots.models.dnn.transformer_encoder(inputs, head_size, num_heads, ff_dim, dropout=0)

autots.models.ensemble module

Tools for generating and forecasting with ensembles of models.

autots.models.ensemble.BestNEnsemble(ensemble_params, forecasts, lower_forecasts, upper_forecasts, forecasts_runtime: dict, prediction_interval: float = 0.9)

Generate mean forecast for ensemble of models.

model_weights and point_methods other than ‘mean’ are incompatible

Parameters:
  • ensemble_params (dict) – BestN ensemble param dict should have “model_weights”: {model_id: weight} where 1 is default weight per model

  • forecasts (dict) – {forecast_id: forecast dataframe} for all models same for lower_forecasts, upper_forecasts

  • forecast_runtime (dict) – dictionary of {forecast_id: timedelta of runtime}

  • prediction_interval (float) – metadata on interval
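A hedged sketch of the inputs described above (the model ids, forecast DataFrames, and runtimes are placeholders, not real template ids):

import datetime
from autots.models.ensemble import BestNEnsemble

ensemble_params = {"model_weights": {"model_a": 2, "model_b": 1}}   # 1 is the default weight per model
ens_forecast = BestNEnsemble(
    ensemble_params,
    forecasts={"model_a": fc_a, "model_b": fc_b},                   # point forecast DataFrames
    lower_forecasts={"model_a": low_a, "model_b": low_b},
    upper_forecasts={"model_a": up_a, "model_b": up_b},
    forecasts_runtime={"model_a": datetime.timedelta(seconds=3),
                       "model_b": datetime.timedelta(seconds=5)},
    prediction_interval=0.9,
)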

autots.models.ensemble.DistEnsemble(ensemble_params, forecasts_list, forecasts, lower_forecasts, upper_forecasts, forecasts_runtime, prediction_interval)

Generate forecast for distance ensemble.

autots.models.ensemble.EnsembleForecast(ensemble_str, ensemble_params, forecasts_list, forecasts, lower_forecasts, upper_forecasts, forecasts_runtime, prediction_interval, df_train=None, prematched_series: dict | None = None)

Return PredictionObject for given ensemble method.

autots.models.ensemble.EnsembleTemplateGenerator(initial_results, forecast_length: int = 14, ensemble: str = 'simple', score_per_series=None, use_validation=False)

Generate class 1 (non-horizontal) ensemble templates given a table of results.

autots.models.ensemble.HDistEnsemble(ensemble_params, forecasts_list, forecasts, lower_forecasts, upper_forecasts, forecasts_runtime, prediction_interval)

Generate forecast for per_series per distance ensembling.

autots.models.ensemble.HorizontalEnsemble(ensemble_params, forecasts_list, forecasts, lower_forecasts, upper_forecasts, forecasts_runtime, prediction_interval, df_train=None, prematched_series: dict | None = None)

Generate forecast for per_series ensembling.

autots.models.ensemble.HorizontalTemplateGenerator(per_series, model_results, forecast_length: int = 14, ensemble: str = 'horizontal', subset_flag: bool = True, per_series2=None, only_specified: bool = False)

Generate horizontal ensemble templates given a table of results.

autots.models.ensemble.MosaicEnsemble(ensemble_params, forecasts_list, forecasts, lower_forecasts, upper_forecasts, forecasts_runtime, prediction_interval, df_train=None, prematched_series: dict | None = None)

Generate forecast for mosaic ensembling.

Parameters:

prematched_series (dict) – from outer horizontal generalization, possibly different from params

autots.models.ensemble.create_unpredictability_score(full_mae_errors, full_mae_vals, total_vals, df_wide, validation_test_indexes, scale=False)
autots.models.ensemble.find_pattern(strings, x, sep='-')
autots.models.ensemble.generalize_horizontal(df_train, known_matches: dict, available_models: list, full_models: list | None = None)

Generalize a horizontal model trained on a subset of all series.

Parameters:
  • df_train (pd.DataFrame) – time series data

  • known_matches (dict) – series:model dictionary for some to all series

  • available_models (list) – list of models actually available

  • full_models (list) – models that are available for every single series

autots.models.ensemble.generate_crosshair_score(error_matrix, method=None)
autots.models.ensemble.generate_crosshair_score_list(error_list)
autots.models.ensemble.generate_mosaic_template(initial_results, full_mae_ids, num_validations, col_names, full_mae_errors, smoothing_window=None, metric_name='MAE', models_to_use=None, id_to_group_mapping: dict | None = None, filtered: bool = False, unpredictability_adjusted: bool = False, validation_test_indexes=None, full_mae_vals=None, df_wide=None, **kwargs)

Generate an ensemble template from results.

autots.models.ensemble.horizontal_classifier(df_train, known: dict, method: str = 'whatever', classifier_params=None)

Classify unknown series with the appropriate model for horizontal ensembling.

Parameters:
  • df_train (pandas.DataFrame) – historical data about the series. Columns = series_ids.

  • known (dict) – dict of series_id: classifier outcome including some but not all series in df_train.

Returns:

dict.

autots.models.ensemble.horizontal_xy(df_train, known)

Construct X, Y, X_predict features for generalization.

autots.models.ensemble.is_horizontal(ensemble_list)
autots.models.ensemble.is_mosaic(ensemble_list)
autots.models.ensemble.mlens_helper(models, models_source='bestn')
autots.models.ensemble.mosaic_classifier(df_train, known, classifier_params=None)

Classify unknown series with the appropriate model for mosaic ensembles.

autots.models.ensemble.mosaic_or_horizontal(all_series: dict)

Take a mosaic or horizontal model and return series or models.

Parameters:

all_series (dict) – dict of series: model (or list of models)

autots.models.ensemble.mosaic_to_horizontal(ModelParameters, forecast_period: int = 0)

Take a mosaic template and pull a single forecast step as a horizontal model.

Parameters:
  • ModelParameters (dict) – the json.loads() of the ModelParameters of a mosaic ensemble template

  • forecast_period (int) – when to choose the model, starting with 0 where 0 would be the first forecast datestamp, 1 would be the second, and so on must be less than forecast_length that the model was trained on.

Returns:

ModelParameters (dict)

autots.models.ensemble.mosaic_xy(df_train, known)
autots.models.ensemble.n_limited_horz(per_series, K, safety_model=False)
autots.models.ensemble.parse_forecast_length(forecast_length)
autots.models.ensemble.parse_horizontal(all_series: dict, model_id: str | None = None, series_id: str | None = None)

Take a mosaic or horizontal model and return series or models.

Parameters:
  • all_series (dict) – dict of series: model (or list of models)

  • model_id (str) – name of model to find series for

  • series_id (str) – name of series to find models for

Returns:

list

autots.models.ensemble.parse_mosaic(ensemble)
autots.models.ensemble.process_mosaic_arrays(local_results, full_mae_ids, full_mae_errors, total_vals=None, models_to_use=None, smoothing_window=None, filtered=False, unpredictability_adjusted=False, validation_test_indexes=None, full_mae_vals=None, df_wide=None)

autots.models.gluonts module

GluonTS

Best neural net models currently available, released by Amazon, and they scale well. However, this is the only model here that runs on mxnet, and these take a while to train. And MXNet is now sorta-maybe-deprecated, which is sad because it had excellent CPU-based training speed.

Note that there are routinely package version issues with this and its dependencies. Stability is not the strong suit of GluonTS.

class autots.models.gluonts.GluonTS(name: str = 'GluonTS', frequency: str = 'infer', prediction_interval: float = 0.9, regression_type: str | None = None, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, gluon_model: str = 'DeepAR', epochs: int = 20, learning_rate: float = 0.001, context_length=10, forecast_length: int = 14, **kwargs)

Bases: ModelObject

GluonTS based on mxnet.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – Not yet implemented

  • gluon_model (str) – Model Structure to Use - [‘DeepAR’, ‘NPTS’, ‘DeepState’, ‘WaveNet’,’DeepFactor’, ‘Transformer’,’SFF’, ‘MQCNN’, ‘DeepVAR’, ‘GPVAR’, ‘NBEATS’]

  • epochs (int) – Number of neural network training epochs. Higher generally results in better accuracy, until it begins to overfit.

  • learning_rate (float) – Neural net training parameter

  • context_length (str) – int window, ‘2ForecastLength’, or ‘nForecastLength’

  • forecast_length (int) – Length to forecast. Unlike in other methods, this must be provided before fitting model

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

fit_data(df, future_regressor=None)
get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int | None = None, future_regressor=[], just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts
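
A hedged usage sketch, assuming gluonts and mxnet are installed; note the documented requirement that forecast_length be provided before fitting.

import numpy as np
import pandas as pd
from autots.models.gluonts import GluonTS

idx = pd.date_range("2023-01-01", periods=120, freq="D")
df = pd.DataFrame({"a": np.random.rand(120), "b": np.random.rand(120)}, index=idx)

model = GluonTS(gluon_model="DeepAR", epochs=20, forecast_length=14)
model.fit(df)                                  # wide, DatetimeIndexed dataframe
prediction = model.predict(forecast_length=14)
point_forecast = prediction.forecast           # PredictionObject also carries upper/lower forecasts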

autots.models.greykite module

Greykite.

class autots.models.greykite.Greykite(name: str = 'Greykite', frequency: str = 'infer', prediction_interval: float = 0.9, holiday: bool = False, growth: str | None = None, regression_type: str | None = None, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, n_jobs: int | None = None)

Bases: ModelObject

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • holiday (bool) – If true, include holidays

  • regression_type (str) – type of regression (None, ‘User’)

fit(df, future_regressor=[])

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=[], just_point_forecast: bool = False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

autots.models.greykite.seek_the_oracle(df_index, series, col, forecast_length, freq, prediction_interval=0.9, model_template='silverkite', growth=None, holiday=True, holiday_country='UnitedStates', regressors=None, verbose=0, inner_n_jobs=1, **kwargs)

Internal. For loop or parallel version of Greykite.

autots.models.matrix_var module

VAR models based on matrix factorization and related methods.

Heavily borrowing on the work of Xinyu Chen See https://github.com/xinychen/transdim and corresponding Medium articles

np.nan_to_num is used liberally before pinv calls to prevent the following crash: "On entry to DLASCL parameter number 4 had an illegal value".

class autots.models.matrix_var.DMD(name: str = 'DMD', frequency: str = 'infer', prediction_interval: float = 0.9, alpha: float = 0.0, rank: float = 0.1, amplitude_threshold: float | None = None, eigenvalue_threshold: float | None = None, holiday_country: str = 'US', random_seed: int = 2022, verbose: int = 0, n_jobs: int | None = None, **kwargs)

Bases: ModelObject

Dynamic Mode Decomposition

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – type of regression (None, ‘User’, or ‘Holiday’)

  • n_jobs (int) – passed to joblib for multiprocessing. Set to none for context manager.

fit(df, future_regressor=None)

Train algorithm given data supplied .

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generate forecast data immediately following dates of index supplied to .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts
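
A minimal sketch following the same ModelObject conventions (wide DatetimeIndexed dataframe in, PredictionObject out); the parameters shown are simply the documented defaults.

import numpy as np
import pandas as pd
from autots.models.matrix_var import DMD

idx = pd.date_range("2023-01-01", periods=100, freq="D")
df = pd.DataFrame(np.random.rand(100, 3), index=idx, columns=["a", "b", "c"])

model = DMD(rank=0.1, alpha=0.0)
model.fit(df)
prediction = model.predict(forecast_length=10)
print(prediction.forecast.shape)   # (10, 3) point forecasts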

class autots.models.matrix_var.LATC(name: str = 'LATC', frequency: str = 'infer', prediction_interval: float = 0.9, time_horizon: float = 1, seasonality: int = 7, time_lags: list = [1], lambda0: float = 1, learning_rate: float = 1, theta: float = 1, window: int = 30, epsilon: float = 0.0001, alpha: list = [0.33333333, 0.33333333, 0.33333333], maxiter: int = 100, holiday_country: str = 'US', random_seed: int = 2022, verbose: int = 0, n_jobs: int | None = None, **kwargs)

Bases: ModelObject

Low Rank Autoregressive Tensor Completion. Based on https://arxiv.org/abs/2104.14936 and https://github.com/xinychen/tensor-learning/blob/master/mats/LATC-predictor.ipynb. rho: learning rate; lambda: weight parameter.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – type of regression (None, ‘User’, or ‘Holiday’)

  • n_jobs (int) – passed to joblib for multiprocessing. Set to none for context manager.

fit(df, future_regressor=None)

Train algorithm given data supplied .

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generate forecast data immediately following dates of index supplied to .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.matrix_var.MAR(name: str = 'MAR', frequency: str = 'infer', prediction_interval: float = 0.9, seasonality: float = 7, family: str = 'gaussian', maxiter: int = 200, holiday_country: str = 'US', random_seed: int = 2022, verbose: int = 0, n_jobs: int | None = None, **kwargs)

Bases: ModelObject

Matrix Autoregressive model based on the code of Xinyu Chen.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – type of regression (None, ‘User’, or ‘Holiday’)

  • n_jobs (int) – passed to joblib for multiprocessing. Set to none for context manager.

fit(df, future_regressor=None)

Train algorithm given data supplied .

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generate forecast data immediately following dates of index supplied to .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.matrix_var.RRVAR(name: str = 'RRVAR', frequency: str = 'infer', prediction_interval: float = 0.9, method: str = 'als', rank: float = 0.1, maxiter: int = 200, holiday_country: str = 'US', random_seed: int = 2022, verbose: int = 0, n_jobs: int | None = None, **kwargs)

Bases: ModelObject

Reduced Rank VAR models based on the code of Xinyu Chen.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – type of regression (None, ‘User’, or ‘Holiday’)

  • n_jobs (int) – passed to joblib for multiprocessing. Set to none for context manager.

fit(df, future_regressor=None)

Train algorithm given data supplied .

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generate forecast data immediately following dates of index supplied to .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.matrix_var.TMF(name: str = 'TMF', frequency: str = 'infer', prediction_interval: float = 0.9, d: int = 1, lambda0: float = 1, rho: float = 1, rank: float = 0.4, maxiter: int = 100, inner_maxiter: int = 10, holiday_country: str = 'US', random_seed: int = 2022, verbose: int = 0, n_jobs: int | None = None, **kwargs)

Bases: ModelObject

Temporal Matrix Factorization VAR model based on the code of Xinyu Chen.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – type of regression (None, ‘User’, or ‘Holiday’)

  • n_jobs (int) – passed to joblib for multiprocessing. Set to none for context manager.

fit(df, future_regressor=None)

Train algorithm given data supplied .

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generate forecast data immediately following dates of index supplied to .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

autots.models.matrix_var.conj_grad_w(sparse_mat, ind, W, X, rho, maxiter=5)
autots.models.matrix_var.conj_grad_x(sparse_mat, ind, W, X, A, Psi, d, lambda0, rho, maxiter=5)
autots.models.matrix_var.dmd(data, r)

Dynamic Mode Decomposition (DMD) algorithm.

autots.models.matrix_var.dmd4cast(data, r, pred_step)
autots.models.matrix_var.dmd_forecast(data, r, pred_step, alpha=0.0, amplitude_threshold=None, eigenvalue_threshold=None)
autots.models.matrix_var.ell_w(ind, W, X, rho)
autots.models.matrix_var.ell_x(ind, W, X, A, Psi, d, lambda0, rho)
autots.models.matrix_var.generate_Psi(T, d)
autots.models.matrix_var.latc_imputer(sparse_tensor, time_lags, alpha, rho0, lambda0, theta, epsilon, maxiter)

Low-Rank Autoregressive Tensor Completion, LATC-imputer. Recognizes 0 as NaN.

autots.models.matrix_var.latc_predictor(sparse_mat, pred_time_steps, time_horizon, time_intervals, time_lags, alpha, rho, lambda0, theta, window, epsilon, maxiter)

LATC-predictor kernel.

autots.models.matrix_var.mar(X, pred_step, family='gaussian', maxiter=100)
autots.models.matrix_var.mat2ten(mat, dim, mode)
autots.models.matrix_var.rrvar(data, R, pred_step, maxiter=100)

Reduced-rank VAR algorithm using ALS.

autots.models.matrix_var.svt_tnn(mat, tau, theta)
autots.models.matrix_var.ten2mat(tensor, mode)
autots.models.matrix_var.tmf(sparse_mat, rank, d, lambda0, rho, maxiter=50, inner_maxiter=10)
autots.models.matrix_var.update_cg(var, r, q, Aq, rold)
autots.models.matrix_var.var(X, pred_step)

Simple VAR.

autots.models.matrix_var.var4cast(X, A, d, delta)

autots.models.mlensemble module

Created on Sun Jan 15 19:28:57 2023

@author: Colin

class autots.models.mlensemble.MLEnsemble(name: str = 'MLEnsemble', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, forecast_length: int = 10, regression_type: str | None = None, regression_model=None, models=[{'Model': 'Cassandra', 'ModelParameters': {}, 'TransformationParameters': {}}, {'Model': 'MetricMotif', 'ModelParameters': {}, 'TransformationParameters': {}}, {'Model': 'SeasonalityMotif', 'ModelParameters': {}, 'TransformationParameters': {}}], num_validations=2, validation_method='backwards', min_allowed_train_percent=0.5, datepart_method='expanded_binarized', models_source: str = 'random', **kwargs)

Bases: ModelObject

Combine models using an ML model across validations.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:
  • df (pandas.DataFrame) – Datetime Indexed

  • regressor (numpy.Array) – additional regressor

get_new_params(method: str = 'random')

Returns dict of new parameters for parameter tuning

get_params()

Return dict of current parameters

predict(forecast_length: int | None = None, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts
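
A hedged sketch of the documented models list structure; empty ModelParameters/TransformationParameters dicts are used here, as in the class defaults.

import numpy as np
import pandas as pd
from autots.models.mlensemble import MLEnsemble

idx = pd.date_range("2023-01-01", periods=180, freq="D")
df = pd.DataFrame({"sales": np.random.rand(180)}, index=idx)

models = [
    {"Model": "SeasonalityMotif", "ModelParameters": {}, "TransformationParameters": {}},
    {"Model": "MetricMotif", "ModelParameters": {}, "TransformationParameters": {}},
]
ens = MLEnsemble(forecast_length=10, models=models, num_validations=2)
ens.fit(df)                          # fits the component models across validations
prediction = ens.predict(forecast_length=10)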

autots.models.mlensemble.create_feature(df_train, models, forecast_length, future_regressor_train=None, future_regressor_forecast=None, datepart_method=None)

autots.models.model_list module

Lists of models grouped by aspects.

autots.models.model_list.auto_model_list(n_jobs, n_series, frequency)
autots.models.model_list.model_list_to_dict(model_list)

Convert various possibilities to dict.

autots.models.neural_forecast module

Nixtla’s NeuralForecast. Be warned, as of writing, their package has commercial restrictions.

class autots.models.neural_forecast.NeuralForecast(name: str = 'NeuralForecast', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2023, verbose: int = 0, forecast_length: int = 28, regression_type: str | None = None, n_jobs: int = 1, model='LSTM', loss='MQLoss', input_size='2ForecastLength', max_steps=100, learning_rate=0.001, early_stop_patience_steps=-1, activation='ReLU', scaler_type='robust', model_args={}, point_quantile=None, **kwargs)

Bases: ModelObject

See NeuralForecast documentation for details.

temp[‘ModelParameters’].str.extract(‘model”: “([a-zA-Z]+)’)

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – default None

  • model (str or object) – string aliases or passed to the models arg of NeuralForecast

  • model_args (dict) – for all model args that aren’t in default list, run get_new_params for default

fit(df, future_regressor=None, static_regressor=None, regressor_per_series=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length=None, future_regressor=None, just_point_forecast=False, regressor_per_series=None)
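
A hedged usage sketch, assuming the neuralforecast package is installed; the parameter values are the documented defaults or close to them.

import numpy as np
import pandas as pd
from autots.models.neural_forecast import NeuralForecast

idx = pd.date_range("2023-01-01", periods=200, freq="D")
df = pd.DataFrame({"y1": np.random.rand(200)}, index=idx)

model = NeuralForecast(model="LSTM", loss="MQLoss", max_steps=50, forecast_length=28)
model.fit(df)
prediction = model.predict(forecast_length=28)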

autots.models.prophet module

Facebook’s Prophet

Since Prophet install can be finicky on Windows, it will be an optional dependency.

class autots.models.prophet.FBProphet(name: str = 'FBProphet', frequency: str = 'infer', prediction_interval: float = 0.9, holiday: bool = False, regression_type: str | None = None, holiday_country: str = 'US', yearly_seasonality='auto', weekly_seasonality='auto', daily_seasonality='auto', growth: str = 'linear', n_changepoints: int = 25, changepoint_prior_scale: float = 0.05, seasonality_mode: str = 'additive', changepoint_range: float = 0.8, changepoint_spacing: int = 60, seasonality_prior_scale: float = 10.0, weekly_seasonality_prior_scale: float | None = None, yearly_seasonality_prior_scale: float | None = None, yearly_seasonality_order: int | None = None, holidays_prior_scale: float = 10.0, trend_phi: float = 1, random_seed: int = 2024, verbose: int = 0, n_jobs: int | None = None)

Bases: ModelObject

Facebook’s Prophet

‘thou shall count to 3, no more, no less, 3 shall be the number thou shall count, and the number of the counting shall be 3. 4 thou shall not count, neither count thou 2, excepting that thou then proceed to 3.’ -Python

For params see: https://facebook.github.io/prophet/docs/diagnostics.html

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • holiday (bool) – If true, include holidays

  • regression_type (str) – type of regression (None, ‘User’)

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast: bool = False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts
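
A hedged usage sketch; prophet is an optional dependency per the note above, so this assumes it is installed.

import numpy as np
import pandas as pd
from autots.models.prophet import FBProphet

idx = pd.date_range("2022-01-01", periods=365, freq="D")
df = pd.DataFrame({"demand": np.random.rand(365)}, index=idx)

model = FBProphet(holiday=True, holiday_country="US", seasonality_mode="additive")
model.fit(df)
prediction = model.predict(forecast_length=30)
bounds = (prediction.lower_forecast, prediction.upper_forecast)  # at prediction_interval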

class autots.models.prophet.NeuralProphet(name: str = 'NeuralProphet', frequency: str = 'infer', prediction_interval: float = 0.9, holiday: bool = False, regression_type: str | None = None, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, n_jobs: int | None = None, growth: str = 'off', n_changepoints: int = 10, changepoints_range: float = 0.9, trend_reg: float = 0, trend_reg_threshold: bool = False, ar_sparsity: float | None = None, yearly_seasonality: str = 'auto', weekly_seasonality: str = 'auto', daily_seasonality: str = 'auto', seasonality_mode: str = 'additive', seasonality_reg: float = 0, n_lags: int = 0, num_hidden_layers: int = 0, d_hidden: int | None = None, learning_rate: float | None = None, loss_func: str = 'Huber', train_speed: int | None = None, normalize: str = 'auto')

Bases: ModelObject

Facebook’s Prophet got caught in a net.

n_jobs is implemented here but it should be set to 1. PyTorch already maxes out cores in all observed cases.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • holiday (bool) – If true, include holidays

  • regression_type (str) – type of regression (None, ‘User’)

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast: bool = False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

autots.models.prophet.get_changepoints(training_start_ds, training_end_ds, changepoint_spacing, changepoint_distance_end, custom_changepoints='')

Create the distinct uniform changepoint pattern used in Ecosystem Analytics. This is likely to be replaced in the future. It was likely designed the way it was to have exact control over where the last potential changepoint could be, which is around the 93% mark of the data, and then to set a relatively coarse, uniform grid going way back in time. (sourced from the work of Benn O)

Parameters:
  • training_start_ds – Pandas datetime or string date of earliest historical training date

  • training_end_ds – Pandas datetime or string date of last training date (used in model fitting)

  • changepoint_spacing (int) – Number of days between potential changepoints

  • changepoint_distance_end (int) – Number of days from the present into the past at which to start the uniform changepoint grid. This will also be the interval between potential changepoints

  • custom_changepoints (string) – comma separated dates in form YYYY-MM-DD. No additional quotations are necessary (e.g., “2020-10-12,2020-11-15”)

Returns:

a pandas Series (dtype datetime64[ns]) of potential changepoints for Prophet
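
A hedged example call matching the parameters above; the dates and spacings are arbitrary.

import pandas as pd
from autots.models.prophet import get_changepoints

changepoints = get_changepoints(
    training_start_ds=pd.Timestamp("2020-01-01"),
    training_end_ds=pd.Timestamp("2023-01-01"),
    changepoint_spacing=60,        # days between potential changepoints
    changepoint_distance_end=90,   # uniform grid begins 90 days back from the end
    custom_changepoints="2021-03-15,2022-07-01",
)
# returns a pandas Series of datetime64[ns] changepoints to hand to Prophet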

autots.models.pytorch module

Created on Tue May 24 13:32:12 2022

@author: Colin

class autots.models.pytorch.PytorchForecasting(name: str = 'PytorchForecasting', frequency: str = 'infer', prediction_interval: float = 0.9, regression_type: str | None = None, holiday_country: str = 'US', random_seed: int = 2022, verbose: int = 0, n_jobs: int = 1, forecast_length: int = 90, max_epochs: int = 100, batch_size: int = 128, max_encoder_length: int = 12, learning_rate: float = 0.03, hidden_size: int = 32, n_layers: int = 2, dropout: float = 0.1, datepart_method: str = 'simple', add_target_scales: bool = False, lags: dict = {}, target_normalizer: str = 'EncoderNormalizer', model: str = 'TemporalFusionTransformer', quantiles: list = [0.01, 0.1, 0.22, 0.36, 0.5, 0.64, 0.78, 0.9, 0.99], model_kwargs: dict = {}, trainer_kwargs: dict = {}, callbacks: list | None = None, **kwargs)

Bases: ModelObject

pytorch-forecasting for the world’s over-obsession with neural nets.

This is generally going to require more data than most other models.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • model_kwargs (dict) – passed to pytorch-forecasting model on creation (for those not already defined above)

  • trainer_kwargs (dict) – passed to pt lightning Trainer

  • callbacks (list) – pt lightning callbacks

  • quantiles (list) – [0.1, 0.5, 0.9] or similar for quantileloss models

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed, wide style data

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int | None = None, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead must be equal or lesser to that specified in init

  • regressor (numpy.Array) – additional regressor

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts
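
A hedged sketch, assuming pytorch-forecasting and pytorch-lightning are installed; as noted above, this model generally wants more data than most.

import numpy as np
import pandas as pd
from autots.models.pytorch import PytorchForecasting

idx = pd.date_range("2020-01-01", periods=500, freq="D")
df = pd.DataFrame(np.random.rand(500, 4), index=idx, columns=list("abcd"))

model = PytorchForecasting(
    model="TemporalFusionTransformer", forecast_length=90,
    max_epochs=10, batch_size=128,
)
model.fit(df)
prediction = model.predict(forecast_length=30)  # must be <= forecast_length from __init__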

autots.models.sklearn module

Sklearn dependent models

Decision Tree, Elastic Net, Random Forest, MLPRegressor, KNN, Adaboost

class autots.models.sklearn.ComponentAnalysis(name: str = 'ComponentAnalysis', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, n_components: int = 10, forecast_length: int = 14, model: str = 'GLS', model_parameters: dict = {}, decomposition: str = 'PCA', n_jobs: int = -1)

Bases: ModelObject

Forecasting on principal components.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • model (str) – An AutoTS model str

  • model_parameters (dict) – parameters to pass to AutoTS model

  • n_components (int) – int or ‘NthN’ number of components to use

  • decomposition (str) – decomposition method to use from scikit-learn

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int | None = None, future_regressor=None, just_point_forecast: bool = False)

Generate forecast data immediately following dates of .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.sklearn.DatepartRegression(name: str = 'DatepartRegression', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, forecast_length: int = 1, n_jobs: int | None = None, regression_model: dict = {'model': 'DecisionTree', 'model_params': {'max_depth': 5, 'min_samples_split': 2}}, datepart_method: str = 'expanded', polynomial_degree: int | None = None, holiday_countries_used: bool = False, lags: int | None = None, forward_lags: int | None = None, regression_type: str | None = None, **kwargs)

Bases: ModelObject

Regression not on the series values but on datetime features.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

fit(df, future_regressor=None, static_regressor=None, regressor_per_series=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

fit_data(df, future_regressor=None)
get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int | None = None, future_regressor=None, just_point_forecast: bool = False, df=None, regressor_per_series=None)

Generate forecast data immediately following dates of index supplied to .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • future_regressor (pandas.DataFrame or Series) – Datetime Indexed

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts
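
A hedged sketch of the nested regression_model dict and the fit/predict flow; the values shown are simply the documented defaults.

import numpy as np
import pandas as pd
from autots.models.sklearn import DatepartRegression

idx = pd.date_range("2022-01-01", periods=400, freq="D")
df = pd.DataFrame({"visits": np.random.rand(400)}, index=idx)

model = DatepartRegression(
    regression_model={"model": "DecisionTree",
                      "model_params": {"max_depth": 5, "min_samples_split": 2}},
    datepart_method="expanded",
)
model.fit(df)                        # features are built from the datetime index, not lags
prediction = model.predict(forecast_length=21)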

class autots.models.sklearn.MultivariateRegression(name: str = 'MultivariateRegression', frequency: str = 'infer', prediction_interval: float = 0.9, regression_type: str | None = None, holiday_country: str = 'US', verbose: int = 0, random_seed: int = 2020, forecast_length: int = 28, regression_model: dict = {'model': 'RandomForest', 'model_params': {}}, holiday: bool = False, mean_rolling_periods: int = 30, macd_periods: int | None = None, std_rolling_periods: int = 7, max_rolling_periods: int = 7, min_rolling_periods: int = 7, ewm_var_alpha: float | None = None, quantile90_rolling_periods: int | None = None, quantile10_rolling_periods: int | None = None, ewm_alpha: float = 0.5, additional_lag_periods: int | None = None, abs_energy: bool = False, rolling_autocorr_periods: int | None = None, nonzero_last_n: int | None = None, datepart_method: str | None = None, polynomial_degree: int | None = None, window: int = 5, probabilistic: bool = False, scale_full_X: bool = False, quantile_params: dict = {'learning_rate': 0.1, 'max_depth': 20, 'min_samples_leaf': 4, 'min_samples_split': 5, 'n_estimators': 250}, cointegration: str | None = None, cointegration_lag: int = 1, series_hash: bool = False, frac_slice: float | None = None, n_jobs: int = -1, **kwargs)

Bases: ModelObject

Regression-framed approach to forecasting using sklearn. A multivariate version of rolling regression: i.e., each series is lagged independently but modeled together.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • holiday (bool) – If true, include holiday flags

  • regression_type (str) – type of regression (None, ‘User’)

base_scaler(df)
fit(df, future_regressor=None, static_regressor=None, regressor_per_series=None)

Train algorithm given data supplied.

Parameters:
  • df (pandas.DataFrame) – Datetime Indexed

  • future_regressor (pandas.DataFrame or Series) – Datetime Indexed

fit_data(df, future_regressor=None, static_regressor=None, regressor_per_series=None)
get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int | None = None, just_point_forecast: bool = False, future_regressor=None, df=None, regressor_per_series=None)

Generate forecast data immediately following dates of index supplied to .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead ignored here for this model, must be set in __init__ before .fit()

  • future_regressor (pd.DataFrame) – additional regressor

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

scale_data(df)
to_origin_space(df, trans_method='forecast', components=False, bounds=False)

Take transformed outputs back to original feature space.
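
A hedged sketch; per the predict() note above, forecast_length is effectively fixed in __init__ before .fit(), and the argument to predict() is ignored.

import numpy as np
import pandas as pd
from autots.models.sklearn import MultivariateRegression

idx = pd.date_range("2022-01-01", periods=300, freq="D")
df = pd.DataFrame(np.random.rand(300, 3), index=idx, columns=["x", "y", "z"])

model = MultivariateRegression(
    forecast_length=28,                                   # set before .fit()
    regression_model={"model": "RandomForest", "model_params": {}},
    mean_rolling_periods=30, window=5,
)
model.fit(df)
prediction = model.predict()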

class autots.models.sklearn.PreprocessingRegression(name: str = 'PreprocessingRegression', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2023, verbose: int = 0, window_size: int = 10, regression_model: dict = {'model': 'RandomForest', 'model_params': {}}, transformation_dict=None, max_history: int | None = None, one_step: bool = False, processed_y: bool = False, normalize_window: bool = False, datepart_method: str = 'common_fourier', forecast_length: int = 28, regression_type: str | None = None, n_jobs: int = -1, **kwargs)

Bases: ModelObject

Regression using the last n values as the basis of training data.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – default None

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

fit_data(df, future_regressor=None)
get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int | None = None, future_regressor=None, just_point_forecast: bool = False, df=None)

Generate forecast data immediately following dates of .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.sklearn.RandomFourierEncoding(n_components=100, sigma=1.0, random_state=None)

Bases: object

fit(X, y=None)
transform(X)
class autots.models.sklearn.RollingRegression(name: str = 'RollingRegression', frequency: str = 'infer', prediction_interval: float = 0.9, regression_type: str | None = None, holiday_country: str = 'US', verbose: int = 0, random_seed: int = 2020, regression_model: dict = {'model': 'ExtraTrees', 'model_params': {}}, holiday: bool = False, mean_rolling_periods: int = 30, macd_periods: int | None = None, std_rolling_periods: int = 7, max_rolling_periods: int = 7, min_rolling_periods: int = 7, ewm_var_alpha: int | None = None, quantile90_rolling_periods: int | None = None, quantile10_rolling_periods: int | None = None, ewm_alpha: float = 0.5, additional_lag_periods: int = 7, abs_energy: bool = False, rolling_autocorr_periods: int | None = None, nonzero_last_n: int | None = None, add_date_part: str | None = None, polynomial_degree: int | None = None, x_transform: str | None = None, window: int | None = None, n_jobs: int = -1, **kwargs)

Bases: ModelObject

General regression-framed approach to forecasting using sklearn.

Who are you who are so wise in the ways of science? I am Arthur, King of the Britons. -Python

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • holiday (bool) – If true, include holiday flags

  • regression_type (str) – type of regression (None, ‘User’)

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:
  • df (pandas.DataFrame) – Datetime Indexed

  • future_regressor (pandas.DataFrame or Series) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast: bool = False)

Generate forecast data immediately following dates of index supplied to .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.sklearn.UnivariateRegression(name: str = 'UnivariateRegression', frequency: str = 'infer', prediction_interval: float = 0.9, regression_type: str | None = None, holiday_country: str = 'US', verbose: int = 0, random_seed: int = 2020, forecast_length: int = 7, regression_model: dict = {'model': 'ExtraTrees', 'model_params': {}}, holiday: bool = False, mean_rolling_periods: int = 30, macd_periods: int | None = None, std_rolling_periods: int = 7, max_rolling_periods: int = 7, min_rolling_periods: int = 7, ewm_var_alpha: float | None = None, ewm_alpha: float = 0.5, additional_lag_periods: int = 7, abs_energy: bool = False, rolling_autocorr_periods: int | None = None, add_date_part: str | None = None, polynomial_degree: int | None = None, x_transform: str | None = None, window: int | None = None, n_jobs: int = -1, **kwargs)

Bases: ModelObject

Regression-framed approach to forecasting using sklearn. A univariate version of rolling regression: i.e., each series is modeled independently.

“You’ve got to think for yourselves! You’re ALL individuals!” “Yes! We’re all individuals!” - Python

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • holiday (bool) – If true, include holiday flags

  • regression_type (str) – type of regression (None, ‘User’)

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:
  • df (pandas.DataFrame) – Datetime Indexed

  • future_regressor (pandas.DataFrame or Series) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int | None = None, just_point_forecast: bool = False, future_regressor=None)

Generate forecast data immediately following dates of index supplied to .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead ignored here for this model, must be set in __init__ before .fit()

  • future_regressor (pd.DataFrame) – additional regressor

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.sklearn.VectorizedMultiOutputGPR(kernel='rbf', noise_var=10, gamma=0.1, lambda_prime=0.1, p=7)

Bases: object

Gaussian Process Regressor.

Parameters:
  • kernel (str) – linear, polynomial, rbf, periodic, locally_periodic, exponential

  • noise_var (float) – noise variance, effectively regularization. Values close to zero give little regularization; larger values allow more model flexibility and noise tolerance.

  • gamma – For the RBF, Exponential, and Locally Periodic kernels, γ is essentially an inverse length scale. A reasonable range might be [0.1, 1, 10, 100].

  • lambda – For the Periodic and Locally Periodic kernels, lambda_ determines the smoothness of the periodic function. A reasonable range might be [0.1,1,10,100].

  • lambda_prime – Specifically for the Locally Periodic kernel, this determines the smoothness of the periodic component. Same range as lambda_.

  • p – The period parameter for the Periodic and Locally Periodic kernels, such as 7 or 365.25 for daily data.

fit(X, Y)
predict(X)
predict_proba(X)
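
A hedged sketch on plain numpy arrays; the 2-D shapes of X and Y are assumptions, not documented requirements.

import numpy as np
from autots.models.sklearn import VectorizedMultiOutputGPR

X = np.arange(100, dtype=float).reshape(-1, 1)   # e.g. a single time-step feature
Y = np.random.rand(100, 3)                       # three output series

gpr = VectorizedMultiOutputGPR(kernel="periodic", noise_var=10, p=7)
gpr.fit(X, Y)
point = gpr.predict(X[-10:])
# predict_proba(X) is also available for uncertainty estimates
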
class autots.models.sklearn.WindowRegression(name: str = 'WindowRegression', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2022, verbose: int = 0, window_size: int = 10, regression_model: dict = {'model': 'RandomForest', 'model_params': {}}, input_dim: str = 'univariate', output_dim: str = 'forecast_length', normalize_window: bool = False, shuffle: bool = False, forecast_length: int = 1, max_windows: int = 5000, fourier_encoding_components: float | None = None, scale: bool = False, datepart_method: str | None = None, regression_type: str | None = None, n_jobs: int = -1, **kwargs)

Bases: ModelObject

Regression using the last n values as the basis of training data.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – default None

fit(df, future_regressor=None, static_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

fit_data(df, future_regressor=None)
get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int | None = None, future_regressor=None, just_point_forecast: bool = False, df=None)

Generate forecast data immediately following dates of .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

autots.models.sklearn.generate_classifier_params(model_dict=None, method='default')
autots.models.sklearn.generate_regressor_params(model_dict=None, method='default')

Generate new parameters for input to regressor.

autots.models.sklearn.retrieve_classifier(regression_model: dict = {'model': 'RandomForest', 'model_params': {'bootstrap': False, 'min_samples_leaf': 1, 'n_estimators': 300}}, verbose: int = 0, verbose_bool: bool = False, random_seed: int = 2020, n_jobs: int = 1, multioutput: bool = True)

Convert a model param dict to model object for regression frameworks.

autots.models.sklearn.retrieve_regressor(regression_model: dict = {'model': 'RandomForest', 'model_params': {'bootstrap': False, 'min_samples_leaf': 1, 'n_estimators': 300}}, verbose: int = 0, verbose_bool: bool = False, random_seed: int = 2020, n_jobs: int = 1, multioutput: bool = True)

Convert a model param dict to model object for regression frameworks.

autots.models.sklearn.rolling_x_regressor(df, mean_rolling_periods: int = 30, macd_periods: int | None = None, std_rolling_periods: int = 7, max_rolling_periods: int | None = None, min_rolling_periods: int | None = None, quantile90_rolling_periods: int | None = None, quantile10_rolling_periods: int | None = None, ewm_alpha: float = 0.5, ewm_var_alpha: float | None = None, additional_lag_periods: int = 7, abs_energy: bool = False, rolling_autocorr_periods: int | None = None, nonzero_last_n: int | None = None, add_date_part: str | None = None, holiday: bool = False, holiday_country: str = 'US', polynomial_degree: int | None = None, window: int | None = None, cointegration: str | None = None, cointegration_lag: int = 1)

Generate more features from initial time series.

macd_periods ignored if mean_rolling is None.

Returns a dataframe of statistical features. These will need to be shifted by 1 or more steps to align with Y for forecasting: the index date of the output represents the time at which the prediction is being made, NOT the target datetime. The datepart components, however, represent the NEXT period ahead, which IS the target datetime.
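
A hedged sketch of building the feature matrix and shifting it to line up with targets as described above; the parameter values are illustrative.

import numpy as np
import pandas as pd
from autots.models.sklearn import rolling_x_regressor

idx = pd.date_range("2022-01-01", periods=200, freq="D")
df = pd.DataFrame({"a": np.random.rand(200), "b": np.random.rand(200)}, index=idx)

X = rolling_x_regressor(df, mean_rolling_periods=30, std_rolling_periods=7,
                        ewm_alpha=0.5, add_date_part="simple")
# shift so that features known through t-1 are matched with the target at t
X_shifted = X.shift(1).dropna()
Y = df.reindex(X_shifted.index)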

autots.models.sklearn.rolling_x_regressor_regressor(df, mean_rolling_periods: int = 30, macd_periods: int | None = None, std_rolling_periods: int = 7, max_rolling_periods: int | None = None, min_rolling_periods: int | None = None, quantile90_rolling_periods: int | None = None, quantile10_rolling_periods: int | None = None, ewm_alpha: float = 0.5, ewm_var_alpha: float | None = None, additional_lag_periods: int = 7, abs_energy: bool = False, rolling_autocorr_periods: int | None = None, nonzero_last_n: int | None = None, add_date_part: str | None = None, holiday: bool = False, holiday_country: str = 'US', polynomial_degree: int | None = None, window: int | None = None, future_regressor=None, regressor_per_series=None, static_regressor=None, cointegration: str | None = None, cointegration_lag: int = 1, series_id=None, slice_index=None)

Adds in the future_regressor.

autots.models.statsmodels module

Statsmodels based forecasting models.

Statsmodels documentation can be a bit confusing. It seems standard at first, but each model likes to do things differently. For example: exog, exog_oos, and exog_fc all sometimes mean the same thing.

class autots.models.statsmodels.ARDL(name: str = 'ARDL', frequency: str = 'infer', prediction_interval: float = 0.9, lags: int = 1, trend: str = 'c', order: int = 0, causal: bool = False, regression_type: str = 'holiday', holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, n_jobs: int | None = None, **kwargs)

Bases: ModelObject

ARDL from Statsmodels.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • lags (int) – lags 1 to max

  • trend (str) – n/c/t/ct

  • order (int) – 0 to max

  • regression_type (str) – type of regression (None, ‘User’, or ‘Holiday’)

  • n_jobs (int) – passed to joblib for multiprocessing. Set to none for context manager.

fit(df, future_regressor=None)

Train algorithm given data supplied .

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generate forecast data immediately following dates of index supplied to .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.statsmodels.ARIMA(name: str = 'ARIMA', frequency: str = 'infer', prediction_interval: float = 0.9, p: int = 0, d: int = 1, q: int = 0, regression_type: str | None = None, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, n_jobs: int | None = None, **kwargs)

Bases: ModelObject

ARIMA from Statsmodels.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • p (int) – is the number of autoregressive steps,

  • d (int) – is the number of differences needed for stationarity

  • q (int) – is the number of lagged forecast errors in the prediction.

  • regression_type (str) – type of regression (None, ‘User’, or ‘Holiday’)

  • n_jobs (int) – passed to joblib for multiprocessing. Set to none for context manager.

fit(df, future_regressor=None)

Train algorithm given data supplied .

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

Large p, d, q can be very slow (a p of 30 can take hours).

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generate forecast data immediately following dates of index supplied to .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.statsmodels.DynamicFactor(name: str = 'DynamicFactor', frequency: str = 'infer', prediction_interval: float = 0.9, regression_type: str | None = None, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, k_factors: int = 1, factor_order: int = 0, **kwargs)

Bases: ModelObject

DynamicFactor from Statsmodels

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – type of regression (None, ‘User’, or ‘Holiday’)

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or

if just_point_forecast == True, a dataframe of point forecasts

maModel = DynamicFactor(df_train, freq='MS', k_factors=2, factor_order=2).fit()
maPred = maModel.predict()

class autots.models.statsmodels.DynamicFactorMQ(name: str = 'DynamicFactorMQ', frequency: str = 'infer', prediction_interval: float = 0.9, regression_type: str | None = None, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, factors: int = 1, factor_orders: int = 2, factor_multiplicities: int | None = None, idiosyncratic_ar1: bool = False, maxiter: int = 1000, **kwargs)

Bases: ModelObject

DynamicFactorMQ from Statsmodels

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts
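A minimal sketch of the same fit/predict pattern for this multivariate factor model (toy data and parameter values are illustrative only; estimation can be slow, so maxiter is reduced here):

import numpy as np
import pandas as pd
from autots.models.statsmodels import DynamicFactorMQ

# Three related monthly series sharing a single assumed latent factor
idx = pd.date_range("2018-01-01", periods=60, freq="MS")
df = pd.DataFrame(
    np.random.randn(60, 3).cumsum(axis=0), index=idx, columns=["a", "b", "c"]
)

model = DynamicFactorMQ(factors=1, factor_orders=2, maxiter=200)
model.fit(df)
prediction = model.predict(forecast_length=6)
print(prediction.forecast)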

class autots.models.statsmodels.ETS(name: str = 'ETS', frequency: str = 'infer', prediction_interval: float = 0.9, damped_trend: bool = False, trend: str | None = None, seasonal: str | None = None, seasonal_periods: int | None = None, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, n_jobs: int | None = None, **kwargs)

Bases: ModelObject

Exponential Smoothing from Statsmodels

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • damped_trend (bool) – passed through to statsmodels ETS (formerly just ‘damped’)

  • trend (str) – passed through to statsmodels ETS

  • seasonal (str) – passed through to statsmodels ETS

  • seasonal_periods (int) – passed through to statsmodels ETS

fit(df, future_regressor=None)

Train algorithm given data supplied

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts
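get_new_params() can be used to draw a random hyperparameter set for tuning, which can then be fed back into the constructor; a hedged sketch with toy daily data:

import numpy as np
import pandas as pd
from autots.models.statsmodels import ETS

# Toy daily series with a weekly pattern
idx = pd.date_range("2022-01-01", periods=365, freq="D")
df = pd.DataFrame(
    {"load": 100 + 10 * np.sin(2 * np.pi * idx.dayofweek / 7) + np.random.randn(365)},
    index=idx,
)

# Draw a random parameter set, then fit a fresh model configured with it
params = ETS().get_new_params(method="random")
model = ETS(frequency="infer", **params)
model.fit(df)
prediction = model.predict(forecast_length=14)
print(params)
print(prediction.forecast.tail())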

class autots.models.statsmodels.GLM(name: str = 'GLM', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, regression_type: str | None = None, family='Gaussian', constant: bool = False, changepoint_spacing: int | None = None, verbose: int = 1, n_jobs: int | None = None, **kwargs)

Bases: ModelObject

Simple linear regression from statsmodels

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – type of regression (None, ‘User’)

fit(df, future_regressor=None)

Train algorithm given data supplied

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts
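When regression_type='User', an external regressor is supplied at fit time and again for the forecast horizon at predict time; the following is a hedged sketch of that assumed call pattern (the generic predict docstring above marks the regressor as unused, so treat the shapes and usage here as an illustration only):

import numpy as np
import pandas as pd
from autots.models.statsmodels import GLM

idx = pd.date_range("2023-01-01", periods=120, freq="D")
df = pd.DataFrame({"demand": 10 + 50 * np.random.rand(120)}, index=idx)

# Assumed regressor shapes: one row per training date, then one row per forecast step
regr_train = np.random.randint(0, 2, (120, 1))
regr_future = np.random.randint(0, 2, (14, 1))

model = GLM(family="Gaussian", regression_type="User", verbose=0)
model.fit(df, future_regressor=regr_train)
prediction = model.predict(forecast_length=14, future_regressor=regr_future)
print(prediction.forecast.head())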

class autots.models.statsmodels.GLS(name: str = 'GLS', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, changepoint_spacing: int | None = None, changepoint_distance_end: int | None = None, constant: bool = False, **kwargs)

Bases: ModelObject

Simple linear regression from statsmodels

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

fit(df, future_regressor=None)

Train algorithm given data supplied

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Returns dict of new parameters for parameter tuning

get_params()

Return dict of current parameters

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.statsmodels.Theta(name: str = 'Theta', frequency: str = 'infer', prediction_interval: float = 0.9, deseasonalize: bool = True, use_test: bool = True, difference: bool = False, period: int | None = None, theta: float = 2, use_mle: bool = False, method: str = 'auto', holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, n_jobs: int | None = None, **kwargs)

Bases: ModelObject

Theta Model from Statsmodels

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • **kwargs – params from the Theta Model as per statsmodels

fit(df, future_regressor=None)

Train algorithm given data supplied

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts
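With just_point_forecast=True the predict call returns only a DataFrame of point forecasts instead of a full PredictionObject; a minimal sketch on toy monthly data:

import numpy as np
import pandas as pd
from autots.models.statsmodels import Theta

# Four years of toy monthly data with a gentle upward drift
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
df = pd.DataFrame(
    {"revenue": np.linspace(100, 200, 48) + 5 * np.random.randn(48)}, index=idx
)

model = Theta(deseasonalize=True, period=12, theta=2)
model.fit(df)
point_df = model.predict(forecast_length=12, just_point_forecast=True)
print(point_df)  # plain DataFrame: one column per series, one row per forecast step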

class autots.models.statsmodels.UnobservedComponents(name: str = 'UnobservedComponents', frequency: str = 'infer', prediction_interval: float = 0.9, regression_type: str | None = None, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, n_jobs: int = 1, level: str = 'smooth trend', trend: bool = False, cycle: bool = False, damped_cycle: bool = False, irregular: bool = False, autoregressive: int | None = None, stochastic_cycle: bool = False, stochastic_trend: bool = False, stochastic_level: bool = False, maxiter: int = 100, cov_type: str = 'opg', method: str = 'lbfgs', model_kwargs: dict | None = None, **kwargs)

Bases: ModelObject

UnobservedComponents from Statsmodels.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • model_kwargs (dict) – additional model params to pass through underlying statsmodel

  • regression_type (str) – type of regression (None, ‘User’, or ‘Holiday’)

fit(df, future_regressor=None)

Train algorithm given data supplied

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast: bool = False)

Generate forecast data immediately following dates of index supplied to .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts
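model_kwargs is passed through unchanged to the underlying statsmodels model; the freq_seasonal entry below is an assumption used to illustrate that passthrough, and the toy data and other values are likewise illustrative:

import numpy as np
import pandas as pd
from autots.models.statsmodels import UnobservedComponents

# Toy daily series with a weekly cycle
idx = pd.date_range("2023-01-01", periods=180, freq="D")
df = pd.DataFrame(
    {"traffic": 50 + 5 * np.sin(2 * np.pi * np.arange(180) / 7) + np.random.randn(180)},
    index=idx,
)

model = UnobservedComponents(
    level="local linear trend",
    maxiter=100,
    model_kwargs={"freq_seasonal": [{"period": 7}]},  # assumed passthrough kwarg
)
model.fit(df)
prediction = model.predict(forecast_length=14)
print(prediction.forecast.head())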

class autots.models.statsmodels.VAR(name: str = 'VAR', frequency: str = 'infer', prediction_interval: float = 0.9, regression_type: str | None = None, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, maxlags: int = 15, ic: str = 'fpe', **kwargs)

Bases: ModelObject

VAR from Statsmodels.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – type of regression (None, ‘User’, or ‘Holiday’)

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts
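VAR models all series jointly, so the input DataFrame should contain every related series as a column; a minimal sketch with the lag order chosen by the information criterion (toy data, illustrative values):

import numpy as np
import pandas as pd
from autots.models.statsmodels import VAR

# Four jointly modeled daily series (toy random walks)
idx = pd.date_range("2022-06-01", periods=300, freq="D")
df = pd.DataFrame(
    np.random.randn(300, 4).cumsum(axis=0), index=idx, columns=["w", "x", "y", "z"]
)

model = VAR(maxlags=15, ic="fpe")
model.fit(df)
prediction = model.predict(forecast_length=10)
print(prediction.forecast.shape)  # (10, 4): one column per input series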

class autots.models.statsmodels.VARMAX(name: str = 'VARMAX', frequency: str = 'infer', prediction_interval: float = 0.9, regression_type: str | None = None, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, order: tuple = (1, 0), trend: str = 'c', **kwargs)

Bases: ModelObject

VARMAX from Statsmodels

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – type of regression (None, ‘User’, or ‘Holiday’)

fit(df, future_regressor=None)

Train algorithm given data supplied

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generate forecast data immediately following dates of index supplied to .fit().

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.statsmodels.VECM(name: str = 'VECM', frequency: str = 'infer', prediction_interval: float = 0.9, regression_type: str | None = None, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, deterministic: str = 'n', k_ar_diff: int = 1, seasons: int = 0, coint_rank: int = 1, **kwargs)

Bases: ModelObject

VECM from Statsmodels

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – type of regression (None, ‘User’, or ‘Holiday’)

fit(df, future_regressor=None)

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=None, just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts
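get_params() returns the current parameter dict, which is useful for recording or reusing a configuration; a short sketch (the printed dict is illustrative, not an exact expected output):

from autots.models.statsmodels import VECM

model = VECM(deterministic="n", k_ar_diff=1, seasons=0, coint_rank=1)
print(model.get_params())
# Illustrative output: {'deterministic': 'n', 'k_ar_diff': 1, 'seasons': 0, 'coint_rank': 1, ...}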

autots.models.statsmodels.arima_seek_the_oracle(current_series, args, series)

autots.models.statsmodels.glm_forecast_by_column(current_series, X, Xf, args)

Run one series of GLM and return prediction.

autots.models.tfp module

class autots.models.tfp.TFPRegression(name: str = 'TFPRegression', frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 1, kernel_initializer: str = 'lecun_uniform', optimizer: str = 'adam', loss: str = 'negloglike', epochs: int = 50, batch_size: int = 32, dist: str = 'normal', regression_type: str | None = None)

Bases: ModelObject

Tensorflow Probability regression.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – type of regression (None, ‘User’)

fit(df, future_regressor=[])

Train algorithm given data supplied

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=[], just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

class autots.models.tfp.TFPRegressor(kernel_initializer: str = 'lecun_uniform', optimizer: str = 'adam', loss: str = 'negloglike', epochs: int = 50, batch_size: int = 32, dist: str = 'normal', verbose: int = 1, random_seed: int = 2020)

Bases: object

Wrapper for Tensorflow Keras based RNN.

Parameters:
  • rnn_type (str) – Keras cell type ‘GRU’ or default ‘LSTM’

  • kernel_initializer (str) – passed to first keras LSTM or GRU layer

  • hidden_layer_sizes (tuple) – of len 1 or 3 passed to first keras LSTM or GRU layers

  • optimizer (str) – Passed to keras model.compile

  • loss (str) – Passed to keras model.compile

  • epochs (int) – Passed to keras model.fit

  • batch_size (int) – Passed to keras model.fit

  • verbose (int) – 0, 1 or 2. Passed to keras model.fit

  • random_seed (int) – passed to tf.random.set_seed()

fit(X, Y)

Train the model on dataframes of X and Y.

predict(X, conf_int: float | None = None)

Predict on dataframe of X.
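Unlike the ModelObject classes above, TFPRegressor follows a plain X/Y fit-and-predict pattern; a hedged sketch of that call pattern (requires tensorflow and tensorflow-probability; the data and parameter values are illustrative only):

import numpy as np
import pandas as pd
from autots.models.tfp import TFPRegressor

# Toy regression problem: target is a noisy linear function of two features
X = pd.DataFrame(np.random.randn(500, 2), columns=["f1", "f2"])
Y = pd.DataFrame({"target": 3 * X["f1"] - X["f2"] + 0.1 * np.random.randn(500)})

model = TFPRegressor(epochs=50, batch_size=32, dist="normal", verbose=0)
model.fit(X, Y)
prediction = model.predict(X, conf_int=0.9)  # conf_int is optional per the signature above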

class autots.models.tfp.TensorflowSTS(name: str = 'TensorflowSTS', frequency: str = 'infer', prediction_interval: float = 0.9, regression_type: str | None = None, holiday_country: str = 'US', random_seed: int = 2020, verbose: int = 0, trend: str = 'local', seasonal_periods: int | None = None, ar_order: int | None = None, fit_method: str = 'hmc', num_steps: int = 200)

Bases: ModelObject

STS from TensorflowProbability.

Parameters:
  • name (str) – String to identify class

  • frequency (str) – String alias of datetime index frequency or else ‘infer’

  • prediction_interval (float) – Confidence interval for probabilistic forecast

  • regression_type (str) – type of regression (None, ‘User’, or ‘Holiday’)

fit(df, future_regressor=[])

Train algorithm given data supplied.

Parameters:

df (pandas.DataFrame) – Datetime Indexed

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length: int, future_regressor=[], just_point_forecast=False)

Generates forecast data immediately following dates of index supplied to .fit()

Parameters:
  • forecast_length (int) – Number of periods of data to forecast ahead

  • regressor (numpy.Array) – additional regressor, not used

  • just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts

Returns:

Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts

autots.models.tide module

class autots.models.tide.TiDE(name: str = 'UnivariateRegression', random_seed=42, frequency='D', learning_rate=0.0009999, transform=False, layer_norm=False, holiday=True, dropout_rate=0.3, batch_size=512, hidden_size=256, num_layers=1, hist_len=720, decoder_output_dim=16, final_decoder_hidden=64, num_split=4, min_num_epochs=0, train_epochs=100, patience=10, epoch_len=None, permute=True, normalize=True, gpu_index=0, n_jobs: int = 'auto', forecast_length: int = 14, prediction_interval: float = 0.9, verbose: int = 0)

Bases: ModelObject

Google Research MLP based forecasting approach.

fit(df=None, num_cov_cols=None, cat_cov_cols=None, future_regressor=None)

Training TS code.

get_new_params(method: str = 'random')

Return dict of new parameters for parameter tuning.

get_params()

Return dict of current parameters.

predict(forecast_length='self', future_regressor=None, mode='test', just_point_forecast=False)

class autots.models.tide.TimeCovariates(datetimes, normalized=True, holiday=False)

Bases: object

Extract all time covariates except for holidays.

get_covariates()

Get all time covariates.
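A brief sketch of generating calendar covariates directly (holiday=False avoids the holiday lookup; the exact output layout depends on the implementation):

import pandas as pd
from autots.models.tide import TimeCovariates

dates = pd.date_range("2023-01-01", periods=30, freq="D")
covs = TimeCovariates(dates, normalized=True, holiday=False).get_covariates()
print(covs)  # calendar-derived features (hour, day of week, day of year, etc.) per timestamp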

class autots.models.tide.TimeSeriesdata(df, num_cov_cols, cat_cov_cols, ts_cols, train_range, val_range, test_range, hist_len, pred_len, batch_size, freq='D', normalize=True, epoch_len=None, holiday=False, permute=True)

Bases: object

Data loader class.

test_val_gen(mode='val')

Generator for validation/test data.

tf_dataset(mode='train')

Tensorflow Dataset.

train_gen()

Generator for training data.

autots.models.tide.get_HOLIDAYS()
autots.models.tide.mae_loss(y_pred, y_true)
autots.models.tide.mape(y_pred, y_true)
autots.models.tide.nrmse(y_pred, y_true)
autots.models.tide.rmse(y_pred, y_true)
autots.models.tide.smape(y_pred, y_true)
autots.models.tide.wape(y_pred, y_true)

Module contents

Model Models