Server data from the Official MCP Registry
Automated time series forecasting with model search, anomaly detection, and event risk analysis
Valid MCP server (1 strong, 1 medium validity signal). 3 known CVEs in dependencies (0 critical, 1 high severity). ⚠️ Package registry links to a different repository than the scanned source. Imported from the Official MCP Registry.
5 files analyzed · 4 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-winedarksea-autots": {
"args": [
"autots"
],
"command": "uvx"
}
}
}

From the project's GitHub README:
AutoTS is a time series package for Python designed for rapidly deploying high-accuracy forecasts at scale.
In 2023, AutoTS won in the M6 forecasting competition, delivering the highest performance investment decisions across 12 months of stock market forecasting.
There are dozens of forecasting models usable in the sklearn style of .fit() and .predict().
These include naive, statistical, machine learning, and deep learning models.
Additionally, there are over 30 time series specific transforms usable in the sklearn style of .fit(), .transform() and .inverse_transform().
All of these function directly on pandas DataFrames, without the need for conversion to proprietary objects.
All models support forecasting multivariate (multiple time series) outputs and also support probabilistic (upper/lower bound) forecasts. Most models can readily scale to tens and even hundreds of thousands of input series. Many models also support passing in user-defined exogenous regressors.
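To illustrate the shape of that output (this is not AutoTS code; the series names and the naive forecasting rule are invented for this sketch), a multivariate point forecast with upper and lower probabilistic bounds can be built in pandas/NumPy like so:

```python
import numpy as np
import pandas as pd

# Hypothetical history: two series in the wide format AutoTS uses
# (DatetimeIndex, one column per series).
idx = pd.date_range("2024-01-01", periods=60, freq="D")
rng = np.random.default_rng(0)
history = pd.DataFrame(
    {"sales": rng.normal(100, 10, 60), "visits": rng.normal(500, 50, 60)},
    index=idx,
)

# A naive "repeat the last value" point forecast for both series at once,
# with rough 90% bounds derived from the historical day-to-day spread.
horizon = pd.date_range(idx[-1] + pd.Timedelta(days=1), periods=7, freq="D")
point = pd.DataFrame(
    np.tile(history.iloc[-1].to_numpy(), (7, 1)),
    index=horizon, columns=history.columns,
)
spread = history.diff().std() * 1.645  # half-width of an approximate 90% interval
upper = point + spread
lower = point - spread

print(point.shape, upper.shape, lower.shape)  # each is (7, 2): 7 steps, 2 series
```

AutoTS returns the analogous three DataFrames from a prediction object as `prediction.forecast`, `prediction.upper_forecast`, and `prediction.lower_forecast` (shown in the example further below).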
These models are all designed for integration in an AutoML feature search which automatically finds the best models, preprocessing, and ensembling for a given dataset through genetic algorithms.
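As a toy illustration of that genetic idea (this is not AutoTS's actual search code; the parameter space, scoring function, and operators here are invented), a search over model "templates" might look like:

```python
import random

random.seed(42)

# Invented hyperparameter space standing in for an AutoTS template.
SPACE = {"model": ["naive", "ets", "arima"], "lag": [1, 7, 14], "scale": [True, False]}

def random_template():
    return {k: random.choice(v) for k, v in SPACE.items()}

def score(t):  # pretend cross-validation error; lower is better
    return {"naive": 3.0, "ets": 2.0, "arima": 1.5}[t["model"]] + t["lag"] * 0.01

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(t):
    k = random.choice(list(SPACE))
    return {**t, k: random.choice(SPACE[k])}

population = [random_template() for _ in range(10)]
initial_best = min(map(score, population))
for generation in range(4):  # cf. max_generations in AutoTS
    population.sort(key=score)
    parents = population[:4]  # keep the best templates
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(6)]
    population = parents + children

best = min(population, key=score)
```

Because the best templates are carried between generations, the final score can never be worse than the initial one; AutoTS applies the same principle to whole pipelines of preprocessing, model, and ensemble choices.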
Horizontal and mosaic style ensembles are the flagship ensembling types, allowing each series to receive the most accurate possible models while still maintaining scalability.
A combination of metrics and cross-validation options, the ability to apply subsets and weighting, regressor generation tools, simulation forecasting mode, event risk forecasting, live datasets, template import and export, plotting, and a collection of data shaping parameters round out the available feature set.
pip install autots
This includes dependencies for basic models, but additional packages are required for some models and methods.
Be advised there are several other projects that have chosen similar names, so make sure you are on the right AutoTS code, papers, and documentation.
Input data for AutoTS is expected to come in either a long or a wide format:
- Wide format: a pandas.DataFrame with a pandas.DatetimeIndex and each column a distinct series.
- Long format: three columns: a date column (ideally already in a pandas-recognized datetime format), a series ID column (for a single series, the ID can be = None), and a value column.
- For long data, the column names are passed to .fit() as date_col, id_col, and value_col. No parameters are needed for wide data.
- Lower-level functions are only designed for wide-style data.
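A minimal pandas sketch of the two shapes (the column names 'datetime', 'series_id', and 'value' match those passed to .fit() in the example below):

```python
import pandas as pd

# Long format: one row per (date, series, value) observation.
long_df = pd.DataFrame({
    "datetime": pd.to_datetime(["2024-01-01", "2024-01-02"] * 2),
    "series_id": ["a", "a", "b", "b"],
    "value": [1.0, 2.0, 10.0, 20.0],
})

# Wide format: DatetimeIndex, one column per series. This is the shape
# AutoTS's lower-level functions expect.
wide_df = long_df.pivot(index="datetime", columns="series_id", values="value")
print(wide_df)
```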
# also load: _hourly, _monthly, _weekly, _yearly, or _live_daily
from autots import AutoTS, load_daily
# sample datasets can be used in either of the long or wide import shapes
long = False
df = load_daily(long=long)
model = AutoTS(
    forecast_length=21,
    frequency="infer",
    prediction_interval=0.9,
    ensemble=None,
    model_list="superfast",  # "fast", "default", "fast_parallel"
    transformer_list="fast",  # "superfast",
    drop_most_recent=1,
    max_generations=4,
    num_validations=2,
    validation_method="backwards",
)
model = model.fit(
    df,
    date_col='datetime' if long else None,
    value_col='value' if long else None,
    id_col='series_id' if long else None,
)
prediction = model.predict()
# plot a sample
prediction.plot(
    model.df_wide_numeric,
    series=model.df_wide_numeric.columns[0],
    start_date="2019-01-01",
)
# Print the details of the best model
print(model)
# point forecasts dataframe
forecasts_df = prediction.forecast
# upper and lower forecasts
forecasts_up, forecasts_low = prediction.upper_forecast, prediction.lower_forecast
# accuracy of all tried model results
model_results = model.results()
# and aggregated from cross validation
validation_results = model.results("validation")
The lower-level API, in particular the large section of time series transformers in the scikit-learn style, can also be utilized independently from the AutoML framework.
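As an illustrative sketch of that sklearn-style pattern (this DifferenceTransformer is invented for this example and is not an AutoTS class), a transformer operating directly on a wide DataFrame might look like:

```python
import pandas as pd

class DifferenceTransformer:
    """Illustrative (not from AutoTS): first differencing with the
    .fit()/.transform()/.inverse_transform() interface, operating
    directly on a wide pandas DataFrame."""

    def fit(self, df: pd.DataFrame):
        self.first_row_ = df.iloc[[0]]  # kept so differencing can be undone
        return self

    def transform(self, df: pd.DataFrame) -> pd.DataFrame:
        return df.diff().iloc[1:]  # drop the leading NaN row

    def inverse_transform(self, df: pd.DataFrame) -> pd.DataFrame:
        # Prepend the stored first row, then cumulative-sum to recover levels.
        return pd.concat([self.first_row_, df]).cumsum()

idx = pd.date_range("2024-01-01", periods=4, freq="D")
df = pd.DataFrame({"a": [1.0, 3.0, 6.0, 10.0]}, index=idx)
t = DifferenceTransformer().fit(df)
restored = t.inverse_transform(t.transform(df))
print(restored["a"].tolist())  # → [1.0, 3.0, 6.0, 10.0]
```

The round trip (transform then inverse_transform) recovers the original values, which is the contract the AutoTS transformers described above also follow.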
Check out extended_tutorial.md for a more detailed guide to features.
Also take a look at the production_example.py
Tips for speed and large data:
- Use appropriate predefined model lists: superfast (simple naive models) and fast (more complex but still faster models, optimized for many series); fast_parallel (a combination of fast and parallel) or parallel when many CPU cores are available.
- n_jobs usually gets pretty close to optimal with ='auto', but adjust as necessary for the environment. See the dictionary of predefined lists with from autots.models.model_list import model_lists.
- Use the subset parameter when there are many similar series; subset=100 will often generalize well for tens of thousands of similar series. If using subset, passing weights for series will weight subset selection towards higher-priority series.
- Set model_interrupt=True to skip only the current model when you hit Ctrl+C. Tap Ctrl+C a second time within 1.5 seconds to end the entire run, or pass something like model_interrupt={"mode": "skip", "double_press_window": 1.2} to tighten or loosen the window.
- Use the result_file argument of .fit(), which saves progress after each generation; helpful if a long training run is being done. Use import_results to recover.
- Setting transformer_max_depth to a lower number (say, 2) will increase speed. Also utilize transformer_list == 'fast' or 'superfast'.
- ensemble='horizontal-max' with model_list='no_shared_fast' can scale relatively well given many CPU cores, because each model is only run on the series it is needed for.
- Reducing num_validations and models_to_validate will decrease runtime but may lead to poorer model selections.
- Upsampling can be done by adjusting frequency and aggfunc, but is probably best done before passing data into AutoTS.
- Setting runtime_weighting in metric_weighting to a higher value will guide the search towards faster models, although it may come at the expense of accuracy.

For the MCP server, see the README.md in ./autots/mcp. Note: install with pip install autots[mcp] for full dependencies, or the equivalent pip install autots-mcp.
{
"mcpServers": {
"autots": {
"command": "autots-mcp"
}
}
}
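One of the speed tips above suggests raising runtime_weighting in metric_weighting. A hedged sketch of such a configuration (the specific keys and values here are assumptions to be checked against the AutoTS documentation, not verified defaults):

```python
# Hypothetical metric_weighting favoring faster models; key names assumed
# from AutoTS's documented accuracy metrics.
metric_weighting = {
    "smape_weighting": 5,
    "mae_weighting": 2,
    "rmse_weighting": 2,
    "spl_weighting": 3,
    "runtime_weighting": 2,  # raised above the usual small default to prefer fast models
}
# Passed to the search, e.g.: AutoTS(..., metric_weighting=metric_weighting)
```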
mcp-name: io.github.winedarksea/AutoTS
flowchart TD
A[Initiate AutoTS Model] --> B[Import Template]
B --> C[Load Data]
C --> D[Split Data Into Initial Train/Test Holdout]
D --> E[Run Initial Template Models]
E --> F[Evaluate Accuracy Metrics on Results]
F --> G[Generate Score from Accuracy Metrics]
G --> H{Max Generations Reached or Timeout?}
H -->|No| I[Evaluate All Previous Templates]
I --> J[Genetic Algorithm Combines Best Results and New Random Parameters into New Template]
J --> K[Run New Template Models and Evaluate]
K --> G
H -->|Yes| L[Select Best Models by Score for Validation Template]
L --> M[Run Validation Template on Additional Holdouts]
M --> N[Evaluate and Score Validation Results]
N --> O{Create Ensembles?}
O -->|Yes| P[Generate Ensembles from Validation Results]
P --> Q[Run Ensembles Through Validation]
Q --> N
O -->|No| R[Export Best Models Template]
R --> S[Select Single Best Model]
S --> T[Generate Future Time Forecast]
T --> U[Visualize Results]
R -->|Import Best Models Template| B
If you wish to cite AutoTS in an academic work, the following paper may be used.
Colin Catlin, Adaptive forecasting in dynamic markets: An evaluation of AutoTS within the M6 competition, International Journal of Forecasting, Volume 41, Issue 4, 2025, Pages 1485-1493, ISSN 0169-2070, https://doi.org/10.1016/j.ijforecast.2025.08.004.
Also known as Project CATS (Catlin's Automated Time Series) hence the logo.