Amazon SageMaker Python SDK¶
The Amazon SageMaker Python SDK is an open-source library for training and deploying machine learning models on Amazon SageMaker.
With the SDK, you can train and deploy models using popular deep learning frameworks, algorithms provided by Amazon, or your own algorithms built into SageMaker-compatible Docker images.
Here you’ll find an overview and API documentation for the SageMaker Python SDK. The project homepage is on GitHub: https://github.com/aws/sagemaker-python-sdk, where you can find the SDK source code and installation instructions.
- Using the SageMaker Python SDK
- Train a Model with the SageMaker Python SDK
- Using Models Trained Outside of Amazon SageMaker
- SageMaker Automatic Model Tuning
- SageMaker Serverless Inference
- SageMaker Batch Transform
- Local Mode
- Secure Training and Inference with VPC
- Secure Training with Network Isolation (Internet-Free) Mode
- Inference Pipelines
- SageMaker Workflow
- SageMaker Model Monitoring
- SageMaker Debugger
- SageMaker Processing
- Use Version 2.x of the SageMaker Python SDK
The SageMaker Python SDK APIs support managed training and inference for a variety of machine learning frameworks.
SageMaker First-Party Algorithms¶
Amazon SageMaker provides implementations of some common machine learning algorithms optimized for GPU architectures and massive datasets.
SageMaker Workflow¶
Orchestrate your SageMaker training and inference workflows with Airflow and Kubernetes.
Amazon SageMaker Debugger¶
You can use Amazon SageMaker Debugger to automatically detect anomalies while training your machine learning models.
- Amazon SageMaker Debugger
Amazon SageMaker Feature Store¶
You can use Feature Store to store features and associated metadata, so features can be discovered and reused.
Amazon SageMaker Model Monitoring¶
You can use Amazon SageMaker Model Monitoring to automatically detect concept drift by monitoring your machine learning models.
Amazon SageMaker Processing¶
You can use Amazon SageMaker Processing to perform data processing tasks such as data pre- and post-processing, feature engineering, data validation, and model evaluation.