Distributed model parallel

The Amazon SageMaker distributed model parallel library is a model parallelism library for training large deep learning models that were previously difficult to train due to GPU memory limitations. The library automatically and efficiently splits a model across multiple GPUs and instances and coordinates model training, allowing you to increase prediction accuracy by creating larger models with more parameters.

You can use the library to automatically partition your existing TensorFlow and PyTorch workloads across multiple GPUs with minimal code changes. The library’s API can be accessed through the Amazon SageMaker SDK.
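The following is a minimal sketch of the kind of changes involved for a PyTorch training script, assuming the library's PyTorch module (smdistributed.modelparallel.torch) and a toy model and dataset invented for illustration; the exact calls and arguments for your version are covered in the PyTorch API documentation.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
import smdistributed.modelparallel.torch as smp

smp.init()                               # initialize the library from the job configuration
torch.cuda.set_device(smp.local_rank())  # bind this process to its GPU

# A toy model and synthetic dataset, for illustration only.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 512), torch.nn.ReLU(), torch.nn.Linear(512, 10)
)
dataset = TensorDataset(torch.randn(1024, 784), torch.randint(0, 10, (1024,)))
loader = DataLoader(dataset, batch_size=64, drop_last=True)

# Wrap the model and optimizer so the library can partition and coordinate them.
model = smp.DistributedModel(model)
optimizer = smp.DistributedOptimizer(torch.optim.Adam(model.parameters()))

@smp.step  # splits each batch into microbatches and pipelines them across partitions
def train_step(model, data, target):
    output = model(data)
    loss = F.cross_entropy(output, target)
    model.backward(loss)                 # use model.backward instead of loss.backward
    return loss

for data, target in loader:
    data, target = data.cuda(), target.cuda()
    optimizer.zero_grad()
    loss_mb = train_step(model, data, target)
    loss = loss_mb.reduce_mean()         # average the per-microbatch losses
    optimizer.step()
```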

Use the following sections to learn more about model parallelism and the library.

Use with the SageMaker Python SDK

Use the following page to learn how to configure and enable distributed model parallelism when you set up an Amazon SageMaker Python SDK Estimator.
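As a rough sketch, the library is enabled through the Estimator's distribution argument. The parameter values below (partitions, microbatches, pipeline, optimize, and the MPI settings), as well as the framework and Python versions, the IAM role, and the S3 path, are illustrative assumptions; consult the configuration page for the options supported by your library version.

```python
from sagemaker.pytorch import PyTorch

# Illustrative model parallel options; tune for your model and instance type.
smp_options = {
    "enabled": True,
    "parameters": {
        "partitions": 2,            # number of model partitions
        "microbatches": 4,          # microbatches for pipelined execution
        "pipeline": "interleaved",
        "optimize": "speed",
    },
}

# The library launches training processes with MPI.
mpi_options = {
    "enabled": True,
    "processes_per_host": 8,
}

estimator = PyTorch(
    entry_point="train.py",         # your training script
    role="<your-iam-role>",         # placeholder
    instance_type="ml.p3.16xlarge",
    instance_count=1,
    framework_version="1.8.1",      # example version
    py_version="py36",
    distribution={
        "smdistributed": {"modelparallel": smp_options},
        "mpi": mpi_options,
    },
)

estimator.fit("s3://<your-bucket>/<training-data-prefix>")
```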

API Documentation

The library contains a Common API that is shared across frameworks, as well as APIs that are specific to supported frameworks, TensorFlow and PyTorch.

Select a version to see the API documentation for that version. To use the library, reference the Common API documentation alongside the framework-specific API documentation.

We recommend using this documentation alongside SageMaker Distributed Model Parallel in the Amazon SageMaker developer guide.

Important

The model parallel library supports only training jobs that use CUDA 11. When you define a PyTorch or TensorFlow Estimator with the modelparallel parameter's enabled option set to True, CUDA 11 is used. When you extend or customize your own training image, you must use a CUDA 11 base image. See Extend or Adapt A Docker Container that Contains the Model Parallel Library for more information.
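If you bring your own training image, the Estimator can reference it through image_uri. This is a hedged sketch only: the URI below is a placeholder for an image you have built from a CUDA 11 base and pushed to Amazon ECR, not a real repository.

```python
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",
    role="<your-iam-role>",
    instance_type="ml.p3.16xlarge",
    instance_count=1,
    # Placeholder URI: your own image, built from a CUDA 11 base image
    # and pushed to Amazon ECR.
    image_uri="<account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:<cuda11-tag>",
    distribution={
        "smdistributed": {"modelparallel": {"enabled": True}},
        "mpi": {"enabled": True, "processes_per_host": 8},
    },
)
```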

Release Notes

New features, bug fixes, and improvements are regularly made to the SageMaker distributed model parallel library.