This module contains code to create and manage a SageMaker MultiDataModel.

class sagemaker.multidatamodel.MultiDataModel(name, model_data_prefix, model=None, image_uri=None, role=None, sagemaker_session=None, **kwargs)

Bases: sagemaker.model.Model

SageMaker MultiDataModel can be used to deploy multiple models to the same Endpoint, and also to deploy additional models to an existing SageMaker multi-model Endpoint.

Initialize a MultiDataModel.

In addition to these arguments, it supports all arguments supported by the Model constructor.

  • name (str) – The model name.

  • model_data_prefix (str) – The S3 prefix where all the model artifacts (.tar.gz) in a multi-model endpoint are located.

  • model (sagemaker.Model) – The Model object that defines the SageMaker model attributes such as vpc_config, predictors, etc. If present, the attributes of this model are used when deploying the MultiDataModel. The parameters ‘image_uri’, ‘role’ and ‘kwargs’ are not permitted when the model parameter is set.

  • image_uri (str or PipelineVariable) – A Docker image URI. It can be null if the ‘model’ parameter is passed during MultiDataModel initialization (default: None).

  • role (str) – An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. After the endpoint is created, the inference code might use the IAM role if it needs to access some AWS resources. It can be null if this is being used to create a Model to pass to a PipelineModel, which has its own Role field, or if the ‘model’ parameter is passed during MultiDataModel initialization (default: None).

  • sagemaker_session (sagemaker.session.Session) – A SageMaker Session object, used for SageMaker interactions (default: None). If not specified, one is created using the default AWS configuration chain.

  • **kwargs – Keyword arguments passed to the Model initializer.


You can find additional parameters for initializing this class at Model.
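A multi-model endpoint serves every archive stored under model_data_prefix, so each model is addressed by its .tar.gz path relative to that prefix. The layout can be sketched with plain string handling; resolve_artifact_uri below is a hypothetical helper for illustration, not part of the SDK:

```python
# Sketch of the S3 layout a MultiDataModel relies on (assumption: every
# model artifact lives under model_data_prefix as <name>.tar.gz).
def resolve_artifact_uri(model_data_prefix: str, model_name: str) -> str:
    """Return the S3 URI the endpoint would load for model_name (illustrative)."""
    prefix = model_data_prefix if model_data_prefix.endswith("/") else model_data_prefix + "/"
    return f"{prefix}{model_name}.tar.gz"

print(resolve_artifact_uri("s3://my-bucket/models", "model-a"))
# → s3://my-bucket/models/model-a.tar.gz
```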

prepare_container_def(instance_type=None, accelerator_type=None, serverless_inference_config=None, accept_eula=None)

Return a container definition set.

The definition set includes MultiModel mode, the model data, and other parameters from the model (if available).

Subclasses can override this to provide custom container definitions for deployment to a specific instance type. Called by deploy().


Returns

A complete container definition object usable with the CreateModel API

Return type

dict[str, str]

deploy(initial_instance_count, instance_type, serializer=None, deserializer=None, accelerator_type=None, endpoint_name=None, tags=None, kms_key=None, wait=True, data_capture_config=None, **kwargs)

Deploy this Model to an Endpoint and optionally return a Predictor.

Create a SageMaker Model and EndpointConfig, and deploy an Endpoint from this Model. If self.model is not None, the Endpoint is deployed with the parameters in self.model (such as vpc_config and enable_network_isolation). If self.model is None, the parameters in the MultiDataModel constructor are used. If self.predictor_cls is not None, this method returns the result of invoking self.predictor_cls on the created endpoint name.

The name of the created model is accessible in the name field of this Model after deploy returns.

The name of the created endpoint is accessible in the endpoint_name field of this Model after deploy returns.

  • initial_instance_count (int) – The initial number of instances to run in the Endpoint created from this Model.

  • instance_type (str) – The EC2 instance type to deploy this Model to. For example, ‘ml.p2.xlarge’, or ‘local’ for local mode.

  • serializer (BaseSerializer) – A serializer object, used to encode data for an inference endpoint (default: None). If serializer is not None, then serializer will override the default serializer. The default serializer is set by the predictor_cls.

  • deserializer (BaseDeserializer) – A deserializer object, used to decode data from an inference endpoint (default: None). If deserializer is not None, then deserializer will override the default deserializer. The default deserializer is set by the predictor_cls.

  • accelerator_type (str) – Type of Elastic Inference accelerator to deploy this model for model loading and inference, for example, ‘ml.eia1.medium’. If not specified, no Elastic Inference accelerator will be attached to the endpoint.

  • endpoint_name (str) – The name of the endpoint to create (default: None). If not specified, a unique endpoint name will be created.

  • tags (List[dict[str, str]]) – The list of tags to attach to this specific endpoint.

  • kms_key (str) – The ARN of the KMS key that is used to encrypt the data on the storage volume attached to the instance hosting the endpoint.

  • wait (bool) – Whether the call should wait until the deployment of this model completes (default: True).

  • data_capture_config (sagemaker.model_monitor.DataCaptureConfig) – Specifies configuration related to Endpoint data capture for use with Amazon SageMaker Model Monitoring. Default: None.


Returns

Invocation of self.predictor_cls on the created endpoint name, if self.predictor_cls is not None. Otherwise, None.

Return type

callable[string, sagemaker.session.Session] or None
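After deployment, a specific model on the endpoint is selected per request through the TargetModel field of the SageMaker Runtime InvokeEndpoint API, set to the archive path relative to model_data_prefix. A minimal sketch of the request arguments, where build_invoke_args is a hypothetical helper (not an SDK function) assembling the fields a client would send:

```python
def build_invoke_args(endpoint_name: str, target_model: str, payload: bytes) -> dict:
    """Assemble keyword arguments for a sagemaker-runtime InvokeEndpoint call (sketch).

    TargetModel is the artifact path relative to model_data_prefix; this is
    how a multi-model endpoint knows which archive to load and invoke.
    """
    return {
        "EndpointName": endpoint_name,
        "TargetModel": target_model,   # e.g. "model-a.tar.gz"
        "ContentType": "application/json",
        "Body": payload,
    }

args = build_invoke_args("my-mme-endpoint", "model-a.tar.gz", b'{"x": 1}')
print(args["TargetModel"])
# → model-a.tar.gz
```

These arguments match what a boto3 sagemaker-runtime client would receive; the endpoint name and payload shown here are placeholders.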

add_model(model_data_source, model_data_path=None)

Adds a model to the MultiDataModel.

This is done by uploading or copying the model_data_source artifact to the S3 path model_data_path, relative to model_data_prefix.

  • model_data_source – Valid local file path or S3 path of the trained model artifact.

  • model_data_path – S3 path, relative to self.model_data_prefix, where the trained model artifact should be uploaded (default: None). If None, the model artifact is uploaded under model_data_prefix with its original file name.


Returns

S3 URI of the uploaded model artifact

Return type

str
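The destination key computed by add_model can be sketched as follows. destination_uri is a hypothetical illustration of the documented behavior (use model_data_path when given, otherwise the artifact's file name), not the SDK implementation:

```python
import posixpath
from typing import Optional

def destination_uri(model_data_prefix: str, model_data_source: str,
                    model_data_path: Optional[str] = None) -> str:
    """Compute where a model artifact would land under model_data_prefix (sketch)."""
    # Fall back to the source file name when no relative path is given.
    relative = model_data_path or posixpath.basename(model_data_source)
    prefix = model_data_prefix.rstrip("/")
    return f"{prefix}/{relative}"

print(destination_uri("s3://my-bucket/models/", "/tmp/model-a.tar.gz"))
# → s3://my-bucket/models/model-a.tar.gz
print(destination_uri("s3://my-bucket/models/", "/tmp/model-a.tar.gz", "team1/model-a.tar.gz"))
# → s3://my-bucket/models/team1/model-a.tar.gz
```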

list_models()

Generates and returns relative paths to model archives.

Archives are stored at model_data_prefix S3 location.

Yields: Paths to model archives relative to model_data_prefix path.
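The generator's behavior can be mimicked over a plain list of S3 keys. list_models_under is a hypothetical stand-in (not the SDK method) that yields archive paths relative to the prefix:

```python
def list_models_under(model_data_prefix: str, keys):
    """Yield model archive paths relative to model_data_prefix (sketch)."""
    prefix = model_data_prefix if model_data_prefix.endswith("/") else model_data_prefix + "/"
    for key in keys:
        # Only .tar.gz archives under the prefix count as model artifacts.
        if key.startswith(prefix) and key.endswith(".tar.gz"):
            yield key[len(prefix):]

keys = [
    "s3://my-bucket/models/model-a.tar.gz",
    "s3://my-bucket/models/team1/model-b.tar.gz",
    "s3://my-bucket/models/readme.txt",
]
print(list(list_models_under("s3://my-bucket/models", keys)))
# → ['model-a.tar.gz', 'team1/model-b.tar.gz']
```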