PipelineModel

class sagemaker.pipeline.PipelineModel(models, role, predictor_cls=None, name=None, vpc_config=None, sagemaker_session=None)

Bases: object

A pipeline of SageMaker Model objects that can be deployed to an Endpoint.

Initialize a SageMaker Model that can be used to build an Inference Pipeline comprising multiple model containers.

Parameters:
  • models (list[sagemaker.Model]) – To build an inference pipeline with multiple containers, pass a list of sagemaker.Model objects in the order you want the inference to happen.
  • role (str) – An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. After the endpoint is created, the inference code might use the IAM role, if it needs to access an AWS resource.
  • predictor_cls (callable[string, sagemaker.session.Session]) – A function to call to create a predictor (default: None). If not None, deploy will return the result of invoking this function on the created endpoint name.
  • name (str) – The model name. If None, a default model name will be selected on each deploy.
  • vpc_config (dict[str, list[str]]) – The VpcConfig set on the model (default: None) * ‘Subnets’ (list[str]): List of subnet ids. * ‘SecurityGroupIds’ (list[str]): List of security group ids.
  • sagemaker_session (sagemaker.session.Session) – A SageMaker Session object, used for SageMaker interactions (default: None). If not specified, one is created using the default AWS configuration chain.
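
As a minimal sketch of constructing a pipeline model: sparkml_model and xgb_model below are placeholders for sagemaker.Model objects (or framework Model subclasses) you have already created, and the role ARN is a placeholder for your own IAM role:

    import sagemaker
    from sagemaker.pipeline import PipelineModel

    session = sagemaker.Session()

    # sparkml_model and xgb_model are assumed to exist already; inference
    # requests flow through the containers in list order.
    pipeline_model = PipelineModel(
        models=[sparkml_model, xgb_model],
        role="arn:aws:iam::111122223333:role/SageMakerRole",  # placeholder role ARN
        name="my-inference-pipeline",  # optional; a default name is chosen per deploy if omitted
        sagemaker_session=session,
    )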
pipeline_container_def(instance_type)
Return a list of container definitions created by sagemaker.pipeline_container_def() for deploying this model to a specified instance type.

Subclasses can override this to provide custom container definitions for deployment to a specific instance type. Called by deploy().

Parameters:instance_type (str) – The EC2 instance type to deploy this Model to. For example, ‘ml.p2.xlarge’.
Returns:A list of container definition objects usable with the CreateModel API in the scenario of multiple containers (Inference Pipeline).
Return type:list[dict[str, str]]
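
A short sketch, continuing with the pipeline_model object from the constructor example above; the exact keys present in each container definition depend on the underlying models:

    # Each entry corresponds to one model container, in pipeline order, and is
    # usable with the Containers field of the CreateModel API.
    container_defs = pipeline_model.pipeline_container_def("ml.p2.xlarge")
    for index, container in enumerate(container_defs):
        print(index, container.get("Image"), container.get("ModelDataUrl"))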
deploy(initial_instance_count, instance_type, endpoint_name=None, tags=None)

Deploy this Model to an Endpoint and optionally return a Predictor.

Create a SageMaker Model and EndpointConfig, and deploy an Endpoint from this Model. If self.predictor_cls is not None, this method returns the result of invoking self.predictor_cls on the created endpoint name.

The name of the created model is accessible in the name field of this Model after deploy returns.

The name of the created endpoint is accessible in the endpoint_name field of this Model after deploy returns.

Parameters:
  • instance_type (str) – The EC2 instance type to deploy this Model to. For example, ‘ml.p2.xlarge’.
  • initial_instance_count (int) – The initial number of instances to run in the Endpoint created from this Model.
  • endpoint_name (str) – The name of the endpoint to create (default: None). If not specified, a unique endpoint name will be created.
  • tags (List[dict[str, str]]) – The list of tags to attach to this specific endpoint.
Returns:Invocation of self.predictor_cls on the created endpoint name, if self.predictor_cls is not None. Otherwise, None.
Return type:callable[string, sagemaker.session.Session] or None
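
A hedged end-to-end sketch of deploying the pipeline model and reading back the created resource names; the endpoint name and tag values are placeholders:

    # Creates a SageMaker Model, EndpointConfig, and Endpoint for the pipeline.
    predictor = pipeline_model.deploy(
        initial_instance_count=1,
        instance_type="ml.p2.xlarge",
        endpoint_name="my-inference-pipeline-endpoint",  # placeholder name
        tags=[{"Key": "project", "Value": "demo"}],      # placeholder tag
    )

    # After deploy returns, the created model and endpoint names are available
    # on the PipelineModel itself.
    print(pipeline_model.name)
    print(pipeline_model.endpoint_name)

    # predictor is the result of predictor_cls(endpoint_name, sagemaker_session)
    # if predictor_cls was supplied to the constructor; otherwise it is None.
    if predictor is not None:
        print(type(predictor))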