class sagemaker.transformer.Transformer(model_name: Union[str, sagemaker.workflow.entities.PipelineVariable], instance_count: Union[int, sagemaker.workflow.entities.PipelineVariable], instance_type: Union[str, sagemaker.workflow.entities.PipelineVariable], strategy: Optional[Union[str, sagemaker.workflow.entities.PipelineVariable]] = None, assemble_with: Optional[Union[str, sagemaker.workflow.entities.PipelineVariable]] = None, output_path: Optional[Union[str, sagemaker.workflow.entities.PipelineVariable]] = None, output_kms_key: Optional[Union[str, sagemaker.workflow.entities.PipelineVariable]] = None, accept: Optional[Union[str, sagemaker.workflow.entities.PipelineVariable]] = None, max_concurrent_transforms: Optional[Union[int, sagemaker.workflow.entities.PipelineVariable]] = None, max_payload: Optional[Union[int, sagemaker.workflow.entities.PipelineVariable]] = None, tags: Optional[List[Dict[str, Union[str, sagemaker.workflow.entities.PipelineVariable]]]] = None, env: Optional[Dict[str, Union[str, sagemaker.workflow.entities.PipelineVariable]]] = None, base_transform_job_name: Optional[str] = None, sagemaker_session: Optional[sagemaker.session.Session] = None, volume_kms_key: Optional[Union[str, sagemaker.workflow.entities.PipelineVariable]] = None)

Bases: object

A class for creating and interacting with Amazon SageMaker transform jobs.

Initialize a Transformer.

  • model_name (str) – Name of the SageMaker model being used for the transform job.

  • instance_count (int) – Number of EC2 instances to use.

  • instance_type (str) – Type of EC2 instance to use, for example, ‘ml.c4.xlarge’.

  • strategy (str) – The strategy used to decide how to batch records in a single request (default: None). Valid values: ‘MultiRecord’ and ‘SingleRecord’.

  • assemble_with (str) – How the output is assembled (default: None). Valid values: ‘Line’ or ‘None’.

  • output_path (str) – S3 location for saving the transform result. If not specified, results are stored to a default bucket.

  • output_kms_key (str) – Optional. KMS key ID for encrypting the transform output (default: None).

  • accept (str) – The accept header passed by the client to the inference endpoint. If it is supported by the endpoint, it will be the format of the batch transform output.

  • max_concurrent_transforms (int) – The maximum number of HTTP requests to be made to each individual transform container at one time.

  • max_payload (int) – Maximum size of the payload in a single HTTP request to the container in MB.

  • tags (list[dict]) – List of tags for labeling a transform job (default: None). For more, see the SageMaker API documentation for Tag.

  • env (dict) – Environment variables to be set for use during the transform job (default: None).

  • base_transform_job_name (str) – Prefix for the transform job when the transform() method launches. If not specified, a default prefix will be generated based on the training image name that was used to train the model associated with the transform job.

  • sagemaker_session (sagemaker.session.Session) – Session object which manages interactions with Amazon SageMaker APIs and any other AWS services needed. If not specified, one will be created using the default AWS configuration chain.

  • volume_kms_key (str) – Optional. KMS key ID for encrypting the volume attached to the ML compute instance (default: None).
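As an illustration of the parameters above, a Transformer can be constructed directly from an existing model. This is a minimal sketch; the model name and S3 bucket below are hypothetical placeholders, not values from this documentation:

```python
from sagemaker.transformer import Transformer

# Hypothetical model name and output bucket; substitute your own resources.
transformer = Transformer(
    model_name="my-trained-model",              # an existing SageMaker model (assumption)
    instance_count=1,
    instance_type="ml.c4.xlarge",
    strategy="MultiRecord",                     # batch multiple records per request
    assemble_with="Line",                       # newline-delimited output
    output_path="s3://my-bucket/transform-output",
    max_payload=6,                              # MB per HTTP request
)
```

Running this (and the job itself) requires valid AWS credentials and an existing model, so it is a usage sketch rather than a self-verifying example.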

JOB_CLASS_NAME = 'transform-job'
transform(data: Union[str, sagemaker.workflow.entities.PipelineVariable], data_type: Union[str, sagemaker.workflow.entities.PipelineVariable] = 'S3Prefix', content_type: Optional[Union[str, sagemaker.workflow.entities.PipelineVariable]] = None, compression_type: Optional[Union[str, sagemaker.workflow.entities.PipelineVariable]] = None, split_type: Optional[Union[str, sagemaker.workflow.entities.PipelineVariable]] = None, job_name: Optional[str] = None, input_filter: Optional[Union[str, sagemaker.workflow.entities.PipelineVariable]] = None, output_filter: Optional[Union[str, sagemaker.workflow.entities.PipelineVariable]] = None, join_source: Optional[Union[str, sagemaker.workflow.entities.PipelineVariable]] = None, experiment_config: Optional[Dict[str, str]] = None, model_client_config: Optional[Dict[str, Union[str, sagemaker.workflow.entities.PipelineVariable]]] = None, wait: bool = True, logs: bool = True)

Start a new transform job.

  • data (str) – Input data location in S3.

  • data_type (str) –

    What the S3 location defines (default: ‘S3Prefix’). Valid values:

    • ‘S3Prefix’ - the S3 URI defines a key name prefix. All objects with this prefix will be used as inputs for the transform job.

    • ‘ManifestFile’ - the S3 URI points to a single manifest file listing each S3 object to use as an input for the transform job.

  • content_type (str) – MIME type of the input data (default: None).

  • compression_type (str) – Compression type of the input data, if compressed (default: None). Valid values: ‘Gzip’, None.

  • split_type (str) – The record delimiter for the input object (default: ‘None’). Valid values: ‘None’, ‘Line’, ‘RecordIO’, and ‘TFRecord’.

  • job_name (str) – Transform job name (default: None). If not specified, one will be generated.

  • input_filter (str) – A JSONPath to select a portion of the input to pass to the algorithm container for inference. If you omit the field, it gets the value ‘$’, representing the entire input. For CSV data, each row is taken as a JSON array, so only index-based JSONPaths can be applied, e.g. $[0], $[1:]. CSV data should follow the RFC format. See Supported JSONPath Operators for a table of supported JSONPath operators. For more information, see the SageMaker API documentation for CreateTransformJob. Some examples: “$[1:]”, “$.features” (default: None).

  • output_filter (str) –

    A JSONPath to select a portion of the joined/original output to return as the output. For more information, see the SageMaker API documentation for CreateTransformJob. Some examples: “$[1:]”, “$.prediction” (default: None).

  • join_source (str) – The source of data to be joined to the transform output. It can be set to ‘Input’ meaning the entire input record will be joined to the inference result. You can use OutputFilter to select the useful portion before uploading to S3. (default: None). Valid values: Input, None.

  • experiment_config (dict[str, str]) – Experiment management configuration. Optionally, the dict can contain three keys: ‘ExperimentName’, ‘TrialName’, and ‘TrialComponentDisplayName’. The behavior of setting these keys is as follows:

    • If ExperimentName is supplied but TrialName is not, a Trial will be automatically created and the job’s Trial Component associated with the Trial.

    • If TrialName is supplied and the Trial already exists, the job’s Trial Component will be associated with the Trial.

    • If neither ExperimentName nor TrialName is supplied, the Trial Component will be unassociated.

    • TrialComponentDisplayName is used for display in Studio.

    • Both ExperimentName and TrialName will be ignored if the Transformer instance is built with PipelineSession. However, the value of TrialComponentDisplayName is honored for display in Studio.

  • model_client_config (dict[str, str]) – Model configuration. Dictionary contains two optional keys, ‘InvocationsTimeoutInSeconds’, and ‘InvocationsMaxRetries’. (default: None).

  • wait (bool) – Whether the call should wait until the job completes (default: True).

  • logs (bool) – Whether to show the logs produced by the job. Only meaningful when wait is True (default: True).


Returns

None, or pipeline step arguments in case the Transformer instance is built with PipelineSession.
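A minimal invocation sketch of transform(), assuming a Transformer has already been constructed and the input S3 prefix below is a hypothetical placeholder:

```python
# `transformer` is an already-constructed Transformer instance (assumption).
transformer.transform(
    data="s3://my-bucket/batch-input",   # placeholder S3 prefix
    data_type="S3Prefix",
    content_type="text/csv",
    split_type="Line",                   # treat each line of each object as one record
    wait=True,                           # block until the job completes
    logs=True,                           # stream job logs while waiting
)
```

With wait=False the call returns immediately and the job continues in the background; the results land under the Transformer's output_path.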


delete_model()

Delete the corresponding SageMaker model for this Transformer.



stop_transform_job(wait=True)

Stop the latest running batch transform job.

classmethod attach(transform_job_name, sagemaker_session=None)

Attach an existing transform job to a new Transformer instance.

  • transform_job_name (str) – Name for the transform job to be attached.

  • sagemaker_session (sagemaker.session.Session) – Session object which manages interactions with Amazon SageMaker APIs and any other AWS services needed. If not specified, one will be created using the default AWS configuration chain.


Returns

The Transformer instance with the specified transform job attached.

Return type

sagemaker.transformer.Transformer
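For example, to reconnect to a previously started job from a fresh session (the job name below is a hypothetical placeholder):

```python
from sagemaker.transformer import Transformer

# Hypothetical name of an existing transform job; substitute a real one.
transformer = Transformer.attach("my-transform-job")

# The attached instance exposes the usual methods, e.g. stopping the job
# if it is still running:
transformer.stop_transform_job()
```

This requires valid AWS credentials and an existing transform job, so it is a usage sketch rather than a self-verifying example.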