Transformer

class sagemaker.transformer.Transformer(model_name, instance_count, instance_type, strategy=None, assemble_with=None, output_path=None, output_kms_key=None, accept=None, max_concurrent_transforms=None, max_payload=None, tags=None, env=None, base_transform_job_name=None, sagemaker_session=None, volume_kms_key=None)

Bases: object

A class for creating and interacting with Amazon SageMaker transform jobs.

Initialize a Transformer.

Parameters
  • model_name (str or PipelineVariable) – Name of the SageMaker model being used for the transform job.

  • instance_count (int or PipelineVariable) – Number of EC2 instances to use.

  • instance_type (str or PipelineVariable) – Type of EC2 instance to use, for example, ‘ml.c4.xlarge’.

  • strategy (str or PipelineVariable) – The strategy used to decide how to batch records in a single request (default: None). Valid values: ‘MultiRecord’ and ‘SingleRecord’.

  • assemble_with (str or PipelineVariable) – How the output is assembled (default: None). Valid values: ‘Line’ or ‘None’.

  • output_path (str or PipelineVariable) – S3 location for saving the transform result. If not specified, results are stored to a default bucket.

  • output_kms_key (str or PipelineVariable) – Optional. KMS key ID for encrypting the transform output (default: None).

  • accept (str or PipelineVariable) – The accept header passed by the client to the inference endpoint. If it is supported by the endpoint, it will be the format of the batch transform output.

  • max_concurrent_transforms (int or PipelineVariable) – The maximum number of HTTP requests to be made to each individual transform container at one time.

  • max_payload (int or PipelineVariable) – Maximum size of the payload in a single HTTP request to the container in MB.

  • tags (Optional[Tags]) – Tags for labeling a transform job (default: None). For more, see the SageMaker API documentation for Tag.

  • env (dict[str, str] or dict[str, PipelineVariable]) – Environment variables to be set for use during the transform job (default: None).

  • base_transform_job_name (str) – Prefix for the transform job when the transform() method launches. If not specified, a default prefix will be generated based on the training image name that was used to train the model associated with the transform job.

  • sagemaker_session (sagemaker.session.Session) – Session object which manages interactions with Amazon SageMaker APIs and any other AWS services needed. If not specified, one will be created using the default AWS configuration chain.

  • volume_kms_key (str or PipelineVariable) – Optional. KMS key ID for encrypting the volume attached to the ML compute instance (default: None).
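
For example, a Transformer for an existing model might be constructed as follows (a minimal sketch; the model name and S3 bucket are placeholders):

    from sagemaker.transformer import Transformer

    # Minimal sketch; "my-model" and the S3 bucket below are placeholders.
    transformer = Transformer(
        model_name="my-model",            # an existing SageMaker model
        instance_count=1,
        instance_type="ml.c4.xlarge",
        strategy="MultiRecord",           # batch several records per HTTP request
        assemble_with="Line",             # newline-delimited output records
        output_path="s3://my-bucket/transform-output",
    )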

JOB_CLASS_NAME = 'transform-job'
transform(data, data_type='S3Prefix', content_type=None, compression_type=None, split_type=None, job_name=None, input_filter=None, output_filter=None, join_source=None, experiment_config=None, model_client_config=None, batch_data_capture_config=None, wait=True, logs=True)

Start a new transform job.

Parameters
  • data (str or PipelineVariable) – Input data location in S3.

  • data_type (str or PipelineVariable) – What the S3 location defines (default: ‘S3Prefix’). Valid values:

    • ‘S3Prefix’ - the S3 URI defines a key name prefix. All objects with this prefix will be used as inputs for the transform job.

    • ‘ManifestFile’ - the S3 URI points to a single manifest file listing each S3 object to use as an input for the transform job.

  • content_type (str or PipelineVariable) – MIME type of the input data (default: None).

  • compression_type (str or PipelineVariable) – Compression type of the input data, if compressed (default: None). Valid values: ‘Gzip’, None.

  • split_type (str or PipelineVariable) – The record delimiter for the input object (default: ‘None’). Valid values: ‘None’, ‘Line’, ‘RecordIO’, and ‘TFRecord’.

  • job_name (str) – Transform job name (default: None). If not specified, one will be generated.

  • input_filter (str or PipelineVariable) – A JSONPath to select a portion of the input to pass to the algorithm container for inference. If you omit the field, it gets the value ‘$’, representing the entire input. For CSV data, each row is taken as a JSON array, so only index-based JSONPaths can be applied, e.g. $[0], $[1:]. CSV data should follow the RFC format. See Supported JSONPath Operators for a table of supported JSONPath operators. For more information, see the SageMaker API documentation for CreateTransformJob. Some examples: “$[1:]”, “$.features” (default: None).

  • output_filter (str or PipelineVariable) – A JSONPath to select a portion of the joined/original output to return as the output. For more information, see the SageMaker API documentation for CreateTransformJob. Some examples: “$[1:]”, “$.prediction” (default: None).

  • join_source (str or PipelineVariable) – The source of data to be joined to the transform output. It can be set to ‘Input’ meaning the entire input record will be joined to the inference result. You can use OutputFilter to select the useful portion before uploading to S3. (default: None). Valid values: Input, None.

  • experiment_config (dict[str, str]) – Experiment management configuration. Optionally, the dict can contain three keys: ‘ExperimentName’, ‘TrialName’, and ‘TrialComponentDisplayName’. The behavior of setting these keys is as follows:

    • If ExperimentName is supplied but TrialName is not, a Trial will be automatically created and the job’s Trial Component will be associated with the Trial.

    • If TrialName is supplied and the Trial already exists, the job’s Trial Component will be associated with the Trial.

    • If neither ExperimentName nor TrialName is supplied, the Trial Component will be unassociated.

    • TrialComponentDisplayName is used for display in Studio.

    • Both ExperimentName and TrialName will be ignored if the Transformer instance is built with PipelineSession. However, the value of TrialComponentDisplayName is honored for display in Studio.

  • model_client_config (dict[str, str] or dict[str, PipelineVariable]) – Model configuration. Dictionary contains two optional keys, ‘InvocationsTimeoutInSeconds’, and ‘InvocationsMaxRetries’. (default: None).

  • batch_data_capture_config (BatchDataCaptureConfig) – Configuration object which specifies the configurations related to the batch data capture for the transform job (default: None).

  • wait (bool) – Whether the call should wait until the job completes (default: True).

  • logs (bool) – Whether to show the logs produced by the job. Only meaningful when wait is True (default: True).

Returns

None or pipeline step arguments in case the Transformer instance is built with PipelineSession
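
For instance, a CSV batch transform over an S3 prefix using the transformer constructed above might look like the following sketch (the S3 URI is a placeholder, and the filters assume the first CSV column is an identifier the model should not receive):

    # Sketch only; the S3 prefix is a placeholder.
    transformer.transform(
        data="s3://my-bucket/transform-input",  # S3 prefix of input objects
        data_type="S3Prefix",
        content_type="text/csv",
        split_type="Line",       # treat each line as one record
        input_filter="$[1:]",    # pass every column except the first to the model
        join_source="Input",     # join each input record with its prediction
        wait=True,
        logs=True,
    )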

transform_with_monitoring(monitoring_config, monitoring_resource_config, data, data_type='S3Prefix', content_type=None, compression_type=None, split_type=None, input_filter=None, output_filter=None, join_source=None, model_client_config=None, batch_data_capture_config=None, monitor_before_transform=False, supplied_baseline_statistics=None, supplied_baseline_constraints=None, wait=True, pipeline_name=None, role=None, fail_on_violation=True)

Run a transform job with a monitoring job.

Note that this function does not start a transform job immediately; instead, it creates a SageMaker Pipeline and executes it. If you provide an existing pipeline_name, no new pipeline will be created; otherwise, each transform_with_monitoring call creates a new pipeline and executes it.

Parameters
  • monitoring_config (Union[sagemaker.workflow.quality_check_step.QualityCheckConfig, sagemaker.workflow.quality_check_step.ClarifyCheckConfig]) – the monitoring configuration used for running model monitoring.

  • monitoring_resource_config (sagemaker.workflow.check_job_config.CheckJobConfig) – the check job (processing job) cluster resource configuration.

  • data (str) – Input data location in S3 for the transform job.

  • data_type (str) – What the S3 location defines (default: ‘S3Prefix’). Valid values:

    • ‘S3Prefix’ - the S3 URI defines a key name prefix. All objects with this prefix will be used as inputs for the transform job.

    • ‘ManifestFile’ - the S3 URI points to a single manifest file listing each S3 object to use as an input for the transform job.

  • content_type (str) – MIME type of the input data (default: None).

  • compression_type (str) – Compression type of the input data, if compressed (default: None). Valid values: ‘Gzip’, None.

  • split_type (str) – The record delimiter for the input object (default: ‘None’). Valid values: ‘None’, ‘Line’, ‘RecordIO’, and ‘TFRecord’.

  • input_filter (str) – A JSONPath to select a portion of the input to pass to the algorithm container for inference. If you omit the field, it gets the value ‘$’, representing the entire input. For CSV data, each row is taken as a JSON array, so only index-based JSONPaths can be applied, e.g. $[0], $[1:]. CSV data should follow the RFC format. See Supported JSONPath Operators for a table of supported JSONPath operators. For more information, see the SageMaker API documentation for CreateTransformJob. Some examples: “$[1:]”, “$.features” (default: None).

  • output_filter (str) – A JSONPath to select a portion of the joined/original output to return as the output. For more information, see the SageMaker API documentation for CreateTransformJob. Some examples: “$[1:]”, “$.prediction” (default: None).

  • join_source (str) – The source of data to be joined to the transform output. It can be set to ‘Input’ meaning the entire input record will be joined to the inference result. You can use OutputFilter to select the useful portion before uploading to S3. (default: None). Valid values: Input, None.

  • model_client_config (dict[str, str]) – Model configuration. Dictionary contains two optional keys, ‘InvocationsTimeoutInSeconds’, and ‘InvocationsMaxRetries’. (default: None).

  • batch_data_capture_config (BatchDataCaptureConfig) – Configuration object which specifies the configurations related to the batch data capture for the transform job (default: None).

  • monitor_before_transform (bool) – If running a data quality or model explainability monitoring type, a true value of this flag indicates running the check step before the transform job.

  • supplied_baseline_statistics (Union[str, PipelineVariable]) – The S3 path to the supplied statistics object representing the statistics JSON file, which will be used for the drift check (default: None).

  • supplied_baseline_constraints (Union[str, PipelineVariable]) – The S3 path to the supplied constraints object representing the constraints JSON file, which will be used for the drift check (default: None).

  • wait (bool) – Whether to wait for the pipeline execution to complete (default: True).

  • pipeline_name (str) – The name of the Pipeline for the monitoring and transform steps.

  • role (str) – Execution role.

  • fail_on_violation (Union[bool, PipelineVariable]) – An opt-out flag; when set to False, the check step will not fail when a violation is detected (default: True).
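
A hedged sketch of a data quality check followed by the transform job; the S3 paths, role ARN, and instance sizing are assumptions, and DataQualityCheckConfig is one of the QualityCheckConfig subclasses this parameter accepts:

    from sagemaker.model_monitor import DatasetFormat
    from sagemaker.workflow.check_job_config import CheckJobConfig
    from sagemaker.workflow.quality_check_step import DataQualityCheckConfig

    # Cluster resources for the check (processing) job; sizes are assumptions.
    check_job_config = CheckJobConfig(
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
        instance_count=1,
        instance_type="ml.m5.xlarge",
        volume_size_in_gb=30,
    )
    # Data quality monitoring configuration; S3 paths are placeholders.
    data_quality_config = DataQualityCheckConfig(
        baseline_dataset="s3://my-bucket/baseline/data.csv",
        dataset_format=DatasetFormat.csv(header=False),
        output_s3_uri="s3://my-bucket/monitoring-output",
    )
    transformer.transform_with_monitoring(
        monitoring_config=data_quality_config,
        monitoring_resource_config=check_job_config,
        data="s3://my-bucket/transform-input",
        content_type="text/csv",
        split_type="Line",
        monitor_before_transform=True,  # check data quality before transforming
        supplied_baseline_statistics="s3://my-bucket/baseline/statistics.json",
        supplied_baseline_constraints="s3://my-bucket/baseline/constraints.json",
        role="arn:aws:iam::123456789012:role/SageMakerRole",
    )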

delete_model()

Delete the corresponding SageMaker model for this Transformer.

wait(logs=True)

Wait for the latest transform job to complete, showing its logs if logs is True.

stop_transform_job(wait=True)

Stop the latest running batch transform job.
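
For example, one pattern (a sketch reusing the hypothetical transformer from above) is to launch the job asynchronously and then block on it, or stop it, later:

    # Launch without blocking, then wait on (or stop) the latest job.
    transformer.transform(data="s3://my-bucket/transform-input", wait=False)
    transformer.wait(logs=True)         # block until the job finishes
    # transformer.stop_transform_job()  # or stop it before completion instead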

classmethod attach(transform_job_name, sagemaker_session=None)

Attach an existing transform job to a new Transformer instance.

Parameters
  • transform_job_name (str) – Name for the transform job to be attached.

  • sagemaker_session (sagemaker.session.Session) – Session object which manages interactions with Amazon SageMaker APIs and any other AWS services needed. If not specified, one will be created using the default AWS configuration chain.

Returns

The Transformer instance with the specified transform job attached.

Return type

sagemaker.transformer.Transformer
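
A short sketch; the job name is a placeholder for one returned by a previous transform() call:

    from sagemaker.transformer import Transformer

    # Reattach to a job started elsewhere; the job name is hypothetical.
    transformer = Transformer.attach(
        transform_job_name="my-model-2024-01-01-00-00-00-000"
    )
    transformer.wait()           # block until the attached job completes
    transformer.delete_model()   # optionally delete the backing model afterwards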