Algorithm Estimator

class sagemaker.algorithm.AlgorithmEstimator(algorithm_arn, role=None, instance_count=None, instance_type=None, volume_size=30, volume_kms_key=None, max_run=86400, input_mode='File', output_path=None, output_kms_key=None, base_job_name=None, sagemaker_session=None, hyperparameters=None, tags=None, subnets=None, security_group_ids=None, model_uri=None, model_channel_name='model', metric_definitions=None, encrypt_inter_container_traffic=False, use_spot_instances=False, max_wait=None, **kwargs)

Bases: EstimatorBase

A generic Estimator to train using any algorithm object (with an algorithm_arn).

The Algorithm can be your own, or any Algorithm from the AWS Marketplace for which you have a valid subscription. This class performs client-side validation on all of its inputs.

Initialize an AlgorithmEstimator instance.

Parameters
  • algorithm_arn (str) – algorithm arn used for training. Can be just the name if your account owns the algorithm.

  • role (str) – An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. After the endpoint is created, the inference code might use the IAM role, if it needs to access an AWS resource.

  • instance_count (int or PipelineVariable) – Number of Amazon EC2 instances to use for training.

  • instance_type (str or PipelineVariable) – Type of EC2 instance to use for training, for example, ‘ml.c4.xlarge’.

  • volume_size (int or PipelineVariable) – Size in GB of the EBS volume to use for storing input data during training (default: 30). Must be large enough to store training data if File Mode is used (which is the default).

  • volume_kms_key (str or PipelineVariable) – Optional. KMS key ID for encrypting EBS volume attached to the training instance (default: None).

  • max_run (int or PipelineVariable) – Timeout in seconds for training (default: 24 * 60 * 60). After this amount of time Amazon SageMaker terminates the job regardless of its current status.

  • input_mode (str or PipelineVariable) –

    The input mode that the algorithm supports (default: ‘File’). Valid modes:

    • ’File’ - Amazon SageMaker copies the training dataset from the S3 location to a local directory.

    • ’Pipe’ - Amazon SageMaker streams data directly from S3 to the container via a Unix-named pipe.

    This argument can be overridden on a per-channel basis using sagemaker.inputs.TrainingInput.input_mode.

  • output_path (str or PipelineVariable) – S3 location for saving the training result (model artifacts and output files). If not specified, results are stored to a default bucket. If the bucket with the specific name does not exist, the estimator creates the bucket during the fit() method execution.

  • output_kms_key (str or PipelineVariable) – Optional. KMS key ID for encrypting the training output (default: None).

  • base_job_name (str) – Prefix for the training job name when the fit() method launches. If not specified, the estimator generates a default job name based on the training image name and the current timestamp.

  • sagemaker_session (sagemaker.session.Session) – Session object which manages interactions with Amazon SageMaker APIs and any other AWS services needed. If not specified, the estimator creates one using the default AWS configuration chain.

  • tags (Union[Tags]) – Tags for labeling a training job. For more, see https://docs.aws.amazon.com/sagemaker/latest/dg/API_Tag.html.

  • subnets (list[str] or list[PipelineVariable]) – List of subnet ids. If not specified, the training job will be created without a VPC config.

  • security_group_ids (list[str] or list[PipelineVariable]) – List of security group ids. If not specified, the training job will be created without a VPC config.

  • model_uri (str) – URI where a pre-trained model is stored, either locally or in S3 (default: None). If specified, the estimator will create a channel pointing to the model so the training job can download it. This model can be a ‘model.tar.gz’ from a previous training job, or other artifacts coming from a different source. More information: https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-training.html#td-deserialization

  • model_channel_name (str or PipelineVariable) – Name of the channel where ‘model_uri’ will be downloaded (default: ‘model’).

  • metric_definitions (list[dict]) – A list of dictionaries that defines the metric(s) used to evaluate the training jobs. Each dictionary contains two keys: ‘Name’ for the name of the metric, and ‘Regex’ for the regular expression used to extract the metric from the logs.

  • encrypt_inter_container_traffic (bool or PipelineVariable) – Specifies whether traffic between training containers is encrypted for the training job (default: False).

  • use_spot_instances (bool or PipelineVariable) –

    Specifies whether to use SageMaker Managed Spot instances for training. If enabled then the max_wait arg should also be set.

    More information: https://docs.aws.amazon.com/sagemaker/latest/dg/model-managed-spot-training.html (default: False).

  • max_wait (int or PipelineVariable) – Timeout in seconds waiting for spot training instances (default: None). After this amount of time Amazon SageMaker will stop waiting for Spot instances to become available.

  • **kwargs – Additional kwargs. This is unused; it is only accepted so that AlgorithmEstimator can ignore irrelevant arguments.

  • hyperparameters (Optional[Dict[str, Union[str, PipelineVariable]]]) – Hyperparameters to use for training (default: None).

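A minimal end-to-end training sketch is shown below. The algorithm ARN, IAM role, and S3 locations are placeholders, and the channel name ‘training’ is an assumption that must match a channel defined by the algorithm’s training specification.

    import sagemaker
    from sagemaker.algorithm import AlgorithmEstimator
    from sagemaker.inputs import TrainingInput

    session = sagemaker.Session()

    estimator = AlgorithmEstimator(
        # Placeholder ARN; the bare algorithm name also works if your account owns it.
        algorithm_arn="arn:aws:sagemaker:us-east-1:111122223333:algorithm/my-algorithm",
        role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder role
        instance_count=1,
        instance_type="ml.c4.xlarge",
        sagemaker_session=session,
    )

    # A per-channel input mode can override the estimator-level input_mode.
    train_channel = TrainingInput(
        "s3://my-bucket/train/",  # placeholder S3 prefix
        content_type="text/csv",
        input_mode="File",
    )

    # The channel name must be one the algorithm declares; 'training' is assumed here.
    estimator.fit({"training": train_channel})
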
validate_train_spec()

Placeholder docstring

set_hyperparameters(**kwargs)

Placeholder docstring

hyperparameters()

Returns the hyperparameters as a dictionary to use for training.

The fit() method, which trains the model, calls this method to find the hyperparameters you specified.

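A minimal sketch of setting and reading hyperparameters, continuing from the estimator above and assuming the algorithm’s specification defines hyperparameters named ‘max_depth’ and ‘eta’ (hypothetical names):

    # Names and values are validated client-side against the algorithm's
    # hyperparameter specification; the names below are hypothetical.
    estimator.set_hyperparameters(max_depth=5, eta=0.2)

    # fit() calls this to collect the hyperparameters for the training job.
    print(estimator.hyperparameters())
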
training_image_uri()

Returns the docker image to use for training.

The fit() method, which trains the model, calls this method to find the image to use for model training.

enable_network_isolation()

Return True if this Estimator will need network isolation to run.

On Algorithm Estimators this depends on the algorithm being used. If the algorithm is owned by your account, this will be False. If the algorithm is consumed from the AWS Marketplace, this will be True.

Returns

Whether this Estimator needs network isolation or not.

Return type

bool

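Both training_image_uri() and enable_network_isolation() are derived from the algorithm’s specification rather than from constructor arguments; a quick way to inspect them, continuing from the estimator above:

    print(estimator.training_image_uri())        # container image used for training
    print(estimator.enable_network_isolation())  # True for Marketplace algorithms, False for your own
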
create_model(role=None, predictor_cls=None, serializer=<sagemaker.base_serializers.IdentitySerializer object>, deserializer=<sagemaker.base_deserializers.BytesDeserializer object>, vpc_config_override='VPC_CONFIG_DEFAULT', **kwargs)

Create a model to deploy.

The serializer and deserializer are only used to define a default Predictor. They are ignored if an explicit predictor class is passed in. Other arguments are passed through to the Model class.

Parameters
  • role (str) – The ExecutionRoleArn IAM Role ARN for the Model, which is also used during transform jobs. If not specified, the role from the Estimator will be used.

  • predictor_cls (Predictor) – The predictor class to use when deploying the model.

  • serializer (BaseSerializer) – A serializer object, used to encode data for an inference endpoint (default: IdentitySerializer).

  • deserializer (BaseDeserializer) – A deserializer object, used to decode data from an inference endpoint (default: BytesDeserializer).

  • vpc_config_override (dict[str, list[str]]) – Optional override for VpcConfig set on the model. Default: use subnets and security groups from this Estimator.

    • ‘Subnets’ (list[str]): List of subnet ids.

    • ‘SecurityGroupIds’ (list[str]): List of security group ids.

  • **kwargs – Additional arguments for creating a ModelPackage.

Tip

You can find additional parameters for using this method at ModelPackage and Model.

Returns

a Model ready for deployment.

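A minimal deployment sketch following a successful fit(); the instance type and endpoint name are placeholders, and the default serializer and deserializer are kept so the predictor exchanges raw bytes:

    # The estimator's role is reused when no role is passed in.
    model = estimator.create_model()

    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.m5.xlarge",
        endpoint_name="my-algorithm-endpoint",  # placeholder name
    )

    # With the default IdentitySerializer/BytesDeserializer, the request and
    # response payloads are passed through as raw bytes.
    response = predictor.predict(b"raw request payload")
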
transformer(instance_count, instance_type, strategy=None, assemble_with=None, output_path=None, output_kms_key=None, accept=None, env=None, max_concurrent_transforms=None, max_payload=None, tags=None, role=None, volume_kms_key=None)

Return a Transformer that uses a SageMaker Model based on the training job.

It reuses the SageMaker Session and base job name used by the Estimator.

Parameters
  • instance_count (int) – Number of EC2 instances to use.

  • instance_type (str) – Type of EC2 instance to use, for example, ‘ml.c4.xlarge’.

  • strategy (str) – The strategy used to decide how to batch records in a single request (default: None). Valid values: ‘MultiRecord’ and ‘SingleRecord’.

  • assemble_with (str) – How the output is assembled (default: None). Valid values: ‘Line’ or ‘None’.

  • output_path (str) – S3 location for saving the transform result. If not specified, results are stored to a default bucket.

  • output_kms_key (str) – Optional. KMS key ID for encrypting the transform output (default: None).

  • accept (str) – The accept header passed by the client to the inference endpoint. If it is supported by the endpoint, it will be the format of the batch transform output.

  • env (dict) – Environment variables to be set for use during the transform job (default: None).

  • max_concurrent_transforms (int) – The maximum number of HTTP requests to be made to each individual transform container at one time.

  • max_payload (int) – Maximum size of the payload in a single HTTP request to the container in MB.

  • tags (list[dict]) – List of tags for labeling a transform job. If none specified, then the tags used for the training job are used for the transform job.

  • role (str) – The ExecutionRoleArn IAM Role ARN for the Model, which is also used during transform jobs. If not specified, the role from the Estimator will be used.

  • volume_kms_key (str) – Optional. KMS key ID for encrypting the volume attached to the ML compute instance (default: None).

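A minimal batch transform sketch, continuing from the estimator above; the S3 locations and content type are placeholders:

    transformer = estimator.transformer(
        instance_count=1,
        instance_type="ml.m5.xlarge",
        strategy="SingleRecord",
        assemble_with="Line",
        output_path="s3://my-bucket/batch-output/",  # placeholder
    )

    transformer.transform(
        "s3://my-bucket/batch-input/",  # placeholder S3 prefix
        content_type="text/csv",
        split_type="Line",
    )
    transformer.wait()
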
fit(inputs=None, wait=True, logs=True, job_name=None)

Placeholder docstring

Parameters