TensorFlow¶
TensorFlow Estimator¶
class sagemaker.tensorflow.estimator.TensorFlow(training_steps=None, evaluation_steps=None, checkpoint_path=None, py_version=None, framework_version=None, model_dir=None, requirements_file='', image_name=None, script_mode=False, distributions=None, **kwargs)¶

Bases: sagemaker.estimator.Framework
Handle end-to-end training and deployment of user-provided TensorFlow code.
Initialize a TensorFlow estimator.

Parameters:
- training_steps (int) – Perform this many steps of training. None (the default) means train forever.
- evaluation_steps (int) – Perform this many steps of evaluation. None (the default) means that evaluation runs until input from eval_input_fn is exhausted (or another exception is raised).
- checkpoint_path (str) – Identifies S3 location where checkpoint data during model training can be saved (default: None). For distributed model training, this parameter is required.
- py_version (str) – Python version you want to use for executing your model training code (default: ‘py2’).
- framework_version (str) – TensorFlow version you want to use for executing your model training code. List of supported versions https://github.com/aws/sagemaker-python-sdk#tensorflow-sagemaker-estimators. If not specified, this will default to 1.11.
- model_dir (str) – S3 location where checkpoint data and models can be exported during training (default: None). If not specified, a default S3 URI will be generated. It will be passed to the training script as one of the command line arguments.
- requirements_file (str) – Path to a requirements.txt file (default: ''). The path should be within and relative to source_dir. Details on the format can be found in the Pip User Guide: https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format
- image_name (str) – If specified, the estimator will use this image for training and hosting, instead of selecting the appropriate SageMaker official image based on framework_version and py_version. It can be an ECR URL or a Docker Hub image and tag. Examples: 123.dkr.ecr.us-west-2.amazonaws.com/my-custom-image:1.0, custom-image:latest.
- script_mode (bool) – If set to True, the estimator will use the Script Mode containers (default: False). This will be ignored if py_version is set to ‘py3’.
- distributions (dict) – A dictionary with information on how to run distributed training (default: None). Currently we support distributed training with parameter servers and MPI (see the sketch after this parameter list). To enable parameter server training, use the following setup:

    {'parameter_server': {'enabled': True}}

To enable MPI:

    {'mpi': {'enabled': True}}
- **kwargs – Additional kwargs passed to the Framework constructor.
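For illustration, a minimal sketch of constructing a Script Mode estimator with parameter server distribution. The entry script, role name, and instance settings below are placeholder assumptions, not values prescribed by this API:

    from sagemaker.tensorflow import TensorFlow

    # 'train.py', the role name, and the instance configuration are
    # hypothetical placeholders; substitute your own script and resources.
    tf_estimator = TensorFlow(
        entry_point='train.py',
        role='SageMakerRole',
        train_instance_count=2,
        train_instance_type='ml.p3.2xlarge',
        framework_version='1.12',
        py_version='py3',
        script_mode=True,
        distributions={'parameter_server': {'enabled': True}},
    )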
Tip: You can find additional parameters for initializing this class at Framework and EstimatorBase.

LATEST_VERSION = '2.0.0'¶
fit(inputs=None, wait=True, logs=True, job_name=None, experiment_config=None, run_tensorboard_locally=False)¶

Train a model using the input training dataset. See fit() for more details.

Parameters:
- inputs (str or dict or sagemaker.session.s3_input) – Information about the training data (see the example after this parameter list). This can be one of three types:
  - (str) – the S3 location where training data is saved.
  - (dict[str, str] or dict[str, sagemaker.session.s3_input]) – If using multiple channels for training data, you can specify a dict mapping channel names to strings or s3_input() objects.
  - (sagemaker.session.s3_input) – channel configuration for S3 data sources that can provide additional information as well as the path to the training dataset. See sagemaker.session.s3_input() for full details.
- wait (bool) – Whether the call should wait until the job completes (default: True).
- logs (bool) – Whether to show the logs produced by the job. Only meaningful when wait is True (default: True).
- job_name (str) – Training job name. If not specified, the estimator generates a default job name, based on the training image name and current timestamp.
- experiment_config (dict[str, str]) – Experiment management configuration.
- run_tensorboard_locally (bool) – Whether to execute TensorBoard in a different process with downloaded checkpoint information (default: False). This is an experimental feature, and requires TensorBoard and AWS CLI to be installed. It terminates TensorBoard when execution ends.
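For example, a minimal sketch of calling fit with a dict of channels; the bucket and prefixes are placeholder assumptions:

    # The S3 locations below are hypothetical placeholders.
    tf_estimator.fit({
        'train': 's3://my-bucket/tf/train',
        'eval': 's3://my-bucket/tf/eval',
    })

    # Equivalently, a single S3 prefix creates one default channel:
    # tf_estimator.fit('s3://my-bucket/tf/train')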
create_model(model_server_workers=None, role=None, vpc_config_override='VPC_CONFIG_DEFAULT', endpoint_type=None, entry_point=None, source_dir=None, dependencies=None, **kwargs)¶

Create a Model object that can be used for creating SageMaker model entities, deploying to a SageMaker endpoint, or starting SageMaker Batch Transform jobs.

Parameters:
- role (str) – The ExecutionRoleArn IAM Role ARN for the Model, which is also used during transform jobs. If not specified, the role from the Estimator will be used.
- model_server_workers (int) – Optional. The number of worker processes used by the inference server. If None, the server will use one worker per vCPU.
- vpc_config_override (dict[str, list[str]]) – Optional override for VpcConfig set on the model. Default: use subnets and security groups from this Estimator. * ‘Subnets’ (list[str]): List of subnet ids. * ‘SecurityGroupIds’ (list[str]): List of security group ids.
- endpoint_type (str) – Optional. Selects the software stack used by the inference server. If not specified, the model will be configured to use the default SageMaker model server. If ‘tensorflow-serving’, the model will be configured to use the SageMaker TensorFlow Serving container.
- entry_point (str) – Path (absolute or relative) to the local Python source file which should be executed as the entry point to training. If not specified and endpoint_type is ‘tensorflow-serving’, no entry point is used. If endpoint_type is also None, then the training entry point is used.
- source_dir (str) – Path (absolute or relative) to a directory with any other serving source code dependencies aside from the entry point file. If not specified and endpoint_type is ‘tensorflow-serving’, no source_dir is used. If endpoint_type is also None, then the model source directory from training is used.
- dependencies (list[str]) – A list of paths to directories (absolute or relative) with any additional libraries that will be exported to the container. If not specified and endpoint_type is ‘tensorflow-serving’, dependencies is set to None. If endpoint_type is also None, then the dependencies from training are used.
- **kwargs – Additional kwargs passed to the Model and TensorFlowModel constructors.
Returns: A Model object. See Model or TensorFlowModel for full details.

Return type: sagemaker.tensorflow.model.TensorFlowModel or sagemaker.tensorflow.serving.Model
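As a sketch of typical usage (the instance settings are placeholder assumptions), creating a TensorFlow Serving model from a trained estimator and deploying it:

    # endpoint_type='tensorflow-serving' selects the TFS-based stack;
    # the instance settings are hypothetical placeholders.
    model = tf_estimator.create_model(endpoint_type='tensorflow-serving')
    predictor = model.deploy(initial_instance_count=1,
                             instance_type='ml.c5.xlarge')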
hyperparameters()¶

Return hyperparameters used by your custom TensorFlow code during model training.
train_image()¶

Return the Docker image URI to use for training, based on the configured framework version, Python version, and instance type.
transformer(instance_count, instance_type, strategy=None, assemble_with=None, output_path=None, output_kms_key=None, accept=None, env=None, max_concurrent_transforms=None, max_payload=None, tags=None, role=None, model_server_workers=None, volume_kms_key=None, endpoint_type=None, entry_point=None, vpc_config_override='VPC_CONFIG_DEFAULT')¶

Return a Transformer that uses a SageMaker Model based on the training job. It reuses the SageMaker Session and base job name used by the Estimator.

Parameters:
- instance_count (int) – Number of EC2 instances to use.
- instance_type (str) – Type of EC2 instance to use, for example, ‘ml.c4.xlarge’.
- strategy (str) – The strategy used to decide how to batch records in a single request (default: None). Valid values: ‘MULTI_RECORD’ and ‘SINGLE_RECORD’.
- assemble_with (str) – How the output is assembled (default: None). Valid values: ‘Line’ or ‘None’.
- output_path (str) – S3 location for saving the transform result. If not specified, results are stored to a default bucket.
- output_kms_key (str) – Optional. KMS key ID for encrypting the transform output (default: None).
- accept (str) – The accept header passed by the client to the inference endpoint. If it is supported by the endpoint, it will be the format of the batch transform output.
- env (dict) – Environment variables to be set for use during the transform job (default: None).
- max_concurrent_transforms (int) – The maximum number of HTTP requests to be made to each individual transform container at one time.
- max_payload (int) – Maximum size of the payload in a single HTTP request to the container in MB.
- tags (list[dict]) – List of tags for labeling a transform job. If none are specified, the tags used for the training job are used for the transform job.
- role (str) – The ExecutionRoleArn IAM Role ARN for the Model, which is also used during transform jobs. If not specified, the role from the Estimator will be used.
- model_server_workers (int) – Optional. The number of worker processes used by the inference server. If None, the server will use one worker per vCPU.
- volume_kms_key (str) – Optional. KMS key ID for encrypting the volume attached to the ML compute instance (default: None).
- endpoint_type (str) – Optional. Selects the software stack used by the inference server. If not specified, the model will be configured to use the default SageMaker model server. If ‘tensorflow-serving’, the model will be configured to use the SageMaker TensorFlow Serving container.
- entry_point (str) – Path (absolute or relative) to the local Python source file which should be executed as the entry point to training. If not specified and endpoint_type is ‘tensorflow-serving’, no entry point is used. If endpoint_type is also None, then the training entry point is used.
- vpc_config_override (dict[str, list[str]]) – Optional override for the VpcConfig set on the model. Default: use subnets and security groups from this Estimator. * ‘Subnets’ (list[str]): List of subnet ids. * ‘SecurityGroupIds’ (list[str]): List of security group ids.
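A minimal sketch of running a batch transform job through this method; the instance choice, input location, and content type are placeholder assumptions:

    # The instance settings and S3 input below are hypothetical placeholders.
    transformer = tf_estimator.transformer(instance_count=1,
                                           instance_type='ml.c4.xlarge',
                                           strategy='MULTI_RECORD',
                                           assemble_with='Line')
    transformer.transform('s3://my-bucket/batch-input',
                          content_type='application/json')
    transformer.wait()  # block until the transform job completes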
TensorFlow Model¶
class sagemaker.tensorflow.model.TensorFlowModel(model_data, role, entry_point, image=None, py_version='py2', framework_version=None, predictor_cls=<class 'sagemaker.tensorflow.model.TensorFlowPredictor'>, model_server_workers=None, **kwargs)¶

Bases: sagemaker.model.FrameworkModel
A FrameworkModel implementation for hosting TensorFlow models.

Initialize a TensorFlowModel.

Parameters:
- model_data (str) – The S3 location of a SageMaker model data .tar.gz file.
- role (str) – An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. After the endpoint is created, the inference code might use the IAM role, if it needs to access an AWS resource.
- entry_point (str) – Path (absolute or relative) to the Python source file which should be executed as the entry point to model hosting. This should be compatible with either Python 2.7 or Python 3.5.
- image (str) – A Docker image URI (default: None). If not specified, a default image for TensorFlow will be used.
- py_version (str) – Python version you want to use for executing your model training code (default: ‘py2’).
- framework_version (str) – TensorFlow version you want to use for executing your model training code.
- predictor_cls (callable[str, sagemaker.session.Session]) – A function to call to create a predictor with an endpoint name and SageMaker Session. If specified, deploy() returns the result of invoking this function on the created endpoint name.
- model_server_workers (int) – Optional. The number of worker processes used by the inference server. If None, the server will use one worker per vCPU.
- **kwargs – Keyword arguments passed to the FrameworkModel initializer.
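An illustrative sketch of constructing a TensorFlowModel from existing artifacts and deploying it; the artifact path, role, and entry script are placeholder assumptions:

    from sagemaker.tensorflow.model import TensorFlowModel

    # The model artifact, role, and entry script are hypothetical placeholders.
    model = TensorFlowModel(model_data='s3://my-bucket/model/model.tar.gz',
                            role='SageMakerRole',
                            entry_point='inference.py',
                            framework_version='1.12')
    predictor = model.deploy(initial_instance_count=1,
                             instance_type='ml.m4.xlarge')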
Tip: You can find additional parameters for initializing this class at FrameworkModel and Model.

prepare_container_def(instance_type, accelerator_type=None)¶

Return a container definition with framework configuration set in model environment variables.
This also uploads user-supplied code to S3.
Parameters:
- instance_type (str) – The EC2 instance type to deploy this Model to.
- accelerator_type (str) – The Elastic Inference accelerator type to attach to the instance (default: None).

Returns: A container definition object usable with the CreateModel API.

Return type: dict[str, str]
serving_image_uri(region_name, instance_type, accelerator_type=None)¶

Create a URI for the serving image.
Parameters: - region_name (str) – AWS region where the image is uploaded.
- instance_type (str) – SageMaker instance type. Used to determine device type (cpu/gpu/family-specific optimized).
- accelerator_type (str) – The Elastic Inference accelerator type to deploy to the instance for loading and making inferences to the model (default: None). For example, ‘ml.eia1.medium’.
Returns: The appropriate image URI based on the given parameters.
Return type: str
TensorFlow Predictor¶
class sagemaker.tensorflow.model.TensorFlowPredictor(endpoint_name, sagemaker_session=None)¶

Bases: sagemaker.predictor.RealTimePredictor
A RealTimePredictor for inference against a TensorFlow endpoint. It can serialize Python lists, dictionaries, and numpy arrays to multidimensional tensors for inference.

Initialize a TensorFlowPredictor.

Parameters:
- endpoint_name (str) – The name of the endpoint to perform inference on.
- sagemaker_session (sagemaker.session.Session) – Session object which manages interactions with Amazon SageMaker APIs and any other AWS services needed. If not specified, the estimator creates one using the default AWS configuration chain.
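For example, a minimal sketch of attaching a predictor to an existing endpoint; the endpoint name is a placeholder assumption:

    import numpy as np
    from sagemaker.tensorflow.model import TensorFlowPredictor

    # 'my-tf-endpoint' is a hypothetical endpoint name.
    predictor = TensorFlowPredictor('my-tf-endpoint')
    result = predictor.predict(np.asarray([[1.0, 2.0, 3.0]], dtype=np.float32))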
TensorFlow Serving Model¶
class sagemaker.tensorflow.serving.Model(model_data, role, entry_point=None, image=None, framework_version='1.11', container_log_level=None, predictor_cls=<class 'sagemaker.tensorflow.serving.Predictor'>, **kwargs)¶

Bases: sagemaker.model.FrameworkModel
A FrameworkModel implementation for hosting models with TensorFlow Serving.

Initialize a Model.

Parameters:
- model_data (str) – The S3 location of a SageMaker model data .tar.gz file.
- role (str) – An AWS IAM role (either name or full ARN). The Amazon SageMaker APIs that create Amazon SageMaker endpoints use this role to access model artifacts.
- entry_point (str) – Optional. Path (absolute or relative) to the local Python source file which should be executed as the entry point to model hosting (default: None).
- image (str) – A Docker image URI (default: None). If not specified, a default image for TensorFlow Serving will be used.
- framework_version (str) – Optional. TensorFlow Serving version you want to use.
- container_log_level (int) – Log level to use within the container (default: logging.ERROR). Valid values are defined in the Python logging module.
- predictor_cls (callable[str, sagemaker.session.Session]) – A function to call to create a predictor with an endpoint name and SageMaker Session. If specified, deploy() returns the result of invoking this function on the created endpoint name.
- **kwargs – Keyword arguments passed to the Model initializer.
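A minimal sketch of constructing a serving Model directly from artifacts; the artifact path and role are placeholder assumptions:

    import logging
    from sagemaker.tensorflow.serving import Model

    # The artifact location and role are hypothetical placeholders.
    model = Model(model_data='s3://my-bucket/model/model.tar.gz',
                  role='SageMakerRole',
                  framework_version='1.12',
                  container_log_level=logging.INFO)  # 20 maps to 'info' via LOG_LEVEL_MAP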
Tip: You can find additional parameters for initializing this class at FrameworkModel and Model.

FRAMEWORK_NAME = 'tensorflow-serving'¶

LOG_LEVEL_PARAM_NAME = 'SAGEMAKER_TFS_NGINX_LOGLEVEL'¶

LOG_LEVEL_MAP = {10: 'debug', 20: 'info', 30: 'warn', 40: 'error', 50: 'crit'}¶

LATEST_EIA_VERSION = [1, 14]¶
deploy(initial_instance_count, instance_type, accelerator_type=None, endpoint_name=None, update_endpoint=False, tags=None, kms_key=None, wait=True, data_capture_config=None)¶

Deploy this Model to an Endpoint and optionally return a Predictor.

Create a SageMaker Model and EndpointConfig, and deploy an Endpoint from this Model. If self.predictor_cls is not None, this method returns the result of invoking self.predictor_cls on the created endpoint name.

The name of the created model is accessible in the name field of this Model after deploy returns.

The name of the created endpoint is accessible in the endpoint_name field of this Model after deploy returns.

Parameters:
- initial_instance_count (int) – The initial number of instances to run in the Endpoint created from this Model.
- instance_type (str) – The EC2 instance type to deploy this Model to. For example, ‘ml.p2.xlarge’, or ‘local’ for local mode.
- accelerator_type (str) – Type of Elastic Inference accelerator to deploy this model for model loading and inference, for example, ‘ml.eia1.medium’. If not specified, no Elastic Inference accelerator will be attached to the endpoint. For more information: https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html
- endpoint_name (str) – The name of the endpoint to create (default: None). If not specified, a unique endpoint name will be created.
- update_endpoint (bool) – Flag to update the model in an existing Amazon SageMaker endpoint. If True, this will deploy a new EndpointConfig to an already existing endpoint and delete resources corresponding to the previous EndpointConfig. If False, a new endpoint will be created. Default: False
- tags (List[dict[str, str]]) – The list of tags to attach to this specific endpoint.
- kms_key (str) – The ARN of the KMS key that is used to encrypt the data on the storage volume attached to the instance hosting the endpoint.
- wait (bool) – Whether the call should wait until the deployment of this model completes (default: True).
- data_capture_config (sagemaker.model_monitor.DataCaptureConfig) – Specifies configuration related to Endpoint data capture for use with Amazon SageMaker Model Monitoring. Default: None.
Returns: Invocation of self.predictor_cls on the created endpoint name, if self.predictor_cls is not None. Otherwise, returns None.

Return type: callable[string, sagemaker.session.Session] or None
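For example, a sketch of deploying with an Elastic Inference accelerator attached; the endpoint name, instance type, and accelerator choice are placeholder assumptions:

    # The endpoint name and sizes below are hypothetical placeholders.
    predictor = model.deploy(initial_instance_count=1,
                             instance_type='ml.c5.xlarge',
                             accelerator_type='ml.eia1.medium',
                             endpoint_name='my-tfs-endpoint')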
prepare_container_def(instance_type, accelerator_type=None)¶

Parameters:
- instance_type (str) – The EC2 instance type to deploy this Model to.
- accelerator_type (str) – The Elastic Inference accelerator type to attach to the instance (default: None).
serving_image_uri(region_name, instance_type, accelerator_type=None)¶

Create a URI for the serving image.
Parameters: - region_name (str) – AWS region where the image is uploaded.
- instance_type (str) – SageMaker instance type. Used to determine device type (cpu/gpu/family-specific optimized).
- accelerator_type (str) – The Elastic Inference accelerator type to deploy to the instance for loading and making inferences to the model (default: None). For example, ‘ml.eia1.medium’.
Returns: The appropriate image URI based on the given parameters.
Return type: str
TensorFlow Serving Predictor¶
class sagemaker.tensorflow.serving.Predictor(endpoint_name, sagemaker_session=None, serializer=<sagemaker.predictor._JsonSerializer object>, deserializer=<sagemaker.predictor._JsonDeserializer object>, content_type=None, model_name=None, model_version=None)¶

Bases: sagemaker.predictor.RealTimePredictor
A RealTimePredictor implementation for inference against TensorFlow Serving endpoints.

Initialize a TFSPredictor. See sagemaker.RealTimePredictor for more info about parameters.

Parameters:
- endpoint_name (str) – The name of the endpoint to perform inference on.
- sagemaker_session (sagemaker.session.Session) – Session object which manages interactions with Amazon SageMaker APIs and any other AWS services needed. If not specified, the estimator creates one using the default AWS configuration chain.
- serializer (callable) – Optional. By default, serializes input data to JSON. Handles dicts, lists, and numpy arrays.
- deserializer (callable) – Optional. By default, parses the response using json.load(...).
- content_type (str) – Optional. The “ContentType” for invocation requests. If specified, overrides the content_type from the serializer (default: None).
- model_name (str) – Optional. The name of the SavedModel model that should handle the request. If not specified, the endpoint’s default model will handle the request.
- model_version (str) – Optional. The version of the SavedModel model that should handle the request. If not specified, the latest version of the model will be used.
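As a sketch, constructing a predictor that targets a specific SavedModel and version on a TensorFlow Serving endpoint; the names are placeholder assumptions:

    from sagemaker.tensorflow.serving import Predictor

    # The endpoint, model name, and version are hypothetical placeholders.
    predictor = Predictor('my-tfs-endpoint',
                          model_name='my_saved_model',
                          model_version='1')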
classify(data)¶

Parameters: data – Input data for a TensorFlow Serving Classify request.

regress(data)¶

Parameters: data – Input data for a TensorFlow Serving Regress request.

predict(data, initial_args=None)¶

Parameters:
- data – Input data for a TensorFlow Serving Predict request.
- initial_args – Optional. Default arguments passed to the underlying InvokeEndpoint call.
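To illustrate, a sketch of the request types; the input shapes depend on your SavedModel’s exported signatures:

    # predict() posts the data as a TensorFlow Serving Predict request.
    result = predictor.predict([[1.0, 2.0, 3.0]])
    print(result['predictions'])  # the response follows the TFS REST API format

    # classify() and regress() are analogous, but require the SavedModel to
    # export 'classify'/'regress' method signatures.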