Session
This module contains the SageMaker Session class and related helpers for calling the Amazon SageMaker APIs.
- class sagemaker.session.LogState
Bases: object
States used while streaming the logs of a SageMaker job (see logs_for_job() and logs_for_transform_job()).
- STARTING = 1
- WAIT_IN_PROGRESS = 2
- TAILING = 3
- JOB_COMPLETE = 4
- COMPLETE = 5
- class sagemaker.session.Session(boto_session=None, sagemaker_client=None, sagemaker_runtime_client=None)
Bases: object
Manage interactions with the Amazon SageMaker APIs and any other AWS services needed.
This class provides convenient methods for manipulating entities and resources that Amazon SageMaker uses, such as training jobs, endpoints, and input datasets in S3.
AWS service calls are delegated to an underlying Boto3 session, which by default is initialized using the AWS configuration chain. When you make an Amazon SageMaker API call that accesses an S3 bucket location and one is not specified, the Session creates a default bucket based on a naming convention that includes the current AWS account ID.
Initialize a SageMaker Session.
Parameters:
- boto_session (boto3.session.Session) – The underlying Boto3 session to which AWS service calls are delegated (default: None). If not provided, one is created using the default AWS configuration chain.
- sagemaker_client (boto3.SageMaker.Client) – Client that makes Amazon SageMaker service calls other than InvokeEndpoint (default: None). Estimators created using this Session use this client. If not provided, one will be created using this instance's boto_session.
- sagemaker_runtime_client (boto3.SageMakerRuntime.Client) – Client that makes InvokeEndpoint calls to Amazon SageMaker (default: None). Predictors created using this Session use this client. If not provided, one will be created using this instance's boto_session.
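A minimal construction sketch, assuming AWS credentials are available through the standard configuration chain; the region name below is a placeholder:
>>> import boto3
>>> import sagemaker
>>> boto_sess = boto3.session.Session(region_name='us-west-2')
>>> sm_session = sagemaker.session.Session(boto_session=boto_sess)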
- boto_region_name
The AWS region name of the underlying boto_session.
- upload_data(path, bucket=None, key_prefix='data', extra_args=None)
Upload a local file or directory to S3.
If a single file is specified for upload, the resulting S3 object key is {key_prefix}/{filename} (filename does not include the local path, if any was specified).
If a directory is specified for upload, the API uploads all content, recursively, preserving the relative structure of subdirectories. The resulting object key names are: {key_prefix}/{relative_subdirectory_path}/filename.
Parameters:
- path (str) – Path (absolute or relative) of the local file or directory to upload.
- bucket (str) – Name of the S3 bucket to upload to (default: None). If not specified, the default bucket of the Session is used (if the default bucket does not exist, the Session creates it).
- key_prefix (str) – Optional S3 object key name prefix (default: 'data'). S3 uses the prefix to create a directory structure for the bucket content that it displays in the S3 console.
- extra_args (dict) – Optional extra arguments that may be passed to the upload operation. Similar to the ExtraArgs parameter in the S3 upload_file function. Please refer to the ExtraArgs parameter documentation here: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html#the-extraargs-parameter
Returns: The S3 URI of the uploaded file(s). If a file is specified in the path argument, the URI format is s3://{bucket name}/{key_prefix}/{original_file_name}. If a directory is specified in the path argument, the URI format is s3://{bucket name}/{key_prefix}.
Return type: str
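A usage sketch; the local path and key prefix are placeholders:
>>> s3_uri = sm_session.upload_data(path='data/train.csv', key_prefix='my-dataset')
>>> # s3_uri now looks like 's3://sagemaker-{region}-{account id}/my-dataset/train.csv'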
- default_bucket()
Return the name of the default bucket to use in relevant Amazon SageMaker interactions.
Returns: The name of the default bucket, which is of the form: sagemaker-{region}-{AWS account ID}.
Return type: str
- train(input_mode, input_config, role, job_name, output_config, resource_config, vpc_config, hyperparameters, stop_condition, tags, metric_definitions, enable_network_isolation=False, image=None, algorithm_arn=None, encrypt_inter_container_traffic=False, train_use_spot_instances=False, checkpoint_s3_uri=None, checkpoint_local_path=None)
Create an Amazon SageMaker training job.
Parameters:
- input_mode (str) – The input mode that the algorithm supports. Valid modes: 'File' - Amazon SageMaker copies the training dataset from the S3 location to a directory in the Docker container; 'Pipe' - Amazon SageMaker streams data directly from S3 to the container via a Unix named pipe.
- input_config (list) – A list of Channel objects. Each channel is a named input source. Please refer to the format details described here: https://botocore.readthedocs.io/en/latest/reference/services/sagemaker.html#SageMaker.Client.create_training_job
- role (str) – An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. You must grant sufficient permissions to this role.
- job_name (str) – Name of the training job being created.
- output_config (dict) – The S3 URI where you want to store the training results and an optional KMS key ID.
- resource_config (dict) – Contains values for ResourceConfig: instance_count (int), the number of EC2 instances to use for training (key: 'InstanceCount'); instance_type (str), the type of EC2 instance to use for training, for example, 'ml.c4.xlarge' (key: 'InstanceType').
- vpc_config (dict) – Contains values for VpcConfig: subnets (list[str]), the list of subnet IDs (key: 'Subnets'); security_group_ids (list[str]), the list of security group IDs (key: 'SecurityGroupIds').
- hyperparameters (dict) – Hyperparameters for model training. The hyperparameters are made accessible as a dict[str, str] to the training code on SageMaker. For convenience, this accepts other types for keys and values, but str() will be called to convert them before training.
- stop_condition (dict) – Defines when training shall finish. Contains entries that can be understood by the service, such as MaxRuntimeInSeconds.
- tags (list[dict]) – List of tags for labeling a training job. For more, see https://docs.aws.amazon.com/sagemaker/latest/dg/API_Tag.html.
- metric_definitions (list[dict]) – A list of dictionaries that define the metric(s) used to evaluate the training jobs. Each dictionary contains two keys: 'Name' for the name of the metric, and 'Regex' for the regular expression used to extract the metric from the logs.
- enable_network_isolation (bool) – Whether to run the training job with network isolation or not.
- image (str) – Docker image containing training code.
- algorithm_arn (str) – Algorithm ARN from AWS Marketplace.
- encrypt_inter_container_traffic (bool) – Specifies whether traffic between training containers is encrypted for the training job (default: False).
- train_use_spot_instances (bool) – Whether to use spot instances for training.
- checkpoint_s3_uri (str) – The S3 URI to which to persist checkpoints that the algorithm produces (if any) during training (default: None).
- checkpoint_local_path (str) – The local path that the algorithm writes its checkpoints to. SageMaker will persist all files under this path to checkpoint_s3_uri continually during training. On job startup the reverse happens: data from the S3 location is downloaded to this path before the algorithm is started. If the path is unset, SageMaker assumes the checkpoints will be provided under /opt/ml/checkpoints/ (default: None).
Returns: ARN of the training job, if it is created.
Return type: str
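A minimal sketch of a low-level train() call, assuming an existing training image and execution role; the bucket, role ARN, image URI, and job name are placeholders, and the dictionary shapes follow the CreateTrainingJob API linked above:
>>> sm_session.train(
...     input_mode='File',
...     input_config=[{'ChannelName': 'training',
...                    'DataSource': {'S3DataSource': {
...                        'S3DataType': 'S3Prefix',
...                        'S3Uri': 's3://my-bucket/my-dataset/',
...                        'S3DataDistributionType': 'FullyReplicated'}}}],
...     role='arn:aws:iam::123456789012:role/MySageMakerRole',
...     job_name='my-training-job',
...     output_config={'S3OutputPath': 's3://my-bucket/output/'},
...     resource_config={'InstanceCount': 1,
...                      'InstanceType': 'ml.c4.xlarge',
...                      'VolumeSizeInGB': 30},
...     vpc_config=None,
...     hyperparameters={'epochs': '10'},
...     stop_condition={'MaxRuntimeInSeconds': 3600},
...     tags=None,
...     metric_definitions=None,
...     image='123456789012.dkr.ecr.us-west-2.amazonaws.com/my-image:latest')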
- compile_model(input_model_config, output_model_config, role, job_name, stop_condition, tags)
Create an Amazon SageMaker Neo compilation job.
Parameters:
- input_model_config (dict) – The trained model and the Amazon S3 location where it is stored.
- output_model_config (dict) – Identifies the Amazon S3 location where you want Amazon SageMaker Neo to save the results of the compilation job.
- role (str) – An AWS IAM role (either name or full ARN). The Amazon SageMaker Neo compilation jobs use this role to access model artifacts. You must grant sufficient permissions to this role.
- job_name (str) – Name of the compilation job being created.
- stop_condition (dict) – Defines when the compilation job shall finish. Contains entries that can be understood by the service, such as MaxRuntimeInSeconds.
- tags (list[dict]) – List of tags for labeling a compile model job. For more, see https://docs.aws.amazon.com/sagemaker/latest/dg/API_Tag.html.
Returns: ARN of the compile model job, if it is created.
Return type: str
- tune(job_name, strategy, objective_type, objective_metric_name, max_jobs, max_parallel_jobs, parameter_ranges, static_hyperparameters, input_mode, metric_definitions, role, input_config, output_config, resource_config, stop_condition, tags, warm_start_config, enable_network_isolation=False, image=None, algorithm_arn=None, early_stopping_type='Off', encrypt_inter_container_traffic=False, vpc_config=None, train_use_spot_instances=False, checkpoint_s3_uri=None, checkpoint_local_path=None)
Create an Amazon SageMaker hyperparameter tuning job.
Parameters:
- job_name (str) – Name of the tuning job being created.
- strategy (str) – Strategy to be used for hyperparameter estimations.
- objective_type (str) – The type of the objective metric for evaluating training jobs. This value can be either 'Minimize' or 'Maximize'.
- objective_metric_name (str) – Name of the metric for evaluating training jobs.
- max_jobs (int) – Maximum total number of training jobs to start for the hyperparameter tuning job.
- max_parallel_jobs (int) – Maximum number of parallel training jobs to start.
- parameter_ranges (dict) – Dictionary of parameter ranges. These parameter ranges can be one of three types: Continuous, Integer, or Categorical.
- static_hyperparameters (dict) – Hyperparameters for model training. These hyperparameters remain unchanged across all of the training jobs for the hyperparameter tuning job. The hyperparameters are made accessible as a dictionary for the training code on SageMaker.
- image (str) – Docker image containing training code.
- input_mode (str) – The input mode that the algorithm supports. Valid modes: 'File' - Amazon SageMaker copies the training dataset from the S3 location to a directory in the Docker container; 'Pipe' - Amazon SageMaker streams data directly from S3 to the container via a Unix named pipe.
- metric_definitions (list[dict]) – A list of dictionaries that define the metric(s) used to evaluate the training jobs. Each dictionary contains two keys: 'Name' for the name of the metric, and 'Regex' for the regular expression used to extract the metric from the logs. This should be defined only for jobs that don't use an Amazon algorithm.
- role (str) – An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. You must grant sufficient permissions to this role.
- input_config (list) – A list of Channel objects. Each channel is a named input source. Please refer to the format details described here: https://botocore.readthedocs.io/en/latest/reference/services/sagemaker.html#SageMaker.Client.create_training_job
- output_config (dict) – The S3 URI where you want to store the training results and an optional KMS key ID.
- resource_config (dict) – Contains values for ResourceConfig: instance_count (int), the number of EC2 instances to use for training (key: 'InstanceCount'); instance_type (str), the type of EC2 instance to use for training, for example, 'ml.c4.xlarge' (key: 'InstanceType').
- stop_condition (dict) – Defines when training should finish, e.g. MaxRuntimeInSeconds.
- tags (list[dict]) – List of tags for labeling the tuning job. For more, see https://docs.aws.amazon.com/sagemaker/latest/dg/API_Tag.html.
- warm_start_config (dict) – Configuration defining the type of warm start and other required configurations.
- early_stopping_type (str) – Specifies whether early stopping is enabled for the job. Can be either 'Auto' or 'Off'. If set to 'Off', early stopping will not be attempted. If set to 'Auto', early stopping of some training jobs may happen, but is not guaranteed to.
- encrypt_inter_container_traffic (bool) – Specifies whether traffic between training containers is encrypted for the training jobs started for this hyperparameter tuning job (default: False).
- vpc_config (dict) – Contains values for VpcConfig (default: None): subnets (list[str]), the list of subnet IDs (key: 'Subnets'); security_group_ids (list[str]), the list of security group IDs (key: 'SecurityGroupIds').
- train_use_spot_instances (bool) – Whether to use spot instances for training.
- checkpoint_s3_uri (str) – The S3 URI to which to persist checkpoints that the algorithm produces (if any) during training (default: None).
- checkpoint_local_path (str) – The local path that the algorithm writes its checkpoints to. SageMaker will persist all files under this path to checkpoint_s3_uri continually during training. On job startup the reverse happens: data from the S3 location is downloaded to this path before the algorithm is started. If the path is unset, SageMaker assumes the checkpoints will be provided under /opt/ml/checkpoints/ (default: None).
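A sketch of a tune() call, reusing the same placeholder role, image, and S3 locations as the train() example above; the parameter_ranges and metric_definitions dictionaries follow the CreateHyperParameterTuningJob API shapes:
>>> sm_session.tune(
...     job_name='my-tuning-job',
...     strategy='Bayesian',
...     objective_type='Maximize',
...     objective_metric_name='validation:accuracy',
...     max_jobs=10,
...     max_parallel_jobs=2,
...     parameter_ranges={
...         'ContinuousParameterRanges': [
...             {'Name': 'learning_rate', 'MinValue': '0.01', 'MaxValue': '0.2'}],
...         'IntegerParameterRanges': [],
...         'CategoricalParameterRanges': []},
...     static_hyperparameters={'epochs': '10'},
...     input_mode='File',
...     metric_definitions=[{'Name': 'validation:accuracy',
...                          'Regex': 'accuracy=([0-9\\.]+)'}],
...     role='arn:aws:iam::123456789012:role/MySageMakerRole',
...     input_config=[{'ChannelName': 'training',
...                    'DataSource': {'S3DataSource': {
...                        'S3DataType': 'S3Prefix',
...                        'S3Uri': 's3://my-bucket/my-dataset/',
...                        'S3DataDistributionType': 'FullyReplicated'}}}],
...     output_config={'S3OutputPath': 's3://my-bucket/output/'},
...     resource_config={'InstanceCount': 1,
...                      'InstanceType': 'ml.c4.xlarge',
...                      'VolumeSizeInGB': 30},
...     stop_condition={'MaxRuntimeInSeconds': 3600},
...     tags=None,
...     warm_start_config=None,
...     image='123456789012.dkr.ecr.us-west-2.amazonaws.com/my-image:latest')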
- stop_tuning_job(name)
Stop the Amazon SageMaker hyperparameter tuning job with the specified name.
Parameters: name (str) – Name of the Amazon SageMaker hyperparameter tuning job.
Raises: ClientError – If an error occurs while trying to stop the hyperparameter tuning job.
- transform(job_name, model_name, strategy, max_concurrent_transforms, max_payload, env, input_config, output_config, resource_config, tags, data_processing)
Create an Amazon SageMaker transform job.
Parameters:
- job_name (str) – Name of the transform job being created.
- model_name (str) – Name of the SageMaker model being used for the transform job.
- strategy (str) – The strategy used to decide how to batch records in a single request. Possible values are 'MultiRecord' and 'SingleRecord'.
- max_concurrent_transforms (int) – The maximum number of HTTP requests to be made to each individual transform container at one time.
- max_payload (int) – Maximum size of the payload in a single HTTP request to the container, in MB.
- env (dict) – Environment variables to be set for use during the transform job.
- input_config (dict) – A dictionary describing the input data (and its location) for the job.
- output_config (dict) – A dictionary describing the output location for the job.
- resource_config (dict) – A dictionary describing the resources to complete the job.
- tags (list[dict]) – List of tags for labeling a transform job. For more, see https://docs.aws.amazon.com/sagemaker/latest/dg/API_Tag.html.
- data_processing (dict) – A dictionary describing the configuration for combining the input data and the transformed data.
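A sketch of a transform() call against an existing model; the model name, bucket, and job name are placeholders, and the input/output/resource dictionaries follow the CreateTransformJob API shapes:
>>> sm_session.transform(
...     job_name='my-transform-job',
...     model_name='my-model',
...     strategy='MultiRecord',
...     max_concurrent_transforms=1,
...     max_payload=6,
...     env=None,
...     input_config={'DataSource': {'S3DataSource': {
...                       'S3DataType': 'S3Prefix',
...                       'S3Uri': 's3://my-bucket/batch-input/'}},
...                   'ContentType': 'text/csv'},
...     output_config={'S3OutputPath': 's3://my-bucket/batch-output/'},
...     resource_config={'InstanceCount': 1, 'InstanceType': 'ml.m4.xlarge'},
...     tags=None,
...     data_processing=None)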
- create_model(name, role, container_defs, vpc_config=None, enable_network_isolation=False, primary_container=None, tags=None)
Create an Amazon SageMaker Model. Specify the S3 location of the model artifacts and the Docker image containing the inference code. Amazon SageMaker uses this information to deploy the model in Amazon SageMaker. This method can also be used to create a Model for an Inference Pipeline if you pass the list of container definitions through the containers parameter.
Parameters:
- name (str) – Name of the Amazon SageMaker Model to create.
- role (str) – An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. You must grant sufficient permissions to this role.
- container_defs (list[dict[str, str]] or dict[str, str]) – A single container definition or a list of container definitions which will be invoked sequentially while performing the prediction. If the list contains only one container, it is passed to SageMaker Hosting as the PrimaryContainer; otherwise, it is passed as Containers. You can also specify the return value of sagemaker.get_container_def() or sagemaker.pipeline_container_def(), which will be used to create more advanced container configurations, including model containers which need artifacts from S3.
- vpc_config (dict[str, list[str]]) – The VpcConfig set on the model (default: None): 'Subnets' (list[str]), the list of subnet IDs; 'SecurityGroupIds' (list[str]), the list of security group IDs.
- enable_network_isolation (bool) – Whether the model requires network isolation or not.
- primary_container (str or dict[str, str]) – Docker image which defines the inference code. You can also specify the return value of sagemaker.container_def(), which is used to create more advanced container configurations, including model containers which need artifacts from S3. This field is deprecated, please use container_defs instead.
- tags (list[dict[str, str]]) – Optional. The list of tags to add to the model.
Example:
>>> tags = [{'Key': 'tagname', 'Value': 'tagvalue'}]
For more information about tags, see https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.add_tags
Returns: Name of the Amazon SageMaker Model created.
Return type: str
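A sketch of creating a model from existing artifacts, using the module-level container_def() helper documented below; the image URI, artifact location, and role are placeholders:
>>> from sagemaker.session import container_def
>>> primary = container_def(
...     image='123456789012.dkr.ecr.us-west-2.amazonaws.com/my-inference-image:latest',
...     model_data_url='s3://my-bucket/output/my-training-job/output/model.tar.gz')
>>> sm_session.create_model(name='my-model',
...                         role='arn:aws:iam::123456789012:role/MySageMakerRole',
...                         container_defs=primary)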
- create_model_from_job(training_job_name, name=None, role=None, primary_container_image=None, model_data_url=None, env=None, vpc_config_override='VPC_CONFIG_DEFAULT', tags=None)
Create an Amazon SageMaker Model from a SageMaker Training Job.
Parameters:
- training_job_name (str) – The Amazon SageMaker Training Job name.
- name (str) – The name of the SageMaker Model to create (default: None). If not specified, the training job name is used.
- role (str) – The ExecutionRoleArn IAM Role ARN for the Model, specified either by an IAM role name or role ARN. If None, the RoleArn from the SageMaker Training Job will be used.
- primary_container_image (str) – The Docker image reference (default: None). If None, it defaults to the training image of training_job_name.
- model_data_url (str) – S3 location of the model data (default: None). If None, defaults to the ModelS3Artifacts of training_job_name.
- env (dict[string, string]) – Model environment variables (default: {}).
- vpc_config_override (dict[str, list[str]]) – Optional override for the VpcConfig set on the model. Default: use the VpcConfig from the training job. 'Subnets' (list[str]): list of subnet IDs; 'SecurityGroupIds' (list[str]): list of security group IDs.
- tags (list[dict[str, str]]) – Optional. The list of tags to add to the model. For more, see https://docs.aws.amazon.com/sagemaker/latest/dg/API_Tag.html.
Returns: The name of the created Model.
Return type: str
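A usage sketch; the training job name is a placeholder:
>>> model_name = sm_session.create_model_from_job('my-training-job')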
- create_model_package_from_algorithm(name, description, algorithm_arn, model_data)
Create a SageMaker Model Package from the results of training with an Algorithm Package.
Parameters:
- name (str) – Name of the model package being created.
- description (str) – Description of the model package.
- algorithm_arn (str) – ARN of the algorithm used to train the model.
- model_data (str) – S3 location of the model artifacts produced by training.
- wait_for_model_package(model_package_name, poll=5)
Wait for an Amazon SageMaker model package creation to complete.
Parameters:
- model_package_name (str) – Name of the model package to wait for.
- poll (int) – Polling interval in seconds (default: 5).
Returns: Return value from the DescribeModelPackage API.
Return type: dict
- create_endpoint_config(name, model_name, initial_instance_count, instance_type, accelerator_type=None, tags=None, kms_key=None)
Create an Amazon SageMaker endpoint configuration.
The endpoint configuration identifies the Amazon SageMaker model (created using the CreateModel API) and the hardware configuration on which to deploy the model. Provide this endpoint configuration to the CreateEndpoint API, which then launches the hardware and deploys the model.
Parameters:
- name (str) – Name of the Amazon SageMaker endpoint configuration to create.
- model_name (str) – Name of the Amazon SageMaker Model.
- initial_instance_count (int) – Minimum number of EC2 instances to launch. The actual number of active instances for an endpoint at any given time varies due to autoscaling.
- instance_type (str) – Type of EC2 instance to launch, for example, 'ml.c4.xlarge'.
- accelerator_type (str) – Type of Elastic Inference accelerator to attach to the instance. For example, 'ml.eia1.medium'. For more information: https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html
- tags (list[dict[str, str]]) – Optional. The list of tags to add to the endpoint configuration.
- kms_key (str) – The KMS key that is used to encrypt the data on the storage volume attached to the instance hosting the endpoint.
Example:
>>> tags = [{'Key': 'tagname', 'Value': 'tagvalue'}]
For more information about tags, see https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.add_tags
Returns: Name of the endpoint configuration created.
Return type: str
- create_endpoint(endpoint_name, config_name, tags=None, wait=True)
Create an Amazon SageMaker Endpoint according to the endpoint configuration specified in the request.
Once the Endpoint is created, client applications can send requests to obtain inferences. The endpoint configuration is created using the CreateEndpointConfig API.
Parameters:
- endpoint_name (str) – Name of the Amazon SageMaker Endpoint being created.
- config_name (str) – Name of the Amazon SageMaker endpoint configuration to deploy.
- tags (list[dict[str, str]]) – Optional. The list of tags to add to the endpoint.
- wait (bool) – Whether to wait for the endpoint deployment to complete before returning (default: True).
Returns: Name of the Amazon SageMaker Endpoint created.
Return type: str
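A sketch of configuring and launching an endpoint for the model created above; all names are placeholders:
>>> sm_session.create_endpoint_config(name='my-endpoint-config',
...                                   model_name='my-model',
...                                   initial_instance_count=1,
...                                   instance_type='ml.c4.xlarge')
>>> sm_session.create_endpoint(endpoint_name='my-endpoint',
...                            config_name='my-endpoint-config')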
- update_endpoint(endpoint_name, endpoint_config_name)
Update an Amazon SageMaker Endpoint according to the endpoint configuration specified in the request.
Raises an error if an endpoint with the name endpoint_name does not exist.
Parameters:
- endpoint_name (str) – Name of the Amazon SageMaker Endpoint to update.
- endpoint_config_name (str) – Name of the Amazon SageMaker endpoint configuration to deploy to the Endpoint.
Returns: Name of the Amazon SageMaker Endpoint being updated.
Return type: str
- delete_endpoint(endpoint_name)
Delete an Amazon SageMaker Endpoint.
Parameters: endpoint_name (str) – Name of the Amazon SageMaker Endpoint to delete.
- delete_endpoint_config(endpoint_config_name)
Delete an Amazon SageMaker endpoint configuration.
Parameters: endpoint_config_name (str) – Name of the Amazon SageMaker endpoint configuration to delete.
- delete_model(model_name)
Delete an Amazon SageMaker Model.
Parameters: model_name (str) – Name of the Amazon SageMaker model to delete.
- wait_for_job(job, poll=5)
Wait for an Amazon SageMaker training job to complete.
Parameters:
- job (str) – Name of the training job to wait for.
- poll (int) – Polling interval in seconds (default: 5).
Returns: Return value from the DescribeTrainingJob API.
Return type: dict
Raises: exceptions.UnexpectedStatusException – If the training job fails.
- wait_for_compilation_job(job, poll=5)
Wait for an Amazon SageMaker Neo compilation job to complete.
Parameters:
- job (str) – Name of the compilation job to wait for.
- poll (int) – Polling interval in seconds (default: 5).
Returns: Return value from the DescribeCompilationJob API.
Return type: dict
Raises: exceptions.UnexpectedStatusException – If the compilation job fails.
- wait_for_tuning_job(job, poll=5)
Wait for an Amazon SageMaker hyperparameter tuning job to complete.
Parameters:
- job (str) – Name of the tuning job to wait for.
- poll (int) – Polling interval in seconds (default: 5).
Returns: Return value from the DescribeHyperParameterTuningJob API.
Return type: dict
Raises: exceptions.UnexpectedStatusException – If the hyperparameter tuning job fails.
- wait_for_transform_job(job, poll=5)
Wait for an Amazon SageMaker transform job to complete.
Parameters:
- job (str) – Name of the transform job to wait for.
- poll (int) – Polling interval in seconds (default: 5).
Returns: Return value from the DescribeTransformJob API.
Return type: dict
Raises: exceptions.UnexpectedStatusException – If the transform job fails.
- stop_transform_job(name)
Stop the Amazon SageMaker batch transform job with the specified name.
Parameters: name (str) – Name of the Amazon SageMaker batch transform job.
Raises: ClientError – If an error occurs while trying to stop the batch transform job.
- wait_for_endpoint(endpoint, poll=5)
Wait for an Amazon SageMaker endpoint deployment to complete.
Parameters:
- endpoint (str) – Name of the Endpoint to wait for.
- poll (int) – Polling interval in seconds (default: 5).
Returns: Return value from the DescribeEndpoint API.
Return type: dict
- endpoint_from_job(job_name, initial_instance_count, instance_type, deployment_image=None, name=None, role=None, wait=True, model_environment_vars=None, vpc_config_override='VPC_CONFIG_DEFAULT', accelerator_type=None)
Create an Endpoint using the results of a successful training job.
Specify the job name, Docker image containing the inference code, and hardware configuration to deploy the model. Internally, the API creates an Amazon SageMaker model (which describes the model artifacts and the Docker image containing inference code), an endpoint configuration (describing the hardware to deploy for hosting the model), and an Endpoint (launching the EC2 instances and deploying the model on them). In response, the API returns the endpoint name to which you can send requests for inferences.
Parameters:
- job_name (str) – Name of the training job to deploy the results of.
- initial_instance_count (int) – Minimum number of EC2 instances to launch. The actual number of active instances for an endpoint at any given time varies due to autoscaling.
- instance_type (str) – Type of EC2 instance to deploy to an endpoint for prediction, for example, 'ml.c4.xlarge'.
- deployment_image (str) – The Docker image which defines the inference code to be used as the entry point for accepting prediction requests. If not specified, uses the image used for the training job.
- name (str) – Name of the Endpoint to create. If not specified, uses the training job name.
- role (str) – An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. You must grant sufficient permissions to this role.
- wait (bool) – Whether to wait for the endpoint deployment to complete before returning (default: True).
- model_environment_vars (dict[str, str]) – Environment variables to set on the model container (default: None).
- vpc_config_override (dict[str, list[str]]) – Overrides the VpcConfig set on the model. Default: use the VpcConfig from the training job. 'Subnets' (list[str]): list of subnet IDs; 'SecurityGroupIds' (list[str]): list of security group IDs.
- accelerator_type (str) – Type of Elastic Inference accelerator to attach to the instance. For example, 'ml.eia1.medium'. For more information: https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html
Returns: Name of the Endpoint that is created.
Return type: str
- endpoint_from_model_data(model_s3_location, deployment_image, initial_instance_count, instance_type, name=None, role=None, wait=True, model_environment_vars=None, model_vpc_config=None, accelerator_type=None)
Create and deploy to an Endpoint using existing model data stored in S3.
Parameters:
- model_s3_location (str) – S3 URI of the model artifacts to use for the endpoint.
- deployment_image (str) – The Docker image which defines the runtime code to be used as the entry point for accepting prediction requests.
- initial_instance_count (int) – Minimum number of EC2 instances to launch. The actual number of active instances for an endpoint at any given time varies due to autoscaling.
- instance_type (str) – Type of EC2 instance to deploy to an endpoint for prediction, e.g. 'ml.c4.xlarge'.
- name (str) – Name of the Endpoint to create. If not specified, uses a name generated by combining the image name with a timestamp.
- role (str) – An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. You must grant sufficient permissions to this role.
- wait (bool) – Whether to wait for the endpoint deployment to complete before returning (default: True).
- model_environment_vars (dict[str, str]) – Environment variables to set on the model container (default: None).
- model_vpc_config (dict[str, list[str]]) – The VpcConfig set on the model (default: None): 'Subnets' (list[str]), the list of subnet IDs; 'SecurityGroupIds' (list[str]), the list of security group IDs.
- accelerator_type (str) – Type of Elastic Inference accelerator to attach to the instance. For example, 'ml.eia1.medium'. For more information: https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html
Returns: Name of the Endpoint that is created.
Return type: str
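A sketch of deploying existing model artifacts directly; the artifact location, image URI, and role are placeholders:
>>> endpoint_name = sm_session.endpoint_from_model_data(
...     model_s3_location='s3://my-bucket/output/my-training-job/output/model.tar.gz',
...     deployment_image='123456789012.dkr.ecr.us-west-2.amazonaws.com/my-inference-image:latest',
...     initial_instance_count=1,
...     instance_type='ml.c4.xlarge',
...     role='arn:aws:iam::123456789012:role/MySageMakerRole')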
- endpoint_from_production_variants(name, production_variants, tags=None, kms_key=None, wait=True)
Create a SageMaker Endpoint from a list of production variants. For a usage example, see production_variant() below.
Parameters:
- name (str) – The name of the Endpoint to create.
- production_variants (list[dict[str, str]]) – The list of production variants to deploy.
- tags (list[dict[str, str]]) – A list of key-value pairs for tagging the endpoint (default: None).
- kms_key (str) – The KMS key that is used to encrypt the data on the storage volume attached to the instance hosting the endpoint.
- wait (bool) – Whether to wait for the endpoint deployment to complete before returning (default: True).
Returns: The name of the created Endpoint.
Return type: str
- expand_role(role)
Expand an IAM role name into an ARN.
If the role is already in the form of an ARN, then the role is simply returned. Otherwise we retrieve the full ARN and return it.
Parameters: role (str) – An AWS IAM role (either name or full ARN).
Returns: The corresponding AWS IAM role ARN.
Return type: str
- get_caller_identity_arn()
Return the ARN of the user or role whose credentials are used to call the API.
Returns: The ARN of the user or role.
Return type: str
- logs_for_job(job_name, wait=False, poll=10)
Display the logs for a given training job, optionally tailing them until the job is complete. If the output is a tty or a Jupyter cell, it will be color-coded based on which instance the log entry is from.
Parameters:
- job_name (str) – Name of the training job to display the logs for.
- wait (bool) – Whether to keep looking for new log entries until the job completes (default: False).
- poll (int) – The interval in seconds between polling for new log entries and job completion (default: 10).
Raises: exceptions.UnexpectedStatusException – If waiting and the training job fails.
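A usage sketch that tails the logs of the placeholder training job started earlier:
>>> sm_session.logs_for_job('my-training-job', wait=True)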
- logs_for_transform_job(job_name, wait=False, poll=10)
Display the logs for a given transform job, optionally tailing them until the job is complete. If the output is a tty or a Jupyter cell, it will be color-coded based on which instance the log entry is from.
Parameters:
- job_name (str) – Name of the transform job to display the logs for.
- wait (bool) – Whether to keep looking for new log entries until the job completes (default: False).
- poll (int) – The interval in seconds between polling for new log entries and job completion (default: 10).
Raises: ValueError – If the transform job fails.
- sagemaker.session.container_def(image, model_data_url=None, env=None)
Create a definition for executing a container as part of a SageMaker model.
Parameters:
- image (str) – Docker image URI to run for this container.
- model_data_url (str) – S3 URI of the model artifacts (default: None).
- env (dict[str, str]) – Environment variables to set inside the container (default: None).
Returns: A complete container definition object usable with the CreateModel API if passed via the PrimaryContainer field.
Return type: dict[str, str]
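A small sketch; the image URI and artifact location are placeholders:
>>> cd = sagemaker.session.container_def(
...     image='123456789012.dkr.ecr.us-west-2.amazonaws.com/my-inference-image:latest',
...     model_data_url='s3://my-bucket/model.tar.gz',
...     env={'MY_VAR': 'value'})
>>> # cd is a dict with keys such as 'Image', 'ModelDataUrl', and 'Environment'.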
- sagemaker.session.pipeline_container_def(models, instance_type=None)
Create a definition for executing a pipeline of containers as part of a SageMaker model.
Parameters:
- models (list[sagemaker.Model]) – The models that make up the inference pipeline, in the order they should be invoked.
- instance_type (str) – The EC2 instance type to deploy the pipeline to (default: None).
Returns: A list of container definition objects usable with the CreateModel API for inference pipelines if passed via the Containers field.
Return type: list[dict[str, str]]
- sagemaker.session.production_variant(model_name, instance_type, initial_instance_count=1, variant_name='AllTraffic', initial_weight=1, accelerator_type=None)
Create a production variant description suitable for use in a ProductionVariant list as part of a CreateEndpointConfig request.
Parameters:
- model_name (str) – The name of the SageMaker model this production variant references.
- instance_type (str) – The EC2 instance type for this production variant. For example, 'ml.c4.8xlarge'.
- initial_instance_count (int) – The initial instance count for this production variant (default: 1).
- variant_name (string) – The VariantName of this production variant (default: 'AllTraffic').
- initial_weight (int) – The relative InitialVariantWeight of this production variant (default: 1).
- accelerator_type (str) – Type of Elastic Inference accelerator for this production variant. For example, 'ml.eia1.medium'. For more information: https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html
Returns: A SageMaker ProductionVariant description.
Return type: dict[str, str]
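A sketch that builds a single production variant for the placeholder model above and deploys it with endpoint_from_production_variants():
>>> pv = sagemaker.session.production_variant(model_name='my-model',
...                                           instance_type='ml.c4.xlarge',
...                                           initial_instance_count=1)
>>> sm_session.endpoint_from_production_variants(name='my-endpoint',
...                                              production_variants=[pv])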
- sagemaker.session.get_execution_role(sagemaker_session=None)
Return the role ARN whose credentials are used to call the API. Throws an exception if the current AWS identity is not a role.
Parameters: sagemaker_session (Session) – Current SageMaker session (default: None).
Returns: The role ARN.
Return type: str
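A usage sketch; on a SageMaker notebook instance this typically resolves to the notebook's execution role, while elsewhere it requires the caller identity to be a role:
>>> role = sagemaker.session.get_execution_role(sagemaker_session=sm_session)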
- class sagemaker.session.ShuffleConfig(seed)
Bases: object
Used to configure channel shuffling using a seed. See the SageMaker documentation for more detail: https://docs.aws.amazon.com/sagemaker/latest/dg/API_ShuffleConfig.html
Create a ShuffleConfig.
Parameters: seed (long) – The long value used to seed the shuffled sequence.
- class sagemaker.session.ModelContainer(model_data, image, env=None)
Bases: object
Amazon SageMaker Model configuration for use in inference pipelines.
- model_data
str – S3 location of the model artifacts.
- image
str – Docker image URI in ECR.
- env
dict[str, str] – Environment variable mapping.
Create a definition of a model which can be part of an Inference Pipeline.
Parameters:
- model_data (str) – S3 location of the model artifacts.
- image (str) – Docker image URI in ECR.
- env (dict[str, str]) – Environment variable mapping (default: None).
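A small sketch; the artifact location and image URI are placeholders:
>>> mc = sagemaker.session.ModelContainer(
...     model_data='s3://my-bucket/model.tar.gz',
...     image='123456789012.dkr.ecr.us-west-2.amazonaws.com/my-inference-image:latest',
...     env={'MY_VAR': 'value'})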