Model

class sagemaker.model.Model(model_data, image, role=None, predictor_cls=None, env=None, name=None, vpc_config=None, sagemaker_session=None)

Bases: object

A SageMaker Model that can be deployed to an Endpoint.

Initialize a SageMaker Model.

Parameters:
- model_data (str) – The S3 location of a SageMaker model data .tar.gz file.
- image (str) – A Docker image URI.
- role (str) – An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. After the endpoint is created, the inference code might use the IAM role if it needs to access some AWS resources. It can be None if this is being used to create a Model to pass to a PipelineModel, which has its own Role field (default: None).
- predictor_cls (callable[string, sagemaker.session.Session]) – A function to call to create a predictor (default: None). If not None, deploy will return the result of invoking this function on the created endpoint name.
- env (dict[str, str]) – Environment variables to run with image when hosted in SageMaker (default: None).
- name (str) – The model name. If None, a default model name will be selected on each deploy.
- vpc_config (dict[str, list[str]]) – The VpcConfig set on the model (default: None):
  - 'Subnets' (list[str]): List of subnet ids.
  - 'SecurityGroupIds' (list[str]): List of security group ids.
- sagemaker_session (sagemaker.session.Session) – A SageMaker Session object, used for SageMaker interactions (default: None). If not specified, one is created using the default AWS configuration chain.
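To illustrate how these constructor arguments fit together, here is a minimal sketch. All resource names (bucket, image URI, role, subnet and security group ids) are hypothetical placeholders, and the construction is wrapped in a function with a deferred import so it only requires the sagemaker SDK when actually called:

```python
def build_example_model():
    # Deferred import: requires the sagemaker SDK (and AWS credentials
    # once the model is deployed or otherwise used).
    from sagemaker.model import Model

    # All resource names below are hypothetical placeholders.
    return Model(
        model_data="s3://my-bucket/my-training-job/output/model.tar.gz",
        image="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest",
        role="MySageMakerRole",
        env={"MODEL_SERVER_TIMEOUT": "60"},
        vpc_config={
            "Subnets": ["subnet-0abc1234"],
            "SecurityGroupIds": ["sg-0def5678"],
        },
    )
```

Because sagemaker_session is omitted, a Session is created from the default AWS configuration chain when one is first needed.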
prepare_container_def(instance_type, accelerator_type=None)

Return a dict created by sagemaker.container_def() for deploying this model to a specified instance type.

Subclasses can override this to provide custom container definitions for deployment to a specific instance type. Called by deploy().

Parameters:
- instance_type (str) – The EC2 instance type to deploy this model to.
- accelerator_type (str) – Type of Elastic Inference accelerator to attach (default: None).

Returns: A container definition object usable with the CreateModel API.
Return type: dict
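As a sketch of what prepare_container_def produces, a container definition is a plain dict in the shape expected by the CreateModel API's ContainerDefinition field. The helper and values below are hand-written stand-ins (not the SDK's own sagemaker.container_def, whose exact signature may differ), just to show the shape:

```python
# A hand-built sketch of a container definition dict, modeled on the
# CreateModel API's ContainerDefinition shape; values are hypothetical.
def container_def(image, model_data_url=None, env=None):
    container = {"Image": image, "Environment": env or {}}
    if model_data_url:
        container["ModelDataUrl"] = model_data_url
    return container

definition = container_def(
    image="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest",
    model_data_url="s3://my-bucket/model.tar.gz",
    env={"SAGEMAKER_REGION": "us-east-1"},
)
```

A subclass overriding prepare_container_def would return a dict of this shape, possibly varying the image or environment by instance_type.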
enable_network_isolation()

Whether to enable network isolation when creating this Model.

Returns: If network isolation should be enabled or not.
Return type: bool
compile(target_instance_family, input_shape, output_path, role, tags=None, job_name=None, compile_max_run=300, framework=None, framework_version=None)

Compile this Model with SageMaker Neo.

Parameters:
- target_instance_family (str) – Identifies the device on which you want to run your model after compilation, for example: ml_c5. Allowed strings are: ml_c5, ml_m5, ml_c4, ml_m4, jetsontx1, jetsontx2, ml_p2, ml_p3, deeplens, rasp3b.
- input_shape (dict) – Specifies the name and shape of the expected inputs for your trained model, in JSON dictionary form, for example: {'data': [1,3,1024,1024]} or {'var1': [1,1,28,28], 'var2': [1,1,28,28]}.
- output_path (str) – Specifies where to store the compiled model.
- role (str) – Execution role.
- tags (list[dict]) – List of tags for labeling a compilation job. For more, see https://docs.aws.amazon.com/sagemaker/latest/dg/API_Tag.html.
- job_name (str) – The name of the compilation job.
- compile_max_run (int) – Timeout in seconds for compilation (default: 5 * 60). After this amount of time, Amazon SageMaker Neo terminates the compilation job regardless of its current status.
- framework (str) – The framework that is used to train the original model. Allowed values: 'mxnet', 'tensorflow', 'pytorch', 'onnx', 'xgboost'.
- framework_version (str) –

Returns: A SageMaker Model object. See Model() for full details.
Return type: Model
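A sketch of a compile call, written as a function over an existing Model so it can be shown without AWS access. The target family, input shape, output path, role, and framework version are all hypothetical placeholders:

```python
def compile_example(model):
    # Requires a Model backed by real artifacts and AWS permissions for
    # SageMaker Neo when actually invoked; values here are hypothetical.
    input_shape = {"data": [1, 3, 1024, 1024]}  # one named input tensor
    return model.compile(
        target_instance_family="ml_c5",
        input_shape=input_shape,
        output_path="s3://my-bucket/compiled/",
        role="MySageMakerRole",
        framework="mxnet",
        framework_version="1.2",
    )
```

Note that compile returns a Model, so the result can be deployed like any other Model.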
deploy(initial_instance_count, instance_type, accelerator_type=None, endpoint_name=None, update_endpoint=False, tags=None, kms_key=None)

Deploy this Model to an Endpoint and optionally return a Predictor.

Create a SageMaker Model and EndpointConfig, and deploy an Endpoint from this Model. If self.predictor_cls is not None, this method returns the result of invoking self.predictor_cls on the created endpoint name.

The name of the created model is accessible in the name field of this Model after deploy returns. The name of the created endpoint is accessible in the endpoint_name field of this Model after deploy returns.

Parameters:
- initial_instance_count (int) – The initial number of instances to run in the Endpoint created from this Model.
- instance_type (str) – The EC2 instance type to deploy this Model to. For example, 'ml.p2.xlarge'.
- accelerator_type (str) – Type of Elastic Inference accelerator to deploy this model for model loading and inference, for example, 'ml.eia1.medium'. If not specified, no Elastic Inference accelerator will be attached to the endpoint. For more information: https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html
- endpoint_name (str) – The name of the endpoint to create (default: None). If not specified, a unique endpoint name will be created.
- update_endpoint (bool) – Flag to update the model in an existing Amazon SageMaker endpoint. If True, this will deploy a new EndpointConfig to an already-existing endpoint and delete resources corresponding to the previous EndpointConfig. If False, a new endpoint will be created (default: False).
- tags (list[dict[str, str]]) – The list of tags to attach to this specific endpoint.
- kms_key (str) – The ARN of the KMS key that is used to encrypt the data on the storage volume attached to the instance hosting the endpoint.

Returns: Invocation of self.predictor_cls on the created endpoint name, if self.predictor_cls is not None. Otherwise, returns None.
Return type: callable[string, sagemaker.session.Session] or None
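A sketch of the deploy call, again as a function over an existing Model; the endpoint name and instance type are hypothetical:

```python
def deploy_example(model):
    # Creates real AWS resources (Model, EndpointConfig, Endpoint) when
    # actually invoked; names here are hypothetical.
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.m4.xlarge",
        endpoint_name="my-endpoint",
    )
    # If the Model was constructed with a predictor_cls, `predictor`
    # wraps the new endpoint; otherwise deploy returns None and the
    # endpoint name is still available as model.endpoint_name.
    return predictor
```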
transformer(instance_count, instance_type, strategy=None, assemble_with=None, output_path=None, output_kms_key=None, accept=None, env=None, max_concurrent_transforms=None, max_payload=None, tags=None, volume_kms_key=None)

Return a Transformer that uses this Model.

Parameters:
- instance_count (int) – Number of EC2 instances to use.
- instance_type (str) – Type of EC2 instance to use, for example, 'ml.c4.xlarge'.
- strategy (str) – The strategy used to decide how to batch records in a single request (default: None). Valid values: 'MULTI_RECORD' and 'SINGLE_RECORD'.
- assemble_with (str) – How the output is assembled (default: None). Valid values: 'Line' or 'None'.
- output_path (str) – S3 location for saving the transform result. If not specified, results are stored to a default bucket.
- output_kms_key (str) – Optional. KMS key ID for encrypting the transform output (default: None).
- accept (str) – The content type accepted by the endpoint deployed during the transform job.
- env (dict) – Environment variables to be set for use during the transform job (default: None).
- max_concurrent_transforms (int) – The maximum number of HTTP requests to be made to each individual transform container at one time.
- max_payload (int) – Maximum size of the payload in a single HTTP request to the container, in MB.
- tags (list[dict]) – List of tags for labeling a transform job. If none are specified, the tags used for the training job are used for the transform job.
- volume_kms_key (str) – Optional. KMS key ID for encrypting the volume attached to the ML compute instance (default: None).
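A sketch of a batch transform driven through this method; the S3 paths are hypothetical, and the transform/wait calls on the returned Transformer are shown as they are commonly chained:

```python
def batch_transform_example(model):
    # Runs a real batch transform job when actually invoked;
    # S3 paths are hypothetical placeholders.
    transformer = model.transformer(
        instance_count=1,
        instance_type="ml.c4.xlarge",
        strategy="MULTI_RECORD",
        assemble_with="Line",
        output_path="s3://my-bucket/transform-output/",
    )
    transformer.transform("s3://my-bucket/transform-input/")
    transformer.wait()  # block until the transform job completes
    return transformer.output_path
```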
delete_model()

Delete an Amazon SageMaker Model.

Raises: ValueError – if the model is not created yet.
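Since delete_model raises ValueError when the model has not been created yet (for example, when deploy was never called), teardown code may want to guard for that case; a minimal sketch:

```python
def cleanup(model):
    # delete_model raises ValueError if the underlying SageMaker Model
    # resource was never created, so guard the teardown step.
    try:
        model.delete_model()
    except ValueError:
        pass  # nothing was created; nothing to delete
```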