Hugging Face¶
Hugging Face Estimator¶
- class sagemaker.huggingface.HuggingFace(py_version, entry_point, transformers_version=None, tensorflow_version=None, pytorch_version=None, source_dir=None, hyperparameters=None, image_uri=None, distribution=None, compiler_config=None, **kwargs)¶
Bases: Framework
Handle training of custom HuggingFace code.
This estimator runs a Hugging Face training script in a SageMaker training environment.
The estimator initiates the SageMaker-managed Hugging Face environment by using the pre-built Hugging Face Docker container and runs the Hugging Face training script that the user provides through the entry_point argument. After configuring the estimator class, use the class method fit() to start a training job.
- Parameters
py_version (str) – Python version you want to use for executing your model training code. Defaults to None. Required unless image_uri is provided. If using PyTorch, the current supported version is py36. If using TensorFlow, the current supported version is py37.
entry_point (str or PipelineVariable) – Path (absolute or relative) to the Python source file which should be executed as the entry point to training. If source_dir is specified, then entry_point must point to a file located at the root of source_dir.
transformers_version (str) – Transformers version you want to use for executing your model training code. Defaults to None. Required unless image_uri is provided. The current supported version is 4.6.1.
tensorflow_version (str) – TensorFlow version you want to use for executing your model training code. Defaults to None. Required unless pytorch_version is provided. The current supported version is 2.4.1.
pytorch_version (str) – PyTorch version you want to use for executing your model training code. Defaults to None. Required unless tensorflow_version is provided. The current supported versions are 1.7.1 and 1.6.0.
source_dir (str or PipelineVariable) – Path (absolute, relative, or an S3 URI) to a directory with any other training source code dependencies aside from the entry point file (default: None). If source_dir is an S3 URI, it must point to a tar.gz file. The structure within this directory is preserved when training on Amazon SageMaker.
hyperparameters (dict[str, str] or dict[str, PipelineVariable]) – Hyperparameters that will be used for training (default: None). The hyperparameters are made accessible as a dict[str, str] to the training code on SageMaker. For convenience, this accepts other types for keys and values, but str() will be called to convert them before training.
image_uri (str or PipelineVariable) – If specified, the estimator will use this image for training and hosting, instead of selecting the appropriate SageMaker official image based on framework_version and py_version. It can be an ECR URL or a Docker Hub image and tag. Examples:
123412341234.dkr.ecr.us-west-2.amazonaws.com/my-custom-image:1.0
custom-image:latest
If framework_version or py_version are None, then image_uri is required. If it is also None, then a ValueError will be raised.
distribution (dict) –
A dictionary with information on how to run distributed training (default: None). Currently, the following are supported: distributed training with parameter servers, SageMaker Distributed (SMD) Data and Model Parallelism, and MPI. SMD Model Parallelism can only be used with MPI. To enable parameter server, use the following setup:
{ "parameter_server": { "enabled": True } }
To enable MPI:
{ "mpi": { "enabled": True } }
To enable SMDistributed Data Parallel or Model Parallel:
{ "smdistributed": { "dataparallel": { "enabled": True }, "modelparallel": { "enabled": True, "parameters": {} } } }
To enable PyTorch DDP:
{ "pytorchddp": { "enabled": True } }
To learn more, see Distributed PyTorch Training.
To enable Torch Distributed (available for general distributed training on GPU instances from PyTorch v1.13.1 and later):
{ "torch_distributed": { "enabled": True } }
This option also supports distributed training on Trn1. To learn more, see Distributed PyTorch Training on Trainium.
To enable distributed training with SageMaker Training Compiler for Hugging Face Transformers with PyTorch:
{ "pytorchxla": { "enabled": True } }
To learn more, see SageMaker Training Compiler in the Amazon SageMaker Developer Guide.
Note
When you use this PyTorch XLA option as your distributed training strategy, you must add the compiler_config parameter and activate SageMaker Training Compiler.
compiler_config (TrainingCompilerConfig) – Configures SageMaker Training Compiler to accelerate training.
**kwargs – Additional kwargs passed to the Framework constructor.
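For orientation, the following is a minimal sketch of constructing the estimator and starting a training job; the script name, IAM role, and S3 paths are placeholders, and the versions follow the supported combinations listed above.

from sagemaker.huggingface import HuggingFace

huggingface_estimator = HuggingFace(
    entry_point="train.py",              # placeholder training script
    source_dir="./scripts",              # directory containing train.py
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    role="arn:aws:iam::111122223333:role/MySageMakerRole",  # placeholder role
    transformers_version="4.6.1",
    pytorch_version="1.7.1",
    py_version="py36",
    hyperparameters={"epochs": 3, "train_batch_size": 32},  # forwarded to train.py
)

# Start the training job; the channel name and S3 URI are placeholders.
huggingface_estimator.fit({"train": "s3://my-bucket/train"})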
Tip
You can find additional parameters for initializing this class at Framework and EstimatorBase.
- LAUNCH_PYTORCH_DDP_ENV_NAME = 'sagemaker_pytorch_ddp_enabled'¶
- LAUNCH_TORCH_DISTRIBUTED_ENV_NAME = 'sagemaker_torch_distributed_enabled'¶
- INSTANCE_TYPE_ENV_NAME = 'sagemaker_instance_type'¶
- hyperparameters()¶
Return hyperparameters used by your custom Hugging Face training code during model training.
- create_model(model_server_workers=None, role=None, vpc_config_override='VPC_CONFIG_DEFAULT', entry_point=None, source_dir=None, dependencies=None, **kwargs)¶
Create a SageMaker HuggingFaceModel object that can be deployed to an Endpoint.
- Parameters
model_server_workers (int) – Optional. The number of worker processes used by the inference server. If None, the server will use one worker per vCPU.
role (str) – The ExecutionRoleArn IAM Role ARN for the Model, which is also used during transform jobs. If not specified, the role from the Estimator will be used.
vpc_config_override (dict[str, list[str]]) – Optional override for VpcConfig set on the model. Default: use subnets and security groups from this Estimator.
* ‘Subnets’ (list[str]): List of subnet ids.
* ‘SecurityGroupIds’ (list[str]): List of security group ids.
entry_point (str) – Path (absolute or relative) to the local Python source file which should be executed as the entry point to model hosting. If source_dir is specified, then entry_point must point to a file located at the root of source_dir. Defaults to None.
source_dir (str) – Path (absolute or relative) to a directory with any other serving source code dependencies aside from the entry point file. If not specified, the model source directory from training is used.
dependencies (list[str]) – A list of paths to directories (absolute or relative) with any additional libraries that will be exported to the container. If not specified, the dependencies from training are used. This is not supported with “local code” in Local Mode.
**kwargs – Additional kwargs passed to the HuggingFaceModel constructor.
- Returns
A SageMaker HuggingFaceModel object. See HuggingFaceModel() for full details.
- Return type
HuggingFaceModel
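As a sketch, assuming a completed training job on the estimator from the earlier example:

# Create a deployable HuggingFaceModel from the trained estimator.
model = huggingface_estimator.create_model(
    role="arn:aws:iam::111122223333:role/MySageMakerRole",  # placeholder; defaults to the estimator's role
)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")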
- uploaded_code: Optional[UploadedCode]¶
Hugging Face Training Compiler Configuration¶
- class sagemaker.huggingface.TrainingCompilerConfig(enabled=True, debug=False)¶
Bases: TrainingCompilerConfig
The SageMaker Training Compiler configuration class.
This class initializes a TrainingCompilerConfig instance. Amazon SageMaker Training Compiler is a feature of SageMaker Training that speeds up training jobs by optimizing model execution graphs.
You can compile Hugging Face models by passing the object of this configuration class to the compiler_config parameter of the HuggingFace estimator.
- Parameters
enabled (bool or PipelineVariable) – Optional. Switch to enable SageMaker Training Compiler. The default is True.
debug (bool or PipelineVariable) – Optional. Whether to dump detailed logs for debugging. This comes with a potential performance slowdown. The default is False.
Example: The following code shows the basic usage of the sagemaker.huggingface.TrainingCompilerConfig() class to run a HuggingFace training job with the compiler.

from sagemaker.huggingface import HuggingFace, TrainingCompilerConfig

huggingface_estimator = HuggingFace(
    ...
    compiler_config=TrainingCompilerConfig(),
)
See also
For more information about how to enable SageMaker Training Compiler for various training settings such as using TensorFlow-based models, PyTorch-based models, and distributed training, see Enable SageMaker Training Compiler in the Amazon SageMaker Training Compiler developer guide.
- SUPPORTED_INSTANCE_CLASS_PREFIXES = ['p3', 'p3dn', 'g4dn', 'p4d', 'g5']¶
- SUPPORTED_INSTANCE_TYPES_WITH_EFA = ['ml.g4dn.8xlarge', 'ml.g4dn.12xlarge', 'ml.g5.48xlarge', 'ml.p3dn.24xlarge', 'ml.p4d.24xlarge']¶
- classmethod validate(estimator)¶
Checks if SageMaker Training Compiler is configured correctly.
- Parameters
estimator (sagemaker.huggingface.HuggingFace) – An estimator object. If SageMaker Training Compiler is enabled, this method validates whether the estimator is configured to be compatible with Training Compiler.
- Raises
ValueError – Raised if the requested configuration is not compatible with SageMaker Training Compiler.
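The check can also be invoked directly on a configured estimator; a minimal sketch, reusing the huggingface_estimator from the earlier example:

from sagemaker.huggingface import TrainingCompilerConfig

try:
    # Raises ValueError if the estimator's instance type or distribution
    # settings are incompatible with Training Compiler.
    TrainingCompilerConfig.validate(huggingface_estimator)
except ValueError as err:
    print(f"Incompatible configuration: {err}")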
Hugging Face Model¶
- class sagemaker.huggingface.model.HuggingFaceModel(role=None, model_data=None, entry_point=None, transformers_version=None, tensorflow_version=None, pytorch_version=None, py_version=None, image_uri=None, predictor_cls=<class 'sagemaker.huggingface.model.HuggingFacePredictor'>, model_server_workers=None, **kwargs)¶
Bases: FrameworkModel
A Hugging Face SageMaker Model that can be deployed to a SageMaker Endpoint.
Initialize a HuggingFaceModel.
- Parameters
model_data (str or PipelineVariable) – The Amazon S3 location of a SageMaker model data .tar.gz file.
role (str) – An AWS IAM role specified with either the name or full ARN. The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. After the endpoint is created, the inference code might use the IAM role, if it needs to access an AWS resource.
entry_point (str) – The absolute or relative path to the Python source file that should be executed as the entry point to model hosting. If source_dir is specified, then entry_point must point to a file located at the root of source_dir. Defaults to None.
transformers_version (str) – Transformers version you want to use for executing your inference code. Defaults to None. Required unless image_uri is provided.
tensorflow_version (str) – TensorFlow version you want to use for executing your inference code. Defaults to None. Required unless pytorch_version is provided. The current supported version is 2.4.1.
pytorch_version (str) – PyTorch version you want to use for executing your inference code. Defaults to None. Required unless tensorflow_version is provided. The current supported versions are 1.7.1 and 1.6.0.
py_version (str) – Python version you want to use for executing your inference code. Defaults to None. Required unless image_uri is provided.
image_uri (str or PipelineVariable) – A Docker image URI. Defaults to None. If not specified, a default image for PyTorch will be used. If framework_version or py_version are None, then image_uri is required; if it is also None, a ValueError will be raised.
predictor_cls (callable[str, sagemaker.session.Session]) – A function to call to create a predictor with an endpoint name and SageMaker Session. If specified, deploy() returns the result of invoking this function on the created endpoint name.
model_server_workers (int or PipelineVariable) – Optional. The number of worker processes used by the inference server. If None, the server will use one worker per vCPU.
**kwargs – Keyword arguments passed to the superclass FrameworkModel and, subsequently, its superclass Model.
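A minimal sketch of constructing a model from existing artifacts; the S3 path and IAM role are placeholders.

from sagemaker.huggingface.model import HuggingFaceModel

huggingface_model = HuggingFaceModel(
    model_data="s3://my-bucket/model.tar.gz",               # placeholder artifact location
    role="arn:aws:iam::111122223333:role/MySageMakerRole",  # placeholder role
    transformers_version="4.6.1",
    pytorch_version="1.7.1",
    py_version="py36",
)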
Tip
You can find additional parameters for initializing this class at FrameworkModel and Model.
- deploy(initial_instance_count=None, instance_type=None, serializer=None, deserializer=None, accelerator_type=None, endpoint_name=None, tags=None, kms_key=None, wait=True, data_capture_config=None, async_inference_config=None, serverless_inference_config=None, volume_size=None, model_data_download_timeout=None, container_startup_health_check_timeout=None, inference_recommendation_id=None, explainer_config=None, **kwargs)¶
Deploy this Model to an Endpoint and optionally return a Predictor.
Create a SageMaker Model and EndpointConfig, and deploy an Endpoint from this Model. If self.predictor_cls is not None, this method returns the result of invoking self.predictor_cls on the created endpoint name.
The name of the created model is accessible in the name field of this Model after deploy returns.
The name of the created endpoint is accessible in the endpoint_name field of this Model after deploy returns.
- Parameters
initial_instance_count (int) – The initial number of instances to run in the Endpoint created from this Model. If not using serverless inference, this must be a number greater than or equal to 1 (default: None).
instance_type (str) – The EC2 instance type to deploy this Model to. For example, ‘ml.p2.xlarge’, or ‘local’ for local mode. If not using serverless inference, this is required to deploy the model (default: None).
serializer (BaseSerializer) – A serializer object, used to encode data for an inference endpoint (default: None). If serializer is not None, then serializer will override the default serializer. The default serializer is set by the predictor_cls.
deserializer (BaseDeserializer) – A deserializer object, used to decode data from an inference endpoint (default: None). If deserializer is not None, then deserializer will override the default deserializer. The default deserializer is set by the predictor_cls.
accelerator_type (str) – Type of Elastic Inference accelerator to deploy this model for model loading and inference, for example, ‘ml.eia1.medium’. If not specified, no Elastic Inference accelerator will be attached to the endpoint. For more information: https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html
endpoint_name (str) – The name of the endpoint to create (default: None). If not specified, a unique endpoint name will be created.
tags (Optional[Tags]) – The list of tags to attach to this specific endpoint.
kms_key (str) – The ARN of the KMS key that is used to encrypt the data on the storage volume attached to the instance hosting the endpoint.
wait (bool) – Whether the call should wait until the deployment of this model completes (default: True).
data_capture_config (sagemaker.model_monitor.DataCaptureConfig) – Specifies configuration related to Endpoint data capture for use with Amazon SageMaker Model Monitoring. Default: None.
async_inference_config (sagemaker.async_inference.AsyncInferenceConfig) – Specifies configuration related to an async endpoint. Use this configuration when creating an async endpoint and making asynchronous inference. If an empty config object is passed through, a default config will be used to deploy an async endpoint. If it is None, a real-time endpoint will be deployed (default: None).
serverless_inference_config (sagemaker.serverless.ServerlessInferenceConfig) – Specifies configuration related to a serverless endpoint. Use this configuration when creating a serverless endpoint and making serverless inference. If an empty object is passed through, pre-defined values in the ServerlessInferenceConfig class will be used to deploy a serverless endpoint. If it is None, an instance-based endpoint will be deployed (default: None).
volume_size (int) – The size, in GB, of the ML storage volume attached to the individual inference instance associated with the production variant. Currently only Amazon EBS gp2 storage volumes are supported.
model_data_download_timeout (int) – The timeout value, in seconds, to download and extract model data from Amazon S3 to the individual inference instance associated with this production variant.
container_startup_health_check_timeout (int) – The timeout value, in seconds, for your inference container to pass a health check by SageMaker Hosting. For more information about health checks, see: https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-code.html#your-algorithms-inference-algo-ping-requests
inference_recommendation_id (str) – The ID of a recommendation picked from the results of an Inference Recommender job; the model and endpoint are deployed with the parameters from that recommendation.
explainer_config (sagemaker.explainer.ExplainerConfig) – Specifies online explainability configuration for use with Amazon SageMaker Clarify. (default: None)
- Raises
ValueError – If the argument combination check fails in any of these circumstances: no role is specified; serverless inference config is not specified and instance type and instance count are also not specified; or a wrong type of object is provided as the serverless inference config or async inference config.
- Returns
Invocation of self.predictor_cls on the created endpoint name, if self.predictor_cls is not None; otherwise, None.
- Return type
callable[string, sagemaker.session.Session] or None
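A sketch of the two common calling patterns, instance-based and serverless; all values are placeholders.

# Instance-based, real-time endpoint:
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)

# Serverless endpoint:
from sagemaker.serverless import ServerlessInferenceConfig

predictor = huggingface_model.deploy(
    serverless_inference_config=ServerlessInferenceConfig(
        memory_size_in_mb=4096,
        max_concurrency=10,
    ),
)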
- register(content_types=None, response_types=None, inference_instances=None, transform_instances=None, model_package_name=None, model_package_group_name=None, image_uri=None, model_metrics=None, metadata_properties=None, marketplace_cert=False, approval_status=None, description=None, drift_check_baselines=None, customer_metadata_properties=None, domain=None, sample_payload_url=None, task=None, framework=None, framework_version=None, nearest_model_name=None, data_input_configuration=None, skip_model_validation=None, source_uri=None, model_card=None)¶
Creates a model package for creating SageMaker models or listing on Marketplace.
- Parameters
content_types (list[str] or list[PipelineVariable]) – The supported MIME types for the input data.
response_types (list[str] or list[PipelineVariable]) – The supported MIME types for the output data.
inference_instances (list[str] or list[PipelineVariable]) – A list of the instance types that are used to generate inferences in real-time (default: None).
transform_instances (list[str] or list[PipelineVariable]) – A list of the instance types on which a transformation job can be run or on which an endpoint can be deployed (default: None).
model_package_name (str or PipelineVariable) – Model Package name, exclusive to model_package_group_name; using model_package_name makes the Model Package un-versioned. Defaults to None.
model_package_group_name (str or PipelineVariable) – Model Package Group name, exclusive to model_package_name; using model_package_group_name makes the Model Package versioned. Defaults to None.
image_uri (str or PipelineVariable) – Inference image URI for the container. The Model class’ self.image will be used if it is None. Defaults to None.
model_metrics (ModelMetrics) – ModelMetrics object. Defaults to None.
metadata_properties (MetadataProperties) – MetadataProperties object. Defaults to None.
marketplace_cert (bool) – A boolean value indicating if the Model Package is certified for AWS Marketplace. Defaults to False.
approval_status (str or PipelineVariable) – Model Approval Status; values can be “Approved”, “Rejected”, or “PendingManualApproval”. Defaults to PendingManualApproval.
description (str) – Model Package description. Defaults to None.
.drift_check_baselines (DriftCheckBaselines) – DriftCheckBaselines object (default: None).
customer_metadata_properties (dict[str, str] or dict[str, PipelineVariable]) – A dictionary of key-value paired metadata properties (default: None).
domain (str or PipelineVariable) – Domain values can be “COMPUTER_VISION”, “NATURAL_LANGUAGE_PROCESSING”, “MACHINE_LEARNING” (default: None).
sample_payload_url (str or PipelineVariable) – The S3 path where the sample payload is stored (default: None).
task (str or PipelineVariable) – Task values which are supported by Inference Recommender are “FILL_MASK”, “IMAGE_CLASSIFICATION”, “OBJECT_DETECTION”, “TEXT_GENERATION”, “IMAGE_SEGMENTATION”, “CLASSIFICATION”, “REGRESSION”, “OTHER” (default: None).
framework (str or PipelineVariable) – Machine learning framework of the model package container image (default: None).
framework_version (str or PipelineVariable) – Framework version of the Model Package Container Image (default: None).
nearest_model_name (str or PipelineVariable) – Name of a pre-trained machine learning benchmarked by Amazon SageMaker Inference Recommender (default: None).
data_input_configuration (str or PipelineVariable) – Input object for the model (default: None).
skip_model_validation (str or PipelineVariable) – Indicates if you want to skip model validation. Values can be “All” or “None” (default: None).
source_uri (str or PipelineVariable) – The URI of the source for the model package (default: None).
model_card (ModelCard or ModelPackageModelCard) – A document that contains qualitative and quantitative information about a model (default: None).
- Returns
A sagemaker.model.ModelPackage instance.
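A sketch of registering the model in a model package group; the group name and instance types are placeholders.

model_package = huggingface_model.register(
    content_types=["application/json"],
    response_types=["application/json"],
    inference_instances=["ml.m5.xlarge"],
    transform_instances=["ml.m5.xlarge"],
    model_package_group_name="my-model-group",  # placeholder group name
)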
- prepare_container_def(instance_type=None, accelerator_type=None, serverless_inference_config=None, inference_tool=None, accept_eula=None, model_reference_arn=None)¶
A container definition with framework configuration set in model environment variables.
- Parameters
instance_type (str) – The EC2 instance type to deploy this Model to. For example, ‘ml.p2.xlarge’.
accelerator_type (str) – The Elastic Inference accelerator type to deploy to the instance for loading and making inferences to the model.
serverless_inference_config (sagemaker.serverless.ServerlessInferenceConfig) – Specifies configuration related to a serverless endpoint. The instance type is not provided in serverless inference, so this is used to find image URIs.
inference_tool (str) – The tool that will be used to aid in inference. Valid values: “neuron”, “neuronx”, or None (default: None).
accept_eula (bool) – For models that require a Model Access Config, specify True or False to indicate whether model terms of use have been accepted. The accept_eula value must be explicitly defined as True in order to accept the end-user license agreement (EULA) that some models require. (Default: None).
- Returns
A container definition object usable with the CreateModel API.
- Return type
dict
- serving_image_uri(region_name, instance_type=None, accelerator_type=None, serverless_inference_config=None, inference_tool=None)¶
Create a URI for the serving image.
- Parameters
region_name (str) – AWS region where the image is uploaded.
instance_type (str) – SageMaker instance type. Used to determine device type (cpu/gpu/family-specific optimized).
accelerator_type (str) – The Elastic Inference accelerator type to deploy to the instance for loading and making inferences to the model.
serverless_inference_config (sagemaker.serverless.ServerlessInferenceConfig) – Specifies configuration related to a serverless endpoint. The instance type is not provided in serverless inference, so this is used to determine the device type.
inference_tool (str) – The tool that will be used to aid in inference. Valid values: “neuron”, “neuronx”, or None (default: None).
- Returns
The appropriate image URI based on the given parameters.
- Return type
str
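For example, to resolve the serving image for a given region and instance type (a sketch; values are placeholders):

image_uri = huggingface_model.serving_image_uri(
    region_name="us-west-2",
    instance_type="ml.g4dn.xlarge",
)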
Hugging Face Predictor¶
- class sagemaker.huggingface.model.HuggingFacePredictor(endpoint_name, sagemaker_session=None, serializer=<sagemaker.base_serializers.JSONSerializer object>, deserializer=<sagemaker.base_deserializers.JSONDeserializer object>, component_name=None)¶
Bases: Predictor
A Predictor for inference against Hugging Face Endpoints.
This is able to serialize Python lists, dictionaries, and numpy arrays to multidimensional tensors for Hugging Face inference.
Initialize a HuggingFacePredictor.
- Parameters
endpoint_name (str) – The name of the endpoint to perform inference on.
sagemaker_session (sagemaker.session.Session) – Session object that manages interactions with Amazon SageMaker APIs and any other AWS services needed. If not specified, the estimator creates one using the default AWS configuration chain.
serializer (sagemaker.serializers.BaseSerializer) – Optional. The default serializer (JSONSerializer) serializes input data to JSON format. It handles lists, dictionaries, and numpy arrays.
deserializer (sagemaker.deserializers.BaseDeserializer) – Optional. The default deserializer (JSONDeserializer) parses the JSON response into a Python object.
component_name (str) – Optional. Name of the Amazon SageMaker inference component corresponding to the predictor.
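A sketch of invoking an existing endpoint through the predictor; the endpoint name and payload are placeholders.

from sagemaker.huggingface.model import HuggingFacePredictor

predictor = HuggingFacePredictor(endpoint_name="my-hf-endpoint")  # placeholder endpoint

# The default JSON serializer sends a dict; the Hugging Face Inference
# Toolkit convention is a JSON object with an "inputs" field.
result = predictor.predict({"inputs": "I love using SageMaker."})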
Hugging Face Processor¶
- class sagemaker.huggingface.processing.HuggingFaceProcessor(role=None, instance_count=None, instance_type=None, transformers_version=None, tensorflow_version=None, pytorch_version=None, py_version='py36', image_uri=None, command=None, volume_size_in_gb=30, volume_kms_key=None, output_kms_key=None, code_location=None, max_runtime_in_seconds=None, base_job_name=None, sagemaker_session=None, env=None, tags=None, network_config=None)¶
Bases: FrameworkProcessor
Handles Amazon SageMaker processing tasks for jobs using HuggingFace containers.
This processor executes a Python script in a HuggingFace execution environment.
Unless image_uri is specified, the environment is an Amazon-built Docker container that executes functions defined in the supplied code Python script.
The arguments have the same meaning as in FrameworkProcessor, with the following exceptions.
- Parameters
transformers_version (str) – Transformers version you want to use for executing your processing code. Defaults to None. Required unless image_uri is provided. The current supported version is 4.4.2.
tensorflow_version (str) – TensorFlow version you want to use for executing your processing code. Defaults to None. Required unless pytorch_version is provided. The current supported version is 2.4.1.
pytorch_version (str) – PyTorch version you want to use for executing your processing code. Defaults to None. Required unless tensorflow_version is provided. The current supported version is 1.6.0.
py_version (str) – Python version you want to use for executing your processing code. Defaults to py36. Required unless image_uri is provided. If using PyTorch, the current supported version is py36. If using TensorFlow, the current supported version is py37.
role (Optional[Union[str, PipelineVariable]]) –
instance_count (Union[int, PipelineVariable]) –
instance_type (Union[str, PipelineVariable]) –
image_uri (Optional[Union[str, PipelineVariable]]) –
volume_size_in_gb (Union[int, PipelineVariable]) –
volume_kms_key (Optional[Union[str, PipelineVariable]]) –
output_kms_key (Optional[Union[str, PipelineVariable]]) –
max_runtime_in_seconds (Optional[Union[int, PipelineVariable]]) –
tags (Optional[Union[List[Dict[str, Union[str, PipelineVariable]]], Dict[str, Union[str, PipelineVariable]]]]) –
network_config (Optional[NetworkConfig]) –
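A sketch of running a processing job with this processor; the script, IAM role, and S3 channels are placeholders.

from sagemaker.huggingface.processing import HuggingFaceProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput

hf_processor = HuggingFaceProcessor(
    role="arn:aws:iam::111122223333:role/MySageMakerRole",  # placeholder role
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    transformers_version="4.4.2",
    pytorch_version="1.6.0",
)

hf_processor.run(
    code="preprocess.py",  # placeholder processing script
    inputs=[ProcessingInput(source="s3://my-bucket/raw",
                            destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(source="/opt/ml/processing/output",
                              destination="s3://my-bucket/processed")],
)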
Tip
You can find additional parameters for initializing this class at FrameworkProcessor.
- estimator_cls¶
alias of HuggingFace