XGBoost Classes for Open Source Version

The Amazon SageMaker XGBoost open source framework algorithm.

class sagemaker.xgboost.estimator.XGBoost(entry_point, framework_version, source_dir=None, hyperparameters=None, py_version='py3', image_name=None, **kwargs)

Bases: sagemaker.estimator.Framework

Handles end-to-end training and deployment of an XGBoost model trained with a customer-provided XGBoost entry point script.

This Estimator executes an XGBoost-based SageMaker Training Job. The managed XGBoost environment is an Amazon-built Docker container that executes functions defined in the supplied entry_point Python script.

Training is started by calling fit() on this Estimator. After training is complete, calling deploy() creates a hosted SageMaker endpoint and returns an XGBoostPredictor instance that can be used to perform inference against the hosted model.

Technical documentation on preparing XGBoost scripts for SageMaker training and using the XGBoost Estimator is available on the project home-page: https://github.com/aws/sagemaker-python-sdk

Parameters:
  • entry_point (str) – Path (absolute or relative) to the Python source file which should be executed as the entry point to training. This should be compatible with either Python 2.7 or Python 3.5.
  • framework_version (str) – XGBoost version you want to use for executing your model training code. List of supported versions https://github.com/aws/sagemaker-python-sdk#xgboost-sagemaker-estimators
  • source_dir (str) – Path (absolute or relative) to a directory with any other training source code dependencies aside from the entry point file (default: None). The structure within this directory is preserved when training on Amazon SageMaker.
  • hyperparameters (dict) – Hyperparameters that will be used for training (default: None). The hyperparameters are made accessible as a dict[str, str] to the training code on SageMaker. For convenience, this accepts other types for keys and values, but str() will be called to convert them before training.
  • py_version (str) – Python version you want to use for executing your model training code (default: ‘py3’). One of ‘py2’ or ‘py3’.
  • image_name (str) –

    If specified, the estimator will use this image for training and hosting instead of selecting the appropriate SageMaker official image based on framework_version and py_version. It can be an ECR URL or a Docker Hub image and tag. Examples:

      123.dkr.ecr.us-west-2.amazonaws.com/my-custom-image:1.0
      custom-image:latest

  • **kwargs – Additional kwargs passed to the Framework constructor.

Tip

You can find additional parameters for initializing this class at Framework and EstimatorBase.
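
Example

The following is a minimal sketch of the fit()/deploy() workflow described above. The entry point script, role ARN, S3 paths, and the train_instance_count/train_instance_type keyword arguments (inherited from EstimatorBase) are placeholder assumptions, not required values.

>>> from sagemaker.xgboost.estimator import XGBoost
>>> xgb_estimator = XGBoost(
...     entry_point="train.py",                 # hypothetical training script
...     framework_version="0.90-2",             # any supported XGBoost version
...     hyperparameters={"num_round": 50},
...     role="arn:aws:iam::123456789012:role/SageMakerRole",
...     train_instance_count=1,
...     train_instance_type="ml.m5.xlarge",
... )
>>> xgb_estimator.fit({"train": "s3://my-bucket/train"})
>>> predictor = xgb_estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")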

create_model(model_server_workers=None, role=None, vpc_config_override='VPC_CONFIG_DEFAULT', **kwargs)

Create a SageMaker XGBoostModel object that can be deployed to an Endpoint.

Parameters:
  • role (str) – The ExecutionRoleArn IAM Role ARN for the Model, which is also used during transform jobs. If not specified, the role from the Estimator will be used.
  • model_server_workers (int) – Optional. The number of worker processes used by the inference server. If None, the server will use one worker per vCPU.
  • vpc_config_override (dict[str, list[str]]) – Optional override for the VpcConfig set on the model. Default: use subnets and security groups from this Estimator.
    • ‘Subnets’ (list[str]): List of subnet IDs.
    • ‘SecurityGroupIds’ (list[str]): List of security group IDs.
  • **kwargs – Passed to initialization of XGBoostModel.
Returns:

A SageMaker XGBoostModel object.

See XGBoostModel() for full details.

Return type:

sagemaker.xgboost.model.XGBoostModel
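
For example, a trained estimator's model object can be reused for batch transform without redeploying an endpoint. This is a hedged sketch; the instance type, input location, and content type are placeholder assumptions.

>>> model = xgb_estimator.create_model(model_server_workers=2)
>>> transformer = model.transformer(instance_count=1, instance_type="ml.m5.xlarge")
>>> transformer.transform("s3://my-bucket/batch-input", content_type="text/csv")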

classmethod attach(training_job_name, sagemaker_session=None, model_channel_name='model')

Attach to an existing training job.

Create an Estimator bound to an existing training job. Each subclass is responsible for implementing _prepare_init_params_from_job_description(), because this method delegates the actual conversion of a training job description to the arguments that the class constructor expects. After attaching, if the training job has a Complete status, it can be deploy()ed to create a SageMaker Endpoint and return a Predictor.

If the training job is in progress, attach will block and display log messages from the training job, until the training job completes.

Examples

>>> my_estimator.fit(wait=False)
>>> training_job_name = my_estimator.latest_training_job.name
Later on:
>>> attached_estimator = Estimator.attach(training_job_name)
>>> attached_estimator.deploy()
Parameters:
  • training_job_name (str) – The name of the training job to attach to.
  • sagemaker_session (sagemaker.session.Session) – Session object which manages interactions with Amazon SageMaker APIs and any other AWS services needed. If not specified, the estimator creates one using the default AWS configuration chain.
  • model_channel_name (str) – Name of the channel where pre-trained model data will be downloaded (default: ‘model’). If no channel with the same name exists in the training job, this option will be ignored.
Returns:

Instance of the calling Estimator Class with the attached training job.

class sagemaker.xgboost.model.XGBoostModel(model_data, role, entry_point, framework_version, image=None, py_version='py3', predictor_cls=<class 'sagemaker.xgboost.model.XGBoostPredictor'>, model_server_workers=None, **kwargs)

Bases: sagemaker.model.FrameworkModel

An XGBoost SageMaker Model that can be deployed to a SageMaker Endpoint.

Initialize an XGBoostModel.

Parameters:
  • model_data (str) – The S3 location of a SageMaker model data .tar.gz file.
  • role (str) – An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. After the endpoint is created, the inference code might use the IAM role, if it needs to access an AWS resource.
  • entry_point (str) – Path (absolute or relative) to the Python source file which should be executed as the entry point to model hosting. This should be compatible with either Python 2.7 or Python 3.5.
  • image (str) – A Docker image URI (default: None). If not specified, a default image for XGBoost will be used.
  • py_version (str) – Python version you want to use for executing your model training code (default: ‘py3’).
  • framework_version (str) – XGBoost version you want to use for executing your model training code.
  • predictor_cls (callable[str, sagemaker.session.Session]) – A function to call to create a predictor with an endpoint name and SageMaker Session. If specified, deploy() returns the result of invoking this function on the created endpoint name.
  • model_server_workers (int) – Optional. The number of worker processes used by the inference server. If None, the server will use one worker per vCPU.
  • **kwargs – Keyword arguments passed to the FrameworkModel initializer.

Tip

You can find additional parameters for initializing this class at FrameworkModel and Model.
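
Example

A minimal sketch of hosting existing XGBoost model artifacts without retraining. The S3 location, role ARN, inference script, and instance settings are placeholder assumptions.

>>> from sagemaker.xgboost.model import XGBoostModel
>>> xgb_model = XGBoostModel(
...     model_data="s3://my-bucket/model/model.tar.gz",
...     role="arn:aws:iam::123456789012:role/SageMakerRole",
...     entry_point="inference.py",             # hypothetical serving script
...     framework_version="0.90-2",
... )
>>> predictor = xgb_model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")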

prepare_container_def(instance_type, accelerator_type=None)

Return a container definition with framework configuration set in model environment variables.

Parameters:
  • instance_type (str) – The EC2 instance type to deploy this Model to. For example, ‘ml.m5.xlarge’.
  • accelerator_type (str) – The Elastic Inference accelerator type to deploy to the instance for loading and making inferences to the model. For example, ‘ml.eia1.medium’. Note: accelerator types are not supported by XGBoostModel.
Returns:

A container definition object usable with the CreateModel API.

Return type:

dict[str, str]
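
For example (a sketch, reusing the hypothetical xgb_model object from above), the returned dictionary can be inspected directly; it typically includes the serving image URI, the model data location, and the environment variables set for serving.

>>> container_def = xgb_model.prepare_container_def("ml.m5.xlarge")
>>> sorted(container_def.keys())   # typically includes 'Environment', 'Image', 'ModelDataUrl'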

serving_image_uri(region_name, instance_type)

Create a URI for the serving image.

Parameters:
  • region_name (str) – AWS region where the image is uploaded.
  • instance_type (str) – SageMaker instance type. Used to determine device type (cpu/gpu/family-specific optimized).
Returns:

The appropriate image URI based on the given parameters.

Return type:

str
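
For example (a sketch, again assuming the hypothetical xgb_model object from above):

>>> xgb_model.serving_image_uri("us-west-2", "ml.m5.xlarge")   # returns an ECR image URI string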

class sagemaker.xgboost.model.XGBoostPredictor(endpoint_name, sagemaker_session=None)

Bases: sagemaker.predictor.RealTimePredictor

A RealTimePredictor for inference against XGBoost Endpoints.

This is able to serialize Python lists, dictionaries, and numpy arrays to xgb.DMatrix for XGBoost inference.

Initialize an XGBoostPredictor.

Parameters:
  • endpoint_name (str) – The name of the endpoint to perform inference on.
  • sagemaker_session (sagemaker.session.Session) – Session object which manages interactions with Amazon SageMaker APIs and any other AWS services needed. If not specified, one is created using the default AWS configuration chain.
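
Example

A minimal sketch of performing inference against an already-deployed XGBoost endpoint. The endpoint name and input values are placeholder assumptions.

>>> import numpy as np
>>> from sagemaker.xgboost.model import XGBoostPredictor
>>> predictor = XGBoostPredictor("my-xgboost-endpoint")
>>> predictor.predict(np.array([[1.0, 2.0, 3.0]]))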