prune : prunes the splits where loss < min_split_loss (or gamma) and nodes that have depth greater than max_depth .

In a distributed setting, the implicit updater sequence value would be adjusted to grow_histmaker,prune by default, and you can set tree_method as hist to use grow_histmaker .

refresh_leaf [default=1]

This is a parameter of the refresh updater. When this flag is 1, tree leaves as well as tree nodes' stats are updated. When it is 0, only node stats are updated.
process_type [default= default ]

A type of boosting process to run.

Choices: default , update

default : The normal boosting process which creates new trees.

update : Starts from an existing model and only updates its trees. In each boosting iteration, a tree from the initial model is taken, a specified sequence of updaters is run for that tree, and a modified tree is added to the new model. The new model would have either the same or a smaller number of trees, depending on the number of boosting iterations performed. Currently, the following built-in updaters can be meaningfully used with this process type: refresh , prune . With process_type=update , one cannot use updaters that create new trees.
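As a rough sketch (the synthetic data, parameter values, and variable names below are illustrative, not part of the official documentation), process_type=update can be combined with the refresh updater by passing an existing model back in through xgb.train's xgb_model argument; no new trees are created, the existing trees are only re-fit:

import numpy as np
import xgboost as xgb

# Synthetic data purely for illustration.
X, y = np.random.rand(200, 5), np.random.rand(200)
dtrain = xgb.DMatrix(X, label=y)

# Normal boosting: process_type="default" creates 10 new trees.
base = xgb.train({"max_depth": 3}, dtrain, num_boost_round=10)

# Update pass: re-run the existing trees through the "refresh" updater.
# No new trees are created; node stats (and leaves, since refresh_leaf=1)
# of the 10 existing trees are updated in place.
updated = xgb.train(
    {"process_type": "update", "updater": "refresh", "refresh_leaf": 1},
    dtrain,
    num_boost_round=10,   # at most the number of trees in the initial model
    xgb_model=base,       # start from the existing model
)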
(python/index.html)
grow_policy [default= depthwise ]

Controls the way new nodes are added to the tree.

Currently supported only if tree_method is set to hist or gpu_hist .

Choices: depthwise , lossguide

depthwise : split at nodes closest to the root.

lossguide : split at nodes with the highest loss change.
max_leaves [default=0]

Maximum number of nodes to be added. Only relevant when grow_policy=lossguide is set.

max_bin , [default=256]

Only used if tree_method is set to hist or gpu_hist .

Maximum number of discrete bins to bucket continuous features.

Increasing this number improves the optimality of splits at the cost of higher computation time.
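These histogram-related parameters are typically set together in one params dict. A hedged sketch (the data and parameter values are illustrative, not recommendations):

import numpy as np
import xgboost as xgb

X, y = np.random.rand(500, 10), np.random.randint(0, 2, 500)  # illustrative data
dtrain = xgb.DMatrix(X, label=y)

params = {
    "objective": "binary:logistic",
    "tree_method": "hist",        # grow_policy / max_leaves / max_bin need hist or gpu_hist
    "grow_policy": "lossguide",   # expand nodes with the highest loss change first
    "max_leaves": 64,             # cap on leaves added per tree (0 means no limit)
    "max_bin": 128,               # fewer bins: faster training, coarser split candidates
}
bst = xgb.train(params, dtrain, num_boost_round=50)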

predictor , [default=``auto``]

The type of predictor algorithm to use. Provides the same results but allows the use of GPU or CPU.

auto : Configure predictor based on heuristics.

cpu_predictor : Multicore CPU prediction algorithm.

gpu_predictor : Prediction using GPU. Used when tree_method is gpu_hist . When predictor is set to the default value auto , the gpu_hist tree method is able to provide GPU-based prediction without copying training data to GPU memory. If gpu_predictor is explicitly specified, then all data is copied into GPU memory; this is only recommended for performing prediction tasks.
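For example (illustrative values only), explicitly forcing GPU prediction looks like this; leaving predictor at auto would let the same gpu_hist model predict on GPU without copying the training data:

params = {
    "tree_method": "gpu_hist",
    "predictor": "gpu_predictor",  # copies all data to GPU memory; mainly for prediction-only workloads
}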

num_parallel_tree , [default=1]

Number of parallel trees constructed during each iteration. This option is used to support boosted random forest.
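A common pattern, sketched here with illustrative values, is to emulate a random forest by building many parallel trees in a single boosting round with row and column subsampling:

import numpy as np
import xgboost as xgb

X, y = np.random.rand(500, 10), np.random.rand(500)  # illustrative data
dtrain = xgb.DMatrix(X, label=y)

params = {
    "num_parallel_tree": 100,   # 100 trees built per boosting round
    "subsample": 0.8,           # row subsampling per tree
    "colsample_bynode": 0.8,    # column subsampling per split
    "learning_rate": 1.0,       # no shrinkage when mimicking a random forest
}
bst = xgb.train(params, dtrain, num_boost_round=1)  # one round = a 100-tree forest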

monotone_constraints

Constraint of variable monotonicity. See tutorial for more information.
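As an illustration (assumed two-feature data, not taken from the tutorial), one constraint entry is given per feature; here the model is forced to be increasing in feature 0 and decreasing in feature 1:

import numpy as np
import xgboost as xgb

X, y = np.random.rand(300, 2), np.random.rand(300)  # illustrative two-feature data
dtrain = xgb.DMatrix(X, label=y)

params = {
    "tree_method": "hist",
    # One entry per feature: +1 = increasing, -1 = decreasing, 0 = unconstrained.
    "monotone_constraints": "(1,-1)",
}
bst = xgb.train(params, dtrain, num_boost_round=50)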


interaction_constraints

Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]] , where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information.
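As an illustration (assumed five-feature data), with the native interface the nested list is passed as a string:

import numpy as np
import xgboost as xgb

X, y = np.random.rand(300, 5), np.random.rand(300)  # illustrative five-feature data
dtrain = xgb.DMatrix(X, label=y)

params = {
    "tree_method": "hist",
    # Features 0 and 1 may interact; features 2, 3, 4 may interact; no cross-group interactions.
    "interaction_constraints": "[[0, 1], [2, 3, 4]]",
}
bst = xgb.train(params, dtrain, num_boost_round=50)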
Additional parameters for hist and gpu_hist tree method

single_precision_histogram , [default=``false``]

Use single precision to build histograms instead of double precision.
Additional parameters for gpu_hist tree method

deterministic_histogram , [default=``true``]

Build histogram on GPU deterministically. Histogram building is not deterministic due to the non-associative aspect of floating point summation. We employ a pre-rounding routine to mitigate the issue, which may lead to slightly lower accuracy. Set to false to disable it.
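Both flags are ordinary booster parameters. A speed-oriented gpu_hist configuration (illustrative only, and assuming a CUDA-capable GPU) might look like:

params = {
    "tree_method": "gpu_hist",
    "single_precision_histogram": True,   # faster histograms, slightly less precise
    "deterministic_histogram": False,     # skip pre-rounding; results may differ between runs
}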
Additional parameters for Dart Booster ( booster=dart )

Note

Using predict() with DART booster

If the booster object is DART type, predict() will perform dropouts, i.e. only some of the trees will be evaluated. This will produce incorrect results if data is not the training data. To obtain correct results on test sets, set ntree_limit to a nonzero value, e.g.

preds = bst.predict(dtest, ntree_limit=num_round)


sample_type [default= uniform ]

Type of sampling algorithm.

uniform : dropped trees are selected uniformly.

weighted : dropped trees are selected in proportion to weight.

normalize_type [default= tree ]

Type of normalization algorithm.

tree : new trees have the same weight as each of the dropped trees.

Weight of new trees is 1 / (k + learning_rate) , where k is the number of dropped trees.

Dropped trees are scaled by a factor of k / (k + learning_rate) .

forest : new trees have the same weight as the sum of the dropped trees (forest).

Weight of new trees is 1 / (1 + learning_rate) .

Dropped trees are scaled by a factor of 1 / (1 + learning_rate) .

rate_drop [default=0.0]

Dropout rate (a fraction of previous trees to drop during the dropout).

range: [0.0, 1.0]

one_drop [default=0]

When this flag is enabled, at least one tree is always dropped during the dropout
(allows Binomial-plus-one or epsilon-dropout from the original DART paper).
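Putting the DART parameters together, a hedged end-to-end sketch (synthetic data and illustrative values, not recommendations) that also applies the ntree_limit note from above:

import numpy as np
import xgboost as xgb

X, y = np.random.rand(400, 8), np.random.randint(0, 2, 400)  # illustrative data
dtrain = xgb.DMatrix(X[:300], label=y[:300])
dtest = xgb.DMatrix(X[300:], label=y[300:])

num_round = 50
params = {
    "booster": "dart",
    "objective": "binary:logistic",
    "sample_type": "uniform",
    "normalize_type": "tree",
    "rate_drop": 0.1,   # drop 10% of previous trees each round
    "one_drop": 1,      # always drop at least one tree
}
bst = xgb.train(params, dtrain, num_boost_round=num_round)

# Use all trees at prediction time to avoid dropout on the test set.
preds = bst.predict(dtest, ntree_limit=num_round)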
