Hyperparameter Optimization and Training

We tune 8 hyperparameters: bagging_fraction, lambda_l1, lambda_l2, feature_fraction, max_depth, min_split_gain, min_child_weight, and num_leaves.

Hyperparameter values:

  min_split_gain    1
  min_child_weight  1
  lambda_l1         0
  lambda_l2         0
  num_class         1

[Figure: training workflow — Training Data → LightGBM Training → Bayesian Optimization → Refinement → Optimized Model; Testing Data → Evaluation → Mean AUC Score]

• The hyperparameters are optimized by the Bayesian optimization method, which utilizes a Gaussian process as a surrogate model.
• The proposed model achieves an accuracy of 99.12%, significantly better than Random Forest (90.13%) and XGBoost (83.54%).
• The proposed model's performance is significantly better than the optimized random forest and the optimized XGBoost.
• The reason for the observed results is the proposed model's high-efficiency parallelization, fast speed, high model accuracy, and low FPR and FNR.
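The Bayesian-optimization loop described above (a Gaussian-process surrogate plus an acquisition rule) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the one-dimensional toy objective, the RBF kernel length scale, and the upper-confidence-bound acquisition are all assumptions standing in for the real setup, where each evaluation would train LightGBM with a candidate hyperparameter vector and return a validation score such as mean AUC.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Hypothetical smooth score over one normalized hyperparameter;
    # in practice this would train LightGBM and return mean AUC.
    return -(x - 0.7) ** 2 + 1.0

def rbf_kernel(a, b, length=0.2):
    # Squared-exponential (RBF) kernel between two 1-D sample arrays.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-5):
    # Standard GP regression: posterior mean and std dev at candidate
    # points Xs given the observations (X, y) collected so far.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y
    var = np.ones(len(Xs)) - np.diag(Ks.T @ K_inv @ Ks)
    return mu, np.sqrt(np.maximum(var, 1e-12))

# Start from a few random evaluations, then repeatedly evaluate the
# candidate that maximizes the upper confidence bound (mean + 2 * std).
X = rng.uniform(0.0, 1.0, 3)
y = objective(X)
grid = np.linspace(0.0, 1.0, 201)
for _ in range(10):
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(mu + 2.0 * sigma)]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best_x = X[np.argmax(y)]
print(f"best x = {best_x:.3f}, score = {y.max():.4f}")
```

In practice, a library that implements GP-based Bayesian optimization would search over all 8 hyperparameters jointly and would typically use an acquisition function such as expected improvement rather than this simple UCB rule.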