Description
Expected behavior
If the eval metric has is_higher_better set to True, then the objective should be maximized.
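For reference, a LightGBM custom eval function signals this through the third element of the tuple it returns; a minimal sketch with illustrative names (my_metric and my_score are hypothetical):

def my_metric(preds, eval_data):
    value = my_score(preds, eval_data.get_label())  # my_score is a hypothetical scorer
    return 'my_metric', value, True  # True means higher is better, so the study should maximize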
Environment
- Optuna version:
- Optuna Integration version:
- Python version:
- OS:
- (Optional) Other libraries and their versions:
Error messages, stack traces, or logs
def higher_is_better(self) -> bool:
    metric_name = self.lgbm_params.get("metric", "binary_logloss")
    return metric_name in ("auc", "auc_mu", "ndcg", "map", "average_precision")

This code is totally incorrect if someone is using a custom evaluation metric.
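To make the failure concrete, a quick sketch of what that heuristic evaluates to for the custom metric name used in the repro below:

# Sketch: the tuner only looks at params["metric"], never at the feval's is_higher_better flag.
metric_name = {"metric": "correlation"}.get("metric", "binary_logloss")
print(metric_name in ("auc", "auc_mu", "ndcg", "map", "average_precision"))
# prints False, so the tuner treats the metric as lower-is-better and minimizes it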
I only found out about this bug after wasting several hours tuning, when the best parameters that were returned were obviously nonsense.
I then tried explicitly creating a study with direction set to 'maximize' and at least hit a warning message.
Steps to reproduce
1. Set up a study with the direction set to maximize.
2. Create a custom feval callback function.
3. In the params, explicitly set the metric to something other than ("auc", "auc_mu", "ndcg", "map", "average_precision").
4. In the call to optuna_integration.lightgbm.train, set the feval callable to the function created in step 2.
import lightgbm as lgb
import optuna
import optuna_integration.lightgbm as opt_lgb

# Custom eval metric: the third element of the returned tuple is is_higher_better=True.
def score_cb(preds, eval_data):
    score = calculate_score(preds, eval_data.label)  # user-defined scoring function
    return 'score', score, True

lgb_study = optuna.create_study(direction="maximize", study_name="LightGBM Auto Tune")

params = {
    "objective": "regression",
    "metric": "correlation",  # not in the tuner's hard-coded allow-list
    "boosting_type": "gbdt",
}

model = opt_lgb.train(params, dtrain, study=lgb_study, num_boost_round=2000,
    valid_sets=[dtrain, dval], valid_names=["training", "validation"], feval=score_cb,
    callbacks=[opt_lgb.early_stopping(stopping_rounds=30), lgb.log_evaluation(1)])

Additional context (optional)
No response