Hi, thanks for the great package.
I would like to use the smoothing_rounds argument only for a subset of the features in my model; the rest of the features should have smoothing_rounds=0. This is because I know which features are smooth and which features I want to model "un-smoothly".
I looked at the examples here https://interpret.ml/docs/python/examples/interpretable-regression-synthetic.html and here https://interpret.ml/docs/python/examples/custom-interactions.html.
Currently I am merging two models: one where I exclude all the non-smooth features, and one where I exclude all the smooth features (I do this via the exclude argument).
I follow the steps of the "custom interactions" example, adjusting the bin weights / scaling / adding intercepts.
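Roughly, what I am doing looks like the sketch below. The feature lists and the smoothing_rounds value are placeholders, and I pass the first model as init_score when fitting the second, as in the custom-interactions example:

```python
from interpret.glassbox import ExplainableBoostingRegressor

# Placeholder split of the columns of X (a pandas DataFrame) into the
# features I want smoothed vs. the ones I want modelled without smoothing.
smooth_features = ["x1", "x2"]
non_smooth_features = ["x3", "x4"]

# Model 1: boosts only the smooth features, with smoothing enabled.
mod_smooth = ExplainableBoostingRegressor(
    exclude=non_smooth_features,
    smoothing_rounds=200,  # illustrative value
    interactions=0,        # keep the sketch to main effects only
)
mod_smooth.fit(X, y)

# Model 2: boosts only the non-smooth features on the residual signal,
# using the first model as init_score.
mod_non_smooth = ExplainableBoostingRegressor(
    exclude=smooth_features,
    smoothing_rounds=0,
    interactions=0,
)
mod_non_smooth.fit(X, y, init_score=mod_smooth)

# The two models are then merged by hand (adjusting bin weights, scaling,
# and intercepts) following the custom-interactions example.
```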
I don't know the algorithms in detail: would the described model merge be enough, or would one rather need an iterative procedure in which mod_smooth and mod_non_smooth are each trained with one boosting round per iteration, using init_score with the values from the previous iteration? With such a procedure both the smooth and the non-smooth model would be updated continually, rather than one model being fully trained first and the second then being trained on top of it via init_score.
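A rough sketch of the kind of alternating scheme I have in mind is below (same placeholder names as above, and again assuming init_score accepts a fitted model as in the custom-interactions example). Each refit here starts from scratch rather than boosting a single round at a time, so it is only an approximation of the idea using the public API:

```python
from interpret.glassbox import ExplainableBoostingRegressor

n_passes = 5           # placeholder number of outer back-fitting passes
mod_non_smooth = None  # no non-smooth model available on the first pass

for _ in range(n_passes):
    # Refit the smooth model against the current non-smooth model's scores.
    mod_smooth = ExplainableBoostingRegressor(
        exclude=non_smooth_features,
        smoothing_rounds=200,  # illustrative value
        interactions=0,
    )
    mod_smooth.fit(X, y, init_score=mod_non_smooth)

    # Refit the non-smooth model against the updated smooth model's scores.
    mod_non_smooth = ExplainableBoostingRegressor(
        exclude=smooth_features,
        smoothing_rounds=0,
        interactions=0,
    )
    mod_non_smooth.fit(X, y, init_score=mod_smooth)

# The final prediction would combine both models, which again needs the same
# bin-weight / scaling / intercept adjustments as in the merged-model approach.
```

This is closer to back-fitting than to interleaving single boosting rounds, but it is the nearest I could get with the public API; I am unsure whether something like this is necessary or whether the single init_score pass is already equivalent in practice.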