Using GPU for LightGBM processing

Hi,

I have been running the EO-learn Land Cover Map Jupyter Notebook, which uses the LightGBM algorithm to create the land cover map. Since I have access to a machine with a dedicated GPU, I thought it would be interesting to run a test of this algorithm, first using the CPU only and then with the GPU enabled. After some tweaking of the algorithm following the GPU tuning guide (https://lightgbm.readthedocs.io/en/latest/GPU-Performance.html), I expected a considerable speed-up with the GPU. However, not much changed (4.41 min wall time for the GPU vs. 6.46 min wall time for the CPU); I was expecting more like a 2-3x speed-up.
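
To give an idea of the kind of setup I mean, here is a minimal sketch along the lines of what the GPU-Performance guide suggests. It is not the exact notebook code: X_train/y_train are placeholder arrays rather than the notebook's sampled features, the parameter values are just illustrative, and the GPU run assumes a LightGBM build compiled with GPU (OpenCL) support.

```python
import time

import lightgbm as lgb
import numpy as np

# Placeholder data; in the notebook the features come from the sampled EOPatches.
rng = np.random.default_rng(42)
X_train = rng.random((100_000, 20), dtype=np.float32)
y_train = rng.integers(0, 10, size=100_000)

# CPU baseline with default settings.
cpu_model = lgb.LGBMClassifier(n_estimators=100)

# GPU variant, roughly following the GPU-Performance tuning hints:
# a smaller max_bin and single precision tend to help on consumer GPUs.
# This only works if LightGBM was built with GPU (OpenCL) support.
gpu_model = lgb.LGBMClassifier(
    n_estimators=100,
    device='gpu',
    gpu_platform_id=0,   # OpenCL platform of the dedicated GPU
    gpu_device_id=0,
    max_bin=63,
    gpu_use_dp=False,
)

# Time both fits for a rough CPU vs. GPU comparison.
for name, model in [('CPU', cpu_model), ('GPU', gpu_model)]:
    start = time.perf_counter()
    model.fit(X_train, y_train)
    print(f'{name} fit: {time.perf_counter() - start:.1f} s')
```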

As I am not an expert on GPU tuning, or on IT hardware in general, I am wondering whether someone on this forum has experience with using GPUs for classification, or could tell me whether what I did actually makes sense.