The Data Science Lab

### Regression Using LightGBM

Dr. James McCaffrey of Microsoft Research presents a full-code, step-by-step tutorial on this powerful machine learning technique used to predict a single numeric value.

A regression problem is one where the goal is to predict a single numeric value. For example, you might want to predict a person's annual income from their sex, age, state of residence and political leaning. There are many machine learning techniques for regression. One of the most powerful techniques is to use the LightGBM (lightweight gradient boosting machine) system.

LightGBM is a sophisticated, open-source, tree-based system that was introduced in 2017. LightGBM can perform multi-class classification (predict one of three or more possible values), binary classification (predict one of two possible values), regression and ranking.

The best way to see where this article is headed is to take a look at the screenshot of a demo program in Figure 1. LightGBM has three programming language interfaces -- C, Python and R. The demo program uses the Python language API. The demo begins by loading the data to analyze into memory. The data looks like:

```
[ 1. 24.  0.  2.]  | 29500.0
[ 0. 39.  2.  1.]  | 51200.0
[ 1. 63.  1.  0.]  | 75800.0
. . .
```

There are 200 items in the training dataset and 40 items in a test dataset. Each line represents a person. The predictor variables are sex, age, state and political leaning. The value after the "|" is the annual income to predict.

The demo creates and trains a LightGBM regression model. The trained model predicts the training data with 93.5 percent accuracy (187 out of 200 correct) and the test data with 72.5 percent accuracy (29 out of 40 correct). The demo defines a correct income prediction as one that's within 7 percent of the true income.

The demo concludes by predicting the income of a new, previously unseen person who is male, age 35, from Oklahoma, and a political moderate. The predicted income is \$49,466.72.

This article assumes you have intermediate or better programming skill with a C-family language and a basic knowledge of decision tree terminology, but does not assume you know anything about LightGBM. The entire source code for the demo program is presented in this article, and is also available in the accompanying file download. You can also find the source code and data online.

### The Data
The demo program uses a 240-item set of synthetic data. The raw data looks like:

```
F  24  michigan  29500.00  liberal
M  39  oklahoma  51200.00  moderate
M  36  michigan  44500.00  moderate
. . .
```

The fields are sex (M, F), age, state (Michigan, Nebraska, Oklahoma), income and political leaning (conservative, moderate, liberal). When using LightGBM, it's best to encode categorical variables using zero-based ordinal encoding. Unlike many other machine learning regression techniques, LightGBM can use numeric predictor and target variables as-is. You can normalize numeric predictors using min-max, z-score, or divide-by-constant normalization, but because LightGBM is tree-based, normalization does not help LightGBM regression models.

You can encode your data in a preprocessing step, or you can encode programmatically while the data is being loaded into memory. The demo uses preprocessing. The comma-delimited encoded data looks like:

```
1, 24, 0, 29500.00, 2
0, 39, 2, 51200.00, 1
1, 63, 1, 75800.00, 0
0, 36, 0, 44500.00, 1
1, 27, 1, 28600.00, 2
. . .
```

The 240-item encoded data was split into a 200-item set of training data to create a prediction model and a 40-item set of test data to evaluate the model.
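
As mentioned above, you can also encode programmatically while the data is loaded. Here is a minimal sketch of a line-encoding helper; the mapping dictionaries are assumptions that are consistent with the encoded data shown above:

```
# zero-based ordinal encoding sketch; the mapping dictionaries
# are assumptions consistent with the encoded data shown above
sex_map = {"M": 0, "F": 1}
state_map = {"michigan": 0, "nebraska": 1, "oklahoma": 2}
politics_map = {"conservative": 0, "moderate": 1, "liberal": 2}

def encode_line(line):
  sex, age, state, income, politics = line.split()
  return "%d, %s, %d, %s, %d" % (sex_map[sex], age,
    state_map[state], income, politics_map[politics])

print(encode_line("F  24  michigan  29500.00  liberal"))
# output: 1, 24, 0, 29500.00, 2
```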

### Installing Python and LightGBM
To use the Python language API for LightGBM, you must have Python installed on your machine. I strongly recommend using the Anaconda distribution of Python. The Anaconda distribution contains a Python interpreter and roughly 500 Python packages that are (mostly) compatible with each other. The demo uses version Anaconda3-2023.09-0, which contains Python version 3.11.5. To install Anaconda on a Windows platform, go here and find the installer file Anaconda3-2023.09-0-Windows-x86_64.exe (or newer). Note: it is very easy to accidentally download a version that's not compatible with your machine.

Click on the .exe file link to download it to your machine. After the file is on your machine, double-click on the file to start the GUI-based installation process. In most scenarios, you can accept all the default installation values except the option to add Anaconda3 to your machine's PATH environment variable, which is not selected by default. I recommend selecting it so that you don't have to manually edit your system environment variables or enter long paths on the command line.

You can find detailed step-by-step instructions for installing Anaconda Python here.

You can verify your Anaconda Python installation by opening a command shell and typing the command "python" (without quotes). You should see a reply message that indicates the version of Python, followed by the Python triple greater-than prompt. You can type "exit()" to quit the interpreter.

If you ever need to uninstall Anaconda on a Windows machine, you can do so by going to the Add or Remove Programs setting, and clicking on the Uninstall option.

At the time this article was written, the Anaconda distribution does not contain the LightGBM system, and so it must be installed separately. I strongly recommend using the pip installer program (which is included with Anaconda). To install the most recent version of LightGBM over the Internet, open a command shell and type the command "pip install lightgbm." After a few seconds, you should see a message indicating success. To verify, open a command shell and type "python." At the Python prompt, type the command "import lightgbm as L" followed by the command "L.__version__" using double underscores. You should see the version of LightGBM that is installed.
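
The verification session looks something like this (the version number shown here is just an illustration and will depend on what pip installed):

```
>>> import lightgbm as L
>>> L.__version__
'4.1.0'
>>> exit()
```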

If you ever need to uninstall LightGBM, you can do so by typing the command "pip uninstall lightgbm." Instead of installing over the internet, you can also download the LightGBM .whl file and install it locally using pip. I often use this local-install technique so that I have a copy of LightGBM on my machine available even when I'm not connected to the internet.

### The LightGBM Demo Program
The complete demo program is presented in Listing 1. The demo begins by loading the training data into memory:

```
import numpy as np
import lightgbm as lgbm

def main():
  np.random.seed(1)
  train_file = ".\\Data\\people_train.txt"
  test_file = ".\\Data\\people_test.txt"
  . . .
```

The demo does not use the NumPy random number generator directly, but it's good practice to set the generator seed value anyway in case the program is modified to use the RNG.

The demo assumes that the training and test data files are located in a subdirectory named Data. The comma-delimited data is loaded into NumPy arrays using the loadtxt() function. The predictor values in columns 0, 1, 2, 4 are loaded as type float64 and the target income values are loaded from column 3. Lines that begin with "#" are comments and are not loaded.
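
In code, the loading looks something like this minimal sketch, which uses the np.loadtxt() parameters just described (see Listing 1 for context):

```
  # 1. load data
  train_xy = np.loadtxt(train_file, usecols=[0,1,2,3,4],
    delimiter=",", comments="#", dtype=np.float64)
  x_train = train_xy[:,[0,1,2,4]]  # sex, age, State, politics
  y_train = train_xy[:,3]          # income
```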

Listing 1: LightGBM Regression Demo Program

```
# people_income_lgbm.py
# predict income from sex, age, State, politics

import numpy as np
import lightgbm as lgbm

# -----------------------------------------------------------

def accuracy(model, data_x, data_y, pct_close):
  n = len(data_x)
  n_correct = 0; n_wrong = 0
  for i in range(n):
    x = data_x[i].reshape(1, -1)
    y = data_y[i]  # true income
    pred = model.predict(x)  # predicted income []
    if np.abs(pred[0] - y) < np.abs(pct_close * y):
      n_correct += 1
    else:
      n_wrong += 1
  return (n_correct * 1.0) / (n_correct + n_wrong)

# -----------------------------------------------------------

def accuracy_matrix(model, data_x, data_y,
  pct_close, points):
  n_intervals = len(points) - 1
  result = np.zeros((n_intervals,2), dtype=np.int64)
  # n_corrects in col [0], n_wrongs in col [1]
  for i in range(len(data_x)):
    x = data_x[i].reshape(1, -1)
    y = data_y[i]                  # true income
    pred = model.predict(x)        # predicted income []

    interval = 0
    for k in range(n_intervals):   # find income interval
      if y >= points[k] and y < points[k+1]:
        interval = k; break

    if np.abs(pred[0] - y) < np.abs(pct_close * y):
      result[interval][0] += 1
    else:
      result[interval][1] += 1
  return result

# -----------------------------------------------------------

def show_acc_matrix(am, points):
  h = "from      to         correct  wrong   count    accuracy"
  print("  " + h)
  for i in range(len(am)):
    print("%10.2f" % points[i], end="")
    print("%10.2f" % points[i+1], end="")
    print("%8d" % am[i][0], end="")
    print("%8d" % am[i][1], end="")
    count = am[i][0] + am[i][1]
    print("%8d" % count, end="")
    if count == 0:
      acc = 0.0
    else:
      acc = am[i][0] / count
    print("%12.4f" % acc)

# -----------------------------------------------------------

def main():
  # 0. get started
  print("\nBegin People predict income using LightGBM ")
  print("Predict income from sex, age, State, politics ")
  np.random.seed(1)

  # 1. load data
  # sex, age, State, income, politics
  #  0    1     2       3       4
  train_file = ".\\Data\\people_train.txt"
  test_file = ".\\Data\\people_test.txt"

  train_xy = np.loadtxt(train_file, usecols=[0,1,2,3,4],
    delimiter=",", comments="#", dtype=np.float64)
  x_train = train_xy[:,[0,1,2,4]]  # sex, age, State, politics
  y_train = train_xy[:,3]          # income

  test_xy = np.loadtxt(test_file, usecols=[0,1,2,3,4],
    delimiter=",", comments="#", dtype=np.float64)
  x_test = test_xy[:,[0,1,2,4]]
  y_test = test_xy[:,3]

  np.set_printoptions(precision=0, suppress=True)
  print("\nFirst few train data: ")
  for i in range(3):
    print(x_train[i], end="")
    print("  | " + str(y_train[i]))
  print(". . . ")

  # 2. create and train model
  print("\nCreating and training LightGBM regression model ")
  params = {
    'objective': 'regression',  # not needed
    'boosting_type': 'gbdt',  # default
    'num_leaves': 31,  # default
    'learning_rate': 0.05,  # default = 0.10
    'feature_fraction': 1.0,  # default
    'min_data_in_leaf': 2,  # default = 20
    'random_state': 0,
    'verbosity': -1
  }
  model = lgbm.LGBMRegressor(**params)  # scikit API
  model.fit(x_train, y_train)
  print("Done ")

  # 3. evaluate model
  print("\nEvaluating model accuracy (within 0.07) ")
  acc_train = accuracy(model, x_train, y_train, 0.07)
  print("accuracy on train data = %0.4f " % acc_train)
  acc_test = accuracy(model, x_test, y_test, 0.07)
  print("accuracy on test data = %0.4f " % acc_test)

  inc_pts = \
    [0.00, 25000.00, 50000.00, 75000.00, 100000.00]
  am_train = \
    accuracy_matrix(model, x_train, y_train, 0.07, inc_pts)
  print("\nAccuracy on training data (within 0.07 of true):")
  show_acc_matrix(am_train, inc_pts)

  am_test = \
    accuracy_matrix(model, x_test, y_test, 0.07, inc_pts)
  print("\nAccuracy on test data (within 0.07 of true):")
  show_acc_matrix(am_test, inc_pts)

  # 4. use model
  print("\nPredicting income for M 35 Oklahoma moderate ")
  x = np.array([[0, 35, 2, 1]], dtype=np.float64)
  y_pred = model.predict(x)
  print("\nPredicted income = %0.2f " % y_pred[0])

  print("\nEnd demo ")

# -----------------------------------------------------------

if __name__ == "__main__":
  main()
```

The test data is loaded into memory as arrays x_test and y_test in the same way as the training data. Next, the demo displays the first three lines of the training data as a sanity check:

```
  np.set_printoptions(precision=0, suppress=True)
  print("First few train data: ")
  for i in range(3):
    print(x_train[i], end="")
    print("  | " + str(y_train[i]))
  print(". . . ")
```

In a non-demo scenario, you might want to display all the data.

### Creating and Training the LightGBM Regression Model
The demo program creates and trains a LightGBM regression model using these statements:

```
  # 2. create and train model
  print("Creating and training LightGBM regression model ")
  params = {
    'objective': 'regression',  # not needed
    'boosting_type': 'gbdt',  # default
    'num_leaves': 31,  # default
    'learning_rate': 0.05,  # default = 0.10
    'feature_fraction': 1.0,  # default
    'min_data_in_leaf': 2,  # default = 20
    'random_state': 0,
    'verbosity': -1
  }
  model = lgbm.LGBMRegressor(**params)
  model.fit(x_train, y_train)
```

The regression object is named model and is instantiated by setting up its parameters as a Python Dictionary collection named params. The main challenge when using LightGBM is wading through the dozens of parameters. The LGBMRegressor class/object has 19 parameters (num_leaves, max_depth and so on) and behind the scenes there are 57 Learning Control Parameters (min_data_in_leaf, bagging_fraction and so on), for a total of 76 parameters to deal with.
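
One way to see the scikit-style parameters and their current values is to iterate over the result of the get_params() method, which LGBMRegressor inherits from the scikit-learn base estimator. A quick inspection sketch:

```
import lightgbm as lgbm

model = lgbm.LGBMRegressor()  # all default parameter values
for name, val in model.get_params().items():
  print(name, "=", val)  # for example, num_leaves = 31
```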

Documentation for the parameters can be found here and here.

Because the number of parameters is too large to manage directly, you must rely on the default values and then try to find the handful of parameters that will create a good model. Based on my experience, the three most important parameters to explore and modify are n_estimators, min_data_in_leaf and learning_rate.

A LightGBM regression model is made up of n_estimators (default value is 100) relatively small decision trees that are called weak learners, or sometimes base learners. The weak trees are constructed sequentially where each tree uses gradients of the error from the previous tree. If the value of n_estimators is too small, then there aren't enough weak learners to create a model that predicts well (underfit). If the value of n_estimators is too large, then the model will overfit the training data and predict poorly on new, previously unseen data items.

The num_leaves parameter controls the overall size of the weak learner trees. The default value of 31 roughly corresponds to a balanced binary tree with five levels of splits, which would have 2^5 = 32 leaf nodes. An unbalanced tree with the same number of leaves might have more levels. Weak learners that are too small might underfit; weak learners that are too large might overfit.

The max_depth parameter controls the number of levels that each weak learner has. The default value is -1 which means that there is no explicit limit. In most cases, the num_leaves parameter will prevent the depth of the weak learners from becoming too large.

The min_data_in_leaf parameter controls the size of the leaf nodes in the weak learners. The default value of 20 means that each leaf node must have at least 20 associated data items. For a relatively small set of training data, the default greatly reduces the number of leaf nodes. For the demo with 200 training items, there would be a maximum of 200 / 20 = 10 leaf nodes, which would likely underfit the model and lead to poor prediction accuracy. The demo modifies the value of min_data_in_leaf from 20 to 2, which gave much better results.

To recap, the n_estimators parameter controls the overall number of weak tree learners. The key parameters to control the size and shape of the weak learners are num_leaves, max_depth and min_data_in_leaf. Based on my experience, I typically experiment with n_estimators (the default value of 100 is often too large for small datasets) and min_data_in_leaf (the default of 20 is often too large for small datasets). I usually leave the num_leaves and max_depth parameter values at their default values of 31 and -1 (unlimited) respectively unless the model just doesn't predict well.
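
A simple way to explore these key parameters is a small manual grid search. The following sketch is not part of the demo; it assumes the accuracy() function and data arrays defined in Listing 1, and in a rigorous scenario you'd measure accuracy on a held-out validation set rather than the test data:

```
# hypothetical parameter exploration -- not part of the demo
for n_est in [50, 100, 200]:
  for min_leaf in [2, 5, 10, 20]:
    params = { 'n_estimators': n_est,
      'min_data_in_leaf': min_leaf,
      'random_state': 0, 'verbosity': -1 }
    model = lgbm.LGBMRegressor(**params)
    model.fit(x_train, y_train)
    acc = accuracy(model, x_test, y_test, 0.07)
    print("n_est = %4d  min_leaf = %3d  acc = %0.4f" % \
      (n_est, min_leaf, acc))
```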

The demo modifies the learning_rate parameter from the default value of 0.10 to 0.05. The learning rate controls how much each weak learner tree changes from the previous learner. The effect of changing the learning_rate can vary quite a bit depending on the size and shape of the weak learners, but as a rule of thumb, smaller values work better for smaller datasets.

The demo modifies the value of the random_state parameter from its default value of None (Python's version of null) to 0. The None value means that results are not reproducible due to the random initialization component of the training process. Any value other than None will give (mostly) reproducible results, subject to multi-threading issues.

The demo modifies the value of the verbosity parameter from its default value of 1 to -1. The default value of 1 prints warning messages, regular error messages and fatal error messages. The demo value of -1 prints only fatal error messages. I did this only to keep the output small so I could take a screenshot. In a non-demo scenario you should leave the verbosity value at 1 in most situations.

After setting up the parameter values in a Dictionary collection, they are passed to the LGBMRegressor using the Python ** syntax which means unpack the values to parameters. Parameter values can be passed directly, for example model = lgbm.LGBMRegressor(n_estimators = 50, learning_rate = 0.05 and so on), but because there are so many parameters, this approach is rarely used.
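
For example, these two instantiations are equivalent:

```
# the ** syntax unpacks the dictionary entries into
# named arguments
params = { 'n_estimators': 50, 'learning_rate': 0.05 }
m1 = lgbm.LGBMRegressor(**params)
m2 = lgbm.LGBMRegressor(n_estimators=50, learning_rate=0.05)
```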

The model is trained using the fit() method. This is almost too easy -- all the work is done when setting up the parameters.

### Evaluating the Model
It's possible to evaluate a trained LightGBM regression model in several ways. The most basic approach is to compute prediction accuracy (the number of correct predictions divided by the total number of predictions) on the training and test data. The demo program defines an accuracy() function where the key calling statements are:

```
acc_train = accuracy(model, x_train, y_train, 0.07)
print("accuracy on train data = %0.4f " % acc_train)
acc_test = accuracy(model, x_test, y_test, 0.07)
print("accuracy on test data = %0.4f " % acc_test)
```

The output of the simple accuracy() function is:

```
accuracy on train data = 0.9350
accuracy on test data = 0.7250
```

Recall that the demo accuracy() function defines a correct income prediction as one that's within 7 percent of the true income. For example, if the true income is \$50,000, any predicted income between \$46,500 and \$53,500 counts as correct.

In many scenarios, a simple accuracy metric of the trained model computed on the training and test data is good enough. But in some scenarios, it's better to compute the accuracy of the trained model for various intervals of the dependent/target variable. The demo program defines an accuracy_matrix() function to compute accuracy for different intervals of target income, and a show_acc_matrix() to display a computed accuracy matrix.

The calling code for evaluating the training data is:

```
inc_pts = [0.00, 25000.00, 50000.00, 75000.00, 100000.00]
am_train = \
  accuracy_matrix(model, x_train, y_train, 0.07, inc_pts)
print("Accuracy on training data (within 0.07 of true):")
show_acc_matrix(am_train, inc_pts)
```

The inc_pts ("income points") list defines four income intervals: \$0 to \$25,000, \$25,000 to \$50,000, \$50,000 to \$75,000, and \$75,000 to \$100,000. The output is:

```
Accuracy on training data (within 0.07 of true):
  from      to         correct  wrong   count    accuracy
      0.00  25000.00       4       0       4      1.0000
  25000.00  50000.00      74      10      84      0.8810
  50000.00  75000.00      99       3     102      0.9706
  75000.00 100000.00      10       0      10      1.0000
```

The test data is evaluated similarly. The output shows that the model is quite accurate when predicting incomes that are greater than \$50,000 but not as accurate when predicting smaller incomes.

### Using and Saving the LightGBM Regression Model
Using a trained LightGBM regression model is simple, subject to two minor syntax details. Example calling code is:

```
# male, age 35, Oklahoma, moderate
x = np.array([[0, 35, 2, 1]], dtype=np.float64)
y_pred = model.predict(x)
print("Predicted income = %0.2f " % y_pred[0])
```

Notice that the input x values have double square brackets to make the input a 2D matrix, which the LightGBM model predict() method requires. Alternatively, you can declare a 1D vector then reshape it to 2D:

```
x = np.array([0, 35, 2, 1], dtype=np.float64)  # 1D
x = x.reshape(1, -1)  # 1 row, n cols 2D
pred = model.predict(x)
```

The return value from the predict() method is an array rather than a scalar value. So, when the input is a single data item and you want just the single predicted income, you can access the value at index [0] like so:

```
y_pred = model.predict(x)
print("Predicted income = %0.2f " % y_pred[0])
```

Alternatively:

```
y_pred = model.predict(x)  # array
y_pred = y_pred[0]         # scalar
print("Predicted income = %0.2f " % y_pred)
```

The demo program does not save the trained LightGBM regression model. If you want to save a model you can do so in binary format using the Python pickle library. (In ordinary English, the word "pickle" means to preserve). The calling code would be:

```
import pickle
print("Saving model ")
pth = ".\\Models\\regression_model.pkl"
with open(pth, "wb") as f:
  pickle.dump(model, f)
```

The code assumes the existence of a subdirectory named Models. The "wb" argument means "write to file as binary." The "pkl" extension is common but any extension name can be used.

A LightGBM model saved using pickle can be loaded into memory from another program and used like so:

```
import pickle
pth = ".\\Models\\regression_model.pkl"
with open(pth, "rb") as f:
  model2 = pickle.load(f)

x = np.array([[0, 35, 2, 1]], dtype=np.float64)
pred = model2.predict(x)
```

There are other ways to save a trained LightGBM model, but the pickle approach is the easiest and the most common.
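
For example, one alternative is to save the underlying Booster object in LightGBM's native text format. This is a sketch; the file path is an assumption:

```
# save the underlying Booster in native text format
# (the Models directory and file name are assumptions)
model.booster_.save_model(".\\Models\\regression_model.txt")

# later, possibly from a different program:
booster = lgbm.Booster(model_file=".\\Models\\regression_model.txt")
x = np.array([[0, 35, 2, 1]], dtype=np.float64)
pred = booster.predict(x)  # array of predicted incomes
```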

### Wrapping Up
The LightGBM system was inspired by the XGBoost (extreme gradient boosting) system, which in turn was inspired by earlier tree boosting algorithms. The "boosting" term of the LightGBM name refers to the technique of combining several weak learners into one strong learning model. The "gradient" term refers to the technique of using the Calculus gradient of the error of a weak learner to construct the next weak learner in the model sequence. The "machine" term is an old way to indicate that a system is a machine learning one rather than a classical statistics one.

Arguably, the two most powerful techniques for regression on non-trivial datasets are neural networks and tree boosting. In several recent regression problem contests, LightGBM entries did very well. This may be due, in part, to the fact that LightGBM can be used out-of-the-box, which leaves a lot of time for hyperparameter fine-tuning. Creating a neural network regression model requires significantly more background knowledge and effort.
