The Data Science Lab

Binary Classification Using a scikit Neural Network

Machine learning with neural networks is sometimes said to be part art and part science. Dr. James McCaffrey of Microsoft Research teaches both with a full-code, step-by-step tutorial.

A binary classification problem is one where the goal is to predict the value of a variable where there are exactly two discrete possibilities. For example, you might want to predict the sex of a person (male = 0, female = 1) based on their age, state where they live, income and political leaning (conservative, moderate, liberal). Note that when there are three or more possible values to predict (for example, predict political leaning), the problem is called multi-class classification, which typically uses different algorithms than binary classification.

Arguably the most powerful binary classification technique is a neural network model. There are several tools and code libraries that you can use to create a neural network classifier. The scikit-learn library (also called scikit or sklearn) is based on the Python language and is one of the most popular.

A good way to see where this article is headed is to take a look at the screenshot in Figure 1. The demo program loads a 200-item set of training data and a 40-item set of test data into memory. Next, the demo creates and trains a neural network model using the MLPClassifier module ("multi-layer perceptron," an old term for a neural network) from the scikit library.

Figure 1: Binary Classification Using a scikit Neural Network

After training, the model is applied to the training data and the test data. The model scores 93 percent accuracy (186 out of 200 correct) on the training data, and 82.50 percent accuracy (33 out of 40 correct) on the test data.

The demo concludes by predicting the sex of a person who is age 30, from Oklahoma, makes $40,000 per year and is a political moderate. The prediction is [[0.9708 0.0292]]. These are pseudo-probabilities, and because the value at index [0] is largest, the predicted sex is class 0 = male.

This article assumes you have intermediate or better skill with a C-family programming language, but doesn't assume you know much about neural networks or the scikit library. The complete source code for the demo program is presented in this article and the accompanying file download. The source code and training and test data are also available online.

Installing the scikit Library
There are several ways to install the scikit library. I recommend installing the Anaconda Python distribution. Anaconda contains the scikit library, a core Python engine, plus more than 500 libraries that are (mostly) compatible with one another. I used Anaconda3-2022.10, which contains Python 3.9.13 and scikit version 1.0.2. The demo code runs on Windows 10 or 11.

Briefly, Anaconda is installed using a Windows self-extracting executable file. The setup process is mostly straightforward and takes about 15 minutes following step-by-step instructions. The instructions can be easily adapted for Anaconda3-2022.10.

More up-to-date versions of Anaconda, Python and the scikit library are available. But because the Python ecosystem has hundreds of libraries, if you install the most recent versions of these libraries, you run a greater risk of library incompatibilities -- a major headache when working with Python.

The Data
The data is artificial. There are 200 training items and 40 test items. The structure of the data looks like this:

 1   0.24   1 0 0   0.2950   0 0 1
 0   0.39   0 0 1   0.5120   0 1 0
 1   0.63   0 1 0   0.7580   1 0 0
 0   0.36   1 0 0   0.4450   0 1 0
 1   0.27   0 1 0   0.2860   0 0 1
. . .

The tab-delimited fields are sex (0 = male, 1 = female), age (divided by 100), state (Michigan = 100, Nebraska = 010, Oklahoma = 001), income (divided by $100,000) and political leaning (conservative = 100, moderate = 010, liberal = 001). For scikit neural network classification, the numeric predictors should all be normalized to approximately the same range -- typically 0.0 to 1.0 or -1.0 to +1.0 -- because normalizing prevents predictors with large magnitudes from overwhelming those with small magnitudes.

For categorical predictor variables, I recommend one-hot encoding. For example, if there were five states instead of just three, the states would be encoded as 10000, 01000, 00100, 00010, 00001. For binary predictor variables, such as is_citizen, you can encode using either zero-one encoding or minus-one-plus-one encoding. In spite of decades of research, there are some topics, such as binary predictor encoding, that are not well understood.
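To make the normalization and encoding concrete, here is a minimal sketch of how a raw record might be converted to the demo format. The encode_person() helper and the raw field values are hypothetical -- the demo data files were prepared in advance:

import numpy as np

def encode_person(age, state, income, politics):
  # hypothetical helper -- illustrates the demo's encoding scheme
  states = { 'michigan':[1,0,0], 'nebraska':[0,1,0],
    'oklahoma':[0,0,1] }
  leanings = { 'conservative':[1,0,0], 'moderate':[0,1,0],
    'liberal':[0,0,1] }
  x = [age / 100.0] + states[state] + \
    [income / 100000.0] + leanings[politics]
  return np.array(x, dtype=np.float32)

print(encode_person(30, 'oklahoma', 40000.00, 'moderate'))
# approximately [0.3  0 0 1  0.4  0 1 0]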

The Demo Program
The complete demo program is presented in Listing 1. Notepad is my preferred code editor but most of my colleagues use one of the many excellent code editors that are available for Python. I indent my Python program using two spaces rather than the more common four spaces.

The program imports the NumPy library, which contains numeric array functionality, and the MLPClassifier module, which contains neural network functionality. Notice the name of the root scikit module is sklearn rather than scikit.

import numpy as np 
from sklearn.neural_network import MLPClassifier
import warnings
warnings.filterwarnings('ignore')  # early-stop warnings

The demo specifies that no Python warnings should be displayed. I do this to keep the output tidy, but in a non-demo scenario you definitely want to see warning messages.

Listing 1: Complete Demo Program

# people_gender_nn_sckit.py

# predict sex (0 = male, 1 = female) 
# from age, state, income, politics

# Anaconda3-2022.10  Python 3.9.13  scikit 1.0.2
# Windows 10/11

import numpy as np 
from sklearn.neural_network import MLPClassifier
import warnings
warnings.filterwarnings('ignore')  # early-stop warnings

# ---------------------------------------------------------

def show_confusion(cm):
  dim = len(cm)
  mx = np.max(cm)             # largest count in cm
  wid = len(str(mx)) + 1      # width to print
  fmt = "%" + str(wid) + "d"  # like "%3d"
  for i in range(dim):
    print("actual   ", end="")
    print("%3d:" % i, end="")
    for j in range(dim):
      print(fmt % cm[i][j], end="")
    print("")
  print("------------")
  print("predicted    ", end="")
  for j in range(dim):
    print(fmt % j, end="")
  print("")

# ---------------------------------------------------------

def main():
  # 0. get ready
  print("\nBegin scikit neural network binary example ")
  print("Predict sex from age, State, income, politics ")
  np.random.seed(1)
  np.set_printoptions(precision=4, suppress=True)

  # 1. load data
  print("\nLoading data into memory ")
  train_file = ".\\Data\\people_train.txt"
  train_xy = np.loadtxt(train_file, usecols=range(0,9),
    delimiter="\t", comments="#", dtype=np.float32) 
  train_x = train_xy[:,1:9]
  train_y = train_xy[:,0].astype(np.int64)

  # load, two calls to loadtxt() technique
  test_file = ".\\Data\\people_test.txt"
  test_x = np.loadtxt(test_file, usecols=range(1,9),
    delimiter="\t", comments="#",  dtype=np.float32)
  test_y = np.loadtxt(test_file, usecols=0,
    delimiter="\t", comments="#",  dtype=np.int64)

  print("\nTraining data:")
  print(train_x[0:4])
  print(". . . \n")
  print(train_y[0:4])
  print(". . . ")

# ---------------------------------------------------------

  # 2. create network 
  # MLPClassifier(hidden_layer_sizes=(100,),
  #  activation='relu', *, solver='adam', alpha=0.0001,
  #  batch_size='auto', learning_rate='constant',
  #  learning_rate_init=0.001, power_t=0.5, max_iter=200,
  #  shuffle=True, random_state=None, tol=0.0001,
  #  verbose=False, warm_start=False, momentum=0.9,
  #  nesterovs_momentum=True, early_stopping=False,
  #  validation_fraction=0.1, beta_1=0.9, beta_2=0.999,
  #  epsilon=1e-08, n_iter_no_change=10, max_fun=15000)

  params = { 'hidden_layer_sizes' : [10,10],
    'activation' : 'tanh',
    'solver' : 'sgd',
    'alpha' : 0.001,
    'batch_size' : 10,
    'random_state' : 0,
    'tol' : 0.0001,
    'nesterovs_momentum' : False,
    'learning_rate' : 'constant',
    'learning_rate_init' : 0.01,
    'max_iter' : 500,
    'shuffle' : True,
    'n_iter_no_change' : 50,
    'verbose' : False }
       
  print("\nCreating 8-(10-10)-1 tanh neural network ")
  net = MLPClassifier(**params)

# ---------------------------------------------------------

  # 3. train
  print("\nTraining with bat sz = " + \
    str(params['batch_size']) + " lrn rate = " + \
    str(params['learning_rate_init']) + " ")
  print("Stop if no change " + \
    str(params['n_iter_no_change']) + " iterations ")
  net.fit(train_x, train_y)
  print("Done ")

# ---------------------------------------------------------

  # 4. evaluate model
  acc_train = net.score(train_x, train_y)
  print("\nAccuracy on train = %0.4f " % acc_train)
  acc_test = net.score(test_x, test_y)
  print("Accuracy on test = %0.4f " % acc_test)

  from sklearn.metrics import confusion_matrix
  y_predicteds = net.predict(test_x)
  cm = confusion_matrix(test_y, y_predicteds)
  print("\nConfusion matrix: \n")
  # print(cm)  # raw
  show_confusion(cm)  # custom formatted

  from sklearn.metrics import precision_score
  from sklearn.metrics import recall_score
  from sklearn.metrics import f1_score
  y_predicteds = net.predict(test_x)
  precision = precision_score(test_y, y_predicteds)
  print("\nPrecision on test = %0.4f " % precision)
  recall = recall_score(test_y, y_predicteds)
  print("Recall on test = %0.4f " % recall)
  f1 = f1_score(test_y, y_predicteds)
  print("F1 score on test = %0.4f " % f1)

# ---------------------------------------------------------

  # 5. use model
  print("\nSetting age = 30  Oklahoma  $40,000  moderate ")
  X = np.array([[0.30, 0,0,1, 0.4000, 0,1,0]],
    dtype=np.float32)

  probs = net.predict_proba(X)
  print("\nPrediction pseudo-probs: ")
  print(probs)

  sex = net.predict(X)
  print("\nPredicted class: ")
  print(sex)  # a vector with a single value
  if sex[0] == 0: print("male")
  elif sex[0] == 1: print("female")

# ---------------------------------------------------------
  
  # 6. TODO: save model using pickle
  print("\nEnd scikit binary neural network demo ")

if __name__ == "__main__":
  main()

All the program logic is contained in a main() function. The demo begins by setting the NumPy random seed:

def main():
  # 0. get ready
  print("Begin scikit neural network binary example ")
  print("Predict sex from age, State, income, politics ")
  np.random.seed(1)
  np.set_printoptions(precision=4, suppress=True)
 . . .

Technically, setting the random seed value isn't necessary, but doing so helps you to get reproducible results in most situations. The set_printoptions() function formats NumPy arrays to four decimals without using scientific notation.

Loading the Training and Test Data
The demo program loads the training data into memory using these statements:

  # 1. load data
  print("Loading data into memory ")
  train_file = ".\\Data\\people_train.txt"
  train_xy = np.loadtxt(train_file, usecols=range(0,9),
    delimiter="\t", comments="#", dtype=np.float32) 
  train_x = train_xy[:,1:9]
  train_y = train_xy[:,0].astype(np.int64)

This code assumes the data files are stored in a directory named Data. There are many ways to load data into memory. I prefer using the NumPy library loadtxt() function, but a common alternative is the Pandas library read_csv() function.

The code reads all 200 lines of training data (columns 0 to 8 inclusive) into a matrix named train_xy and then splits the data into a matrix of predictor values and a vector of target gender values. The colon syntax means "all rows." The target labels are converted from type float32 to int64.
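For reference, here is a sketch of the Pandas read_csv() alternative for loading the training file. It is not used by the demo, and it assumes the train_file variable shown above:

  import pandas as pd  # alternative loading technique

  df = pd.read_csv(train_file, sep="\t", comment="#",
    header=None, dtype=np.float32)
  train_x = df.iloc[:, 1:9].values                 # predictors
  train_y = df.iloc[:, 0].values.astype(np.int64)  # targets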

The 40-item test data is read into memory using an alternate technique that calls loadtxt() twice:

  test_file = ".\\Data\\people_test.txt"
  test_x = np.loadtxt(test_file, usecols=range(1,9),
    delimiter="\t", comments="#",  dtype=np.float32)
  test_y = np.loadtxt(test_file, usecols=0,
    delimiter="\t", comments="#",  dtype=np.int64)

The demo program prints the first four training predictor items and the first four target gender values:

  print("Training data:")
  print(train_x[0:4])
  print(". . . ")
  print(train_y[0:4])
  print(". . . ")

In a non-demo scenario you might want to display all the training data and all the test data to verify the data has been read properly.

Creating the Neural Network Model
Creating the binary classification neural network model is simultaneously simple and complicated. First, the demo program sets up the network parameters in a Python Dictionary object like so:

  # 2. create network 
  params = { 'hidden_layer_sizes' : [10,10],
    'activation' : 'tanh', 'solver' : 'sgd',
    'alpha' : 0.001, 'batch_size' : 10,
    'random_state' : 0, 'tol' : 0.0001,
    'nesterovs_momentum' : False,
    'learning_rate' : 'constant',
    'learning_rate_init' : 0.01,
    'max_iter' : 500, 'shuffle' : True,
    'n_iter_no_change' : 50, 'verbose' : False }

After the parameters are set, they are fed to a neural network constructor:

  print("Creating 8-(10-10)-1 tanh neural network ")
  net = MLPClassifier(**params)

The ** syntax means to unpack the Dictionary values and pass them to the constructor. Like many scikit models, the MLPClassifier class has a lot of parameters and default values. The signature is:

MLPClassifier(hidden_layer_sizes=(100,),
 activation='relu', *, solver='adam', alpha=0.0001,
 batch_size='auto', learning_rate='constant',
 learning_rate_init=0.001, power_t=0.5, max_iter=200,
 shuffle=True, random_state=None, tol=0.0001,
 verbose=False, warm_start=False, momentum=0.9,
 nesterovs_momentum=True, early_stopping=False,
 validation_fraction=0.1, beta_1=0.9, beta_2=0.999,
 epsilon=1e-08, n_iter_no_change=10, max_fun=15000)

When working with scikit, you'll spend most of your time reading the documentation and trying to figure out what each parameter does. The MLPClassifier class is especially complex because many of the parameters interact with each other.

Your first parameter decision is the solver to use for training the network. Your choices are 'adam', 'sgd', or 'lbfgs'. I recommend 'sgd' for most problems, even though 'adam' is the default. The 'adam' solver is essentially a sophisticated version of 'sgd'. The 'lbfgs' solver works in a completely different way from 'adam' and 'sgd'.

Your next parameter decision is the number of hidden layers and the number of processing nodes in each layer. The demo uses two hidden layers with 10 nodes each. More layers and more nodes are not always better, so you must experiment. The default is one hidden layer with 100 nodes.

Your next decision is hidden node activation. Your choices are 'identity', 'logistic', 'tanh' and 'relu'. If you use the 'sgd' solver, I suggest 'tanh' activation. If you use the 'adam' solver, I suggest 'relu' activation. The 'identity' and 'logistic' hidden node activations are rarely used.

Your next set of decisions is related to the training learning rate. The demo uses the 'constant' rate type. Alternatives are 'invscaling' and 'adaptive'. These are very complicated and I don't recommend using them. If you use a 'constant' learning rate type, you specify that rate using the learning_rate_init parameter. This value often has a huge effect on the performance of the resulting neural network model. Typical values to experiment with are 0.001, 0.01, 0.05 and 0.10.
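Because learning_rate_init has such a large effect, a simple sweep is one reasonable way to pick a value. Here is a minimal sketch, assuming the params Dictionary and the training and test data from the demo are in scope (not part of the demo program):

  # sketch: sweep typical learning rates
  for lr in [0.001, 0.01, 0.05, 0.10]:
    p = dict(params)               # copy so demo params are unchanged
    p['learning_rate_init'] = lr
    m = MLPClassifier(**p)
    m.fit(train_x, train_y)
    print("lr = %0.3f  acc = %0.4f " % \
      (lr, m.score(test_x, test_y)))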

Your next decision is the batch_size parameter. The demo uses 10. I recommend a batch size that evenly divides the number of training items so that all batches of training data have the same size. Because the demo has 200 training items and a batch size of 10, each training epoch processes 200 / 10 = 20 batches of 10 items each.

Your next parameter decision is whether or not to use nesterovs_momentum. The default value is True, but I recommend setting it to False. Momentum is an old technique that was designed primarily to speed up training. But in my opinion, the advantage gained by using momentum is usually outweighed by having to experiment with yet another parameter value, the momentum parameter.

Your next parameter decision is the alpha value. The alpha parameter controls what is called L2 regularization. Regularization shrinks the weights and biases of the neural network to keep them from becoming huge, because very large weights and biases tend to cause model overfitting. Overfitting means the model predicts well on the training data, but when presented with new, previously unseen test data, the model predicts poorly. The default value of alpha is 0.0001, but I recommend setting alpha to zero and experimenting with alpha values only if significant overfitting occurs.

Your next parameter decision is max_iter to set the maximum number of training iterations. This is strictly a matter of trial and error. The demo sets the verbose parameter to False, but setting it to True will allow you to monitor training and determine a good value for the max_iter parameter (when the loss value stops changing much).

Your last parameter decisions are n_iter_no_change and tol. The n_iter_no_change parameter specifies that training should stop after a certain number of iterations with no improvement (no decrease in the error/loss value). The tol ("tolerance") parameter specifies exactly how much of a decrease counts as an improvement.

To recap, the MLPClassifier has a large number of interacting parameters. There are essentially an infinite number of parameter value combinations, so you must experiment using trial and error. With each neural network example you encounter, your intuition will grow, and you'll be able to zero in on good parameter values more quickly. This is the reason that machine learning with neural networks is sometimes said to be part art and part science.

Training the Neural Network
After the neural network has been prepared, training is easy:

  # 3. train
  print("Training with bat sz = " + \
    str(params['batch_size']) + " lrn rate = " + \
    str(params['learning_rate_init']) + " ")
  print("Stop if no change " + \
    str(params['n_iter_no_change']) + " iterations ")
  net.fit(train_x, train_y)
  print("Done ")

The backslash character is used for Python line continuation. The fit() method requires a matrix of predictor values and a vector of target labels. There are no optional parameters for fit() so you don't have much to think about -- all the decisions are made when selecting the constructor parameters.
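After fit() returns, the trained model exposes a few diagnostic attributes that can help when tuning max_iter and the stopping parameters. For example, n_iter_ and loss_ are standard MLPClassifier attributes:

  print("iterations performed = %d " % net.n_iter_)
  print("final loss = %0.4f " % net.loss_)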

Evaluating the Trained Model
The demo computes the accuracy of the trained model like so:

  # 4. evaluate model
  acc_train = net.score(train_x, train_y)
  print("Accuracy on train = %0.4f " % acc_train)
  acc_test = net.score(test_x, test_y)
  print("Accuracy on test = %0.4f " % acc_test)

The score() function computes a simple accuracy, which is just the number of correct predictions divided by the total number of predictions. However, for classification problems you usually also want to know the accuracy of the model for each class label. The easiest way to do this is to use the scikit confusion matrix:

  from sklearn.metrics import confusion_matrix
  y_predicteds = net.predict(test_x)
  cm = confusion_matrix(test_y, y_predicteds)
  print("\nConfusion matrix: \n")
  # print(cm)  # raw
  show_confusion(cm)  # custom formatted

For the demo program, the result of displaying a raw confusion matrix is:

[[19  7]
 [ 0 14]]

The raw confusion matrix is a bit difficult to interpret so I usually write a program-defined helper function named show_confusion() to add formatting labels. The output of show_confusion() is:

actual     0: 19  7
actual     1:  0 14
------------
predicted      0  1

The code for show_confusion is in Listing 1. A good model should have roughly similar accuracy values for all class labels. If any class label has a very low accuracy, you need to investigate.
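Per-class accuracy can be computed directly from the confusion matrix by dividing each diagonal count by its row sum. A minimal sketch, using the cm matrix computed above:

  # per-class accuracy = correct count / actual count for each class
  for i in range(len(cm)):
    print("class %d accuracy = %0.4f " % \
      (i, cm[i][i] / np.sum(cm[i])))
  # demo values: class 0 = 19/26 = 0.7308, class 1 = 14/14 = 1.0000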

For binary classification problems, it's standard practice to compute additional measures of accuracy: precision, recall and F1 score. The demo does so using these statements:

  from sklearn.metrics import precision_score
  from sklearn.metrics import recall_score
  from sklearn.metrics import f1_score
  y_predicteds = net.predict(test_x)
  precision = precision_score(test_y, y_predicteds)
  print("Precision on test = %0.4f " % precision)
  recall = recall_score(test_y, y_predicteds)
  print("Recall on test = %0.4f " % recall)
  f1 = f1_score(test_y, y_predicteds)
  print("F1 score on test = %0.4f " % f1)

It's easy to overthink precision and recall. It's best to interpret them as additional accuracy metrics, and to be concerned only when you see a very low value. Precision and recall are somewhat ambiguous because the assignment of class 0 and class 1 to outcomes is usually arbitrary. The F1 score is just the harmonic mean of precision and recall.
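For reference, all three metrics can be derived from the four confusion matrix cells, with class 1 treated as the positive class (the scikit default). A sketch using the demo's values:

  # scikit layout: cm = [[TN, FP], [FN, TP]] with class 1 as positive
  tn, fp, fn, tp = 19, 7, 0, 14   # values from the demo confusion matrix
  precision = tp / (tp + fp)      # 14 / 21 = 0.6667
  recall = tp / (tp + fn)         # 14 / 14 = 1.0000
  f1 = 2 * (precision * recall) / (precision + recall)  # 0.8000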

Using the Trained Model
The demo uses the trained model like so:

  # 5. use model
  print("Setting age = 30 Oklahoma $40,000 moderate ")
  X = np.array([[0.30, 0,0,1, 0.4000, 0,1,0]],
    dtype=np.float32)
  probs = net.predict_proba(X)
  print("Prediction pseudo-probs: ")
  print(probs)

Because the neural network model was trained using normalized and encoded data, the X-input must be normalized and encoded in the same way. Notice the double square brackets on the X-input: the predict_proba() method ("predict probabilities") expects a matrix rather than a vector. The result is a vector of pseudo-probabilities that sum to 1. If the class-to-predict is ordinal encoded, the index of the largest value corresponds to the predicted class.
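You can recover the predicted class from the pseudo-probabilities yourself by finding the index of the largest value:

  # index of the largest pseudo-probability is the predicted class
  pred = np.argmax(probs, axis=1)[0]  # 0 = male, 1 = female
  print("predicted class = %d " % pred)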

The demo concludes by predicting the sex directly by using the predict() method:

  sex = net.predict(X)
  print("Predicted class: ")
  print(sex)  # a vector with a single value
  if sex[0] == 0: print("male")
  elif sex[0] == 1: print("female")

The return result is an array with one value rather than a scalar value because the predict() method accepts a matrix of predictor values instead of a single vector of values.

Saving the Trained Model
The demo doesn't save the trained model. The most common way to save a trained neural network model is to use the Python pickle library ("pickle" means to preserve in English). For example:

  import pickle
  print("Saving binary classifier model ")
  path = ".\\Models\\gender_nn_model.pkl"
  pickle.dump(net, open(path, "wb"))  # net is the trained model

This code assumes there is a directory named Models. The saved model could be loaded and used from another program like so:

  # predict sex for unknown person
  # age = 40, Nebraska, $54,000 conservative
  X = np.array([[0.40, 0,1,0, 0.5400, 1,0,0]],
    dtype=np.float32)
  with open(path, 'rb') as f:
    loaded_model = pickle.load(f)
  pa = loaded_model.predict_proba(X)
  print(pa)  # pseudo-probabilities

There are several other ways to save and load a trained scikit model, but using the pickle library is simplest.
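One of those alternatives is the joblib library, which is often suggested for scikit models because it handles large NumPy arrays efficiently. A minimal sketch, assuming a hypothetical file path:

  import joblib
  path = ".\\Models\\gender_nn_model.joblib"  # hypothetical path
  joblib.dump(net, path)              # save trained model
  loaded_model = joblib.load(path)    # load it back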

Wrapping Up
When using the scikit library for binary classification, the main alternative to the MLPClassifier neural network module is the scikit DecisionTreeClassifier module. Decision trees are useful for relatively small datasets that have a relatively simple underlying structure, and when the trained model must be easily interpretable. Neural networks are useful for large datasets with complex structures, but neural models are not easy to interpret. Because the scikit library is so easy to use, it's common to try both approaches and optionally combine the results.
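For comparison, a minimal decision tree sketch on the same data might look like this; the max_depth value is an arbitrary starting point, not a tuned choice:

  from sklearn.tree import DecisionTreeClassifier
  dt = DecisionTreeClassifier(max_depth=4, random_state=0)
  dt.fit(train_x, train_y)
  print("tree accuracy on test = %0.4f " % \
    dt.score(test_x, test_y))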
