The Data Science Lab

Multi-Class Classification Using PyTorch: Preparing Data

Dr. James McCaffrey of Microsoft Research kicks off a four-part series on multi-class classification, designed to predict a value that can be one of three or more possible discrete values.

The goal of a multi-class classification problem is to predict a value that can be one of three or more possible discrete values, such as "red," "yellow" or "green" for a traffic signal. This article is the first in a series of four articles that present a complete end-to-end production-quality example of multi-class classification using a PyTorch neural network. The example problem is to predict a college student's major ("finance," "geology" or "history") from their sex, number of units completed, home state and score on an admission test.

The process of creating a PyTorch neural network multi-class classifier consists of six steps:

  1. Prepare the training and test data
  2. Implement a Dataset object to serve up the data
  3. Design and implement a neural network
  4. Write code to train the network
  5. Write code to evaluate the model (the trained network)
  6. Write code to save and use the model to make predictions for new, previously unseen data

Each of the six steps is fairly complicated, and the six steps are tightly coupled, which adds to the difficulty. This article covers the first two steps.

A good way to see where this series of articles is headed is to take a look at the screenshot of the demo program in Figure 1. The demo begins by creating Dataset and DataLoader objects which have been designed to work with the student data. Next, the demo creates a 6-(10-10)-3 deep neural network. The demo prepares training by setting up a loss function (cross entropy), a training optimizer function (stochastic gradient descent) and parameters for training (learning rate and max epochs).
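To give a rough preview of where the series is going, the sketch below shows approximately what a 6-(10-10)-3 network definition and the training setup could look like, assuming the torch-as-T alias and device object shown later in Listing 1. This is an illustrative outline only; the tanh activation and the learning rate shown here are assumptions, and the demo's actual definitions are presented in the later articles of this series.

class Net(T.nn.Module):
  # illustrative 6-(10-10)-3 architecture: 6 input features, two
  # hidden layers of 10 nodes each, 3 output nodes (one per major)
  def __init__(self):
    super(Net, self).__init__()
    self.hid1 = T.nn.Linear(6, 10)
    self.hid2 = T.nn.Linear(10, 10)
    self.oupt = T.nn.Linear(10, 3)

  def forward(self, x):
    z = T.tanh(self.hid1(x))
    z = T.tanh(self.hid2(z))
    z = self.oupt(z)  # raw output values; CrossEntropyLoss() handles softmax internally
    return z

net = Net().to(device)
loss_func = T.nn.CrossEntropyLoss()                  # expects ordinal-encoded targets
optimizer = T.optim.SGD(net.parameters(), lr=0.01)   # illustrative learning rate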

Figure 1: Multi-Class Classification in Action

The demo trains the neural network for 1,000 epochs in batches of 10 items. An epoch is one complete pass through the training data. The training data has 200 items and the test data has 40 items. Therefore, one training epoch consists of processing 20 batches of 10 training items.

During training, the demo computes and displays a measure of the current error (also called loss) every 100 epochs. Because error slowly decreases, it appears that training is succeeding. Behind the scenes, the demo program saves checkpoint information after every 100 epochs so that if the training machine crashes, training can be resumed without having to start from the beginning.

After training the network, the demo program computes the classification accuracy of the model on the training data (163 out of 200 correct = 81.50 percent) and on the test data (31 out of 40 correct = 77.50 percent). Because the two accuracy values are similar, it is likely that model overfitting has not occurred. After evaluating the trained model, the demo program saves the model using the state dictionary approach, which is the most common of three standard techniques.

The demo concludes by using the trained model to make a prediction. The raw input is (sex = "M", units = 30.5, state = "oklahoma", score = 543). The raw input is normalized and encoded as (sex = -1, units = 0.305, state = 0, 0, 1, score = 0.543). The computed output vector is [0.7104, 0.2849, 0.0047]. These values represent the pseudo-probabilities of student majors "finance," "geology" and "history" respectively. Because the probability associated with "finance" is the largest, the predicted major is "finance."
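The final decision step is just an argmax over the three pseudo-probabilities. The following few lines (not part of the demo program) show the idea using the output values listed above:

import numpy as np

majors = ["finance", "geology", "history"]
probs = np.array([0.7104, 0.2849, 0.0047], dtype=np.float32)
idx = np.argmax(probs)   # index of the largest pseudo-probability
print(majors[idx])       # finance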

This article assumes you have an intermediate or better familiarity with a C-family programming language, preferably Python, but doesn't assume you know very much about PyTorch. The complete source code for the demo program, and the two data files used, are available in the download that accompanies this article. All normal error checking code has been omitted to keep the main ideas as clear as possible.

To run the demo program, you must have Python and PyTorch installed on your machine. The demo programs were developed on Windows 10 using the Anaconda 2020.02 64-bit distribution (which contains Python 3.7.6) and PyTorch version 1.7.0 for CPU installed via pip. You can find detailed step-by-step installation instructions for this configuration in my blog post here.

The Student Data
The raw Student data is synthetic and was generated programmatically. There are a total of 240 data items, divided into a 200-item training dataset and a 40-item test dataset. The raw data looks like:

M  39.5  oklahoma  512  geology
F  27.5  nebraska  286  history
M  22.0  maryland  335  finance
F  50.0  nebraska  565  geology
. . .
M  59.5  oklahoma  694  history

Each line of tab-delimited data represents a hypothetical student at a hypothetical college. The first four values on each line are the predictors (often called features in machine learning terminology), and the fifth value is the dependent value to predict (often called the class or the label).

The first value on each line is the student's sex ("M" = male, "F" = female). The second value is the number of units completed by the student so far. The third value is the student's home state. For simplicity, there are just three states: "maryland," "nebraska" and "oklahoma." The fourth value is the student's test score on some sort of admission exam. The fifth value is the student's major. For simplicity there are just three majors to predict: "finance," "geology" and "history."

When using a PyTorch neural network, categorical predictor data must be encoded into a numeric form, and numeric predictor data should be normalized. For multi-class classification, the dependent value should be ordinal encoded.

The raw Student data was prepared in the following way. The sex values were encoded as "M" = -1 and "F" = +1. The units-completed values were normalized by dividing by 100. The student home state values were one-hot encoded as "maryland" = (1, 0, 0), "nebraska" = (0, 1, 0), "oklahoma" = (0, 0, 1). The test scores were normalized by dividing by 1000. The dependent values-to-predict, the student majors, were ordinal encoded as "finance" = 0, "geology" = 1, "history" = 2.
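A small sketch (not part of the demo program, which prepares the data as an offline preprocessing step) shows how one raw line maps to its encoded form under this scheme:

# illustrative encoding of raw line: M  39.5  oklahoma  512  geology
sex_enc = { "M" : -1.0, "F" : 1.0 }
state_enc = { "maryland" : [1,0,0], "nebraska" : [0,1,0], "oklahoma" : [0,0,1] }
major_enc = { "finance" : 0, "geology" : 1, "history" : 2 }

sex, units, state, score, major = "M", 39.5, "oklahoma", 512.0, "geology"
predictors = [sex_enc[sex], units / 100] + state_enc[state] + [score / 1000]
label = major_enc[major]
print(predictors)  # [-1.0, 0.395, 0, 0, 1, 0.512]
print(label)       # 1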

Because the synthetic Student data is a mix of numeric and categorical values and has multiple dimensions, it's not possible to easily display the data in a graph. But you can get a good idea of what the data is like by examining the graph in Figure 2. It shows a 100-item subset of the raw data, using just the units-completed and test score predictor variables. Notice that the data is not linearly separable, so simple classification techniques such as multi-class logistic regression, decision trees and non-kernel multi-class support vector machines would likely create poor prediction models.

Figure 2: Partial Student Data

In a non-demo scenario, data preparation can be very time-consuming. It's not uncommon for data preparation to take 80 percent or even more of the total time and effort required to create a prediction model. The demo system presented in this article performs all data preparation as a preprocessing step. An alternative approach is to programmatically perform data normalization and encoding on the fly.

The Overall Program Structure
The overall structure of the demo PyTorch multi-class classification program, with a few minor edits to save space, is shown in Listing 1. I indent my Python programs using two spaces rather than the more common four spaces.

Listing 1: The Structure of the Demo Program

# student_major.py
# PyTorch 1.7.0-CPU Anaconda3-2020.02
# Python 3.7.6 Windows 10 

import numpy as np
import time
import torch as T
device = T.device("cpu")

class StudentDataset(T.utils.data.Dataset):
  # sex units   state   test_score  major
  # -1  0.395   0 0 1   0.5120      1
  #  1  0.275   0 1 0   0.2860      2
  # -1  0.220   1 0 0   0.3350      0
  # sex: -1 = male, +1 = female
  # state: maryland, nebraska, oklahoma
  # major: finance, geology, history

  def __init__(self, src_file, n_rows=None): . . .
  def __len__(self): . . . 
  def __getitem__(self, idx): . . . 

# ----------------------------------------------------

def accuracy(model, ds): . . .

# ----------------------------------------------------

class Net(T.nn.Module):
  def __init__(self): . . .
  def forward(self, x): . . .

# ----------------------------------------------------

def main():
  # 0. get started
  print("Begin predict student major ")
  np.random.seed(1)
  T.manual_seed(1)

  # 1. create Dataset and DataLoader objects
  # 2. create neural network
  # 3. train network
  # 4. evaluate model
  # 5. save model
  # 6. make a prediction 
  print("End predict student major demo ")

if __name__ == "__main__":
  main()

It's important to document the versions of Python and PyTorch being used because both systems are under continuous development. Dealing with versioning incompatibilities is a significant headache when working with PyTorch and is something you should not underestimate. The demo program imports the Python time module to timestamp saved checkpoints.

I prefer to use "T" as the top-level alias for the torch package. Most of my colleagues don't use a top-level alias and spell out "torch" dozens of times per program. Also, I use the full form of sub-packages rather than supplying aliases such as "import torch.nn.functional as functional". In my opinion, using the full form is easier to understand and less error-prone than using many aliases.

The demo program defines a program-scope CPU device object. I usually develop my PyTorch programs on a desktop CPU machine. After I get that version working, converting to a CUDA GPU system only requires changing the global device object to T.device("cuda") plus a minor amount of debugging.
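For example, one common pattern (an alternative to the demo's hard-coded CPU device, not something the demo itself does) is to select the device at runtime:

device = T.device("cuda" if T.cuda.is_available() else "cpu")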

The demo program defines just one helper function, accuracy(). All of the rest of the program control logic is contained in a single main() function. It is possible to define other helper functions such as train_net(), evaluate_model() and save_model(), but in my opinion this modularization approach makes the program more difficult, rather than easier, to understand.

Defining a Student Dataset Class
Serving up batches of data for training a network and evaluating the accuracy of a trained model is a bit trickier than you might expect if you're new to PyTorch. In the early days of PyTorch, the most common approach was to write completely custom code. You can still write one-off code for loading data, but now the most common approach is to implement a Dataset and DataLoader. Briefly, a Dataset object loads all training or test data into memory, and a DataLoader object serves up the data in batches.

You can think of a PyTorch Dataset as an interface that must be implemented. At a minimum, you must define an __init__() method which reads data from file into memory, a __len__() method which returns the total number of items in the source data, and a __getitem__() method which returns a single batch of data items. There are many design alternatives and no two Dataset class definitions will be the same.

A DataLoader object is instantiated by passing in a Dataset object. The DataLoader object can be iterated, serving up one batch of data at a time. Unlike the Dataset which must be implemented, a DataLoader is ready to use as-is.

The definition of class StudentDataset is shown in Listing 2. In most cases, the structures of the training and test data files are the same and you can use a single Dataset definition for both files. If the structures of your files are different, then you'd have to define two different Dataset classes, or parameterize the Dataset definition.

Listing 2: Class StudentDataset Definition

class StudentDataset(T.utils.data.Dataset):
  # sex units   state   test_score  major
  # -1  0.395   0 0 1   0.5120      1
  #  1  0.275   0 1 0   0.2860      2
  # -1  0.220   1 0 0   0.3350      0
  # sex: -1 = male, +1 = female
  # state: maryland, nebraska, oklahoma
  # major: finance, geology, history

  def __init__(self, src_file, n_rows=None):
    all_xy = np.loadtxt(src_file, max_rows=n_rows,
      usecols=[0,1,2,3,4,5,6], delimiter="\t",
      skiprows=0, comments="#", dtype=np.float32)

    n = len(all_xy)
    tmp_x = all_xy[0:n,0:6]  # all rows, cols [0,6)
    tmp_y = all_xy[0:n,6]    # 1-D required

    self.x_data = \
     T.tensor(tmp_x, dtype=T.float32).to(device)
    self.y_data = \
     T.tensor(tmp_y, dtype=T.int64).to(device) 

  def __len__(self):
    return len(self.x_data)

  def __getitem__(self, idx):
    preds = self.x_data[idx]
    trgts = self.y_data[idx] 
    sample = { 
      'predictors' : preds,
      'targets' : trgts 
    }
    return sample

The __init__() method begins by reading all relevant data from file into memory using the NumPy loadtxt() function:

all_xy = np.loadtxt(src_file, max_rows=n_rows,
  usecols=[0,1,2,3,4,5,6], delimiter="\t",
  skiprows=0, comments="#", dtype=np.float32)

The synthetic Student data contains both predictor values and labels-to-predict values in the same file, so both can be read at the same time. A slightly less efficient alternative is to read the predictor values with one call to loadtxt() and then read the values-to-predict with a second call.
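For completeness, the two-call alternative would look something like the following sketch (not the demo code):

tmp_x = np.loadtxt(src_file, max_rows=n_rows, usecols=[0,1,2,3,4,5],
  delimiter="\t", comments="#", dtype=np.float32)   # predictors only
tmp_y = np.loadtxt(src_file, max_rows=n_rows, usecols=6,
  delimiter="\t", comments="#", dtype=np.int64)     # majors only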

Python has dozens of ways to read a text file into memory, but using loadtxt() is the technique I prefer. Some of my colleagues favor using the NumPy genfromtxt() or fromfile() functions, or the Pandas read_csv() function. The data is read into a NumPy matrix as float32 values.

After all the data has been read into memory as a NumPy matrix, the predictor columns and the label-to-predict column are extracted and then converted to PyTorch tensors:

n = len(all_xy)
tmp_x = all_xy[0:n,0:6]  # all rows, cols [0,6)
tmp_y = all_xy[0:n,6]    # 1-D required

self.x_data = \
 T.tensor(tmp_x, dtype=T.float32).to(device)
self.y_data = \
 T.tensor(tmp_y, dtype=T.int64).to(device)

The "[0:n,0:6]" syntax means "all rows, columns 0 to 5 inclusive." You have to be careful to avoid off-by-one indexing errors when working with PyTorch. PyTorch uses the "\" character for line continuation. The predictors are left as 32-bit values, but the class labels-to-predict are cast to a one-dimensional int64 tensor.

Many of the examples I've seen on the internet convert the input data to PyTorch tensors in the __getitem__() method rather than in the __init__() method. Because conversion to tensors is a relatively expensive operation, it's usually better to convert the data once in __init__() rather than repeatedly in the __getitem__() method.
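For contrast, a sketch of the convert-on-demand approach (not what the demo does, and generally slower) would look something like this, where np_x and np_y are hypothetical NumPy arrays that __init__() would have stored instead of tensors:

def __getitem__(self, idx):
  # converts to tensors on every call -- repeats the conversion
  # cost each time a batch is served
  preds = T.tensor(self.np_x[idx], dtype=T.float32).to(device)
  trgts = T.tensor(self.np_y[idx], dtype=T.int64).to(device)
  return { 'predictors' : preds, 'targets' : trgts }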

At this point in the program execution, self.x_data is a 2-dimensional tensor matrix with six columns. In practice, you usually need to experiment a bit and examine objects with code like:

print("x_data is ")
print(self.x_data)
print(self.x_data.shape)
input()  # pause execution

Alternatively, if you're using a powerful IDE such as Visual Studio to write your code, you can set an execution breakpoint and examine variables.

The implementation of the Dataset __len__() method is simple:

def __len__(self):
  return len(self.x_data)

The Dataset object needs to be able to return the number of items it has so that the DataLoader object that uses the Dataset can determine when all data items have been processed once, and then start a new epoch. The Dataset n_rows parameter is passed to the loadtxt() max_rows parameter. If max_rows has value None, then loadtxt() will load all lines of the source data file. So, if you omit the n_rows parameter when instantiating a Dataset object, the default parameter value of None will be used, which will be passed to loadtxt() and all lines of data will be read into memory. Therefore, the __len__() method needs to return len(self.x_data), the actual number of lines of data read, rather than n_rows, which could be None. The moral of this story is that even simple-looking PyTorch statements must be thought through very carefully.
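For example (illustrative calls, assuming the training file is stored in a Data sub-directory as in the test program shown later):

train_ds = StudentDataset(".\\Data\\students_train.txt")            # n_rows=None, all 200 rows
small_ds = StudentDataset(".\\Data\\students_train.txt", n_rows=5)  # first 5 rows only
print(len(train_ds))  # 200
print(len(small_ds))  # 5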

The Dataset __getitem__() method is defined as:

def __getitem__(self, idx):
  preds = self.x_data[idx]
  trgts = self.y_data[idx] 
  sample = { 
    'predictors' : preds,
    'targets' : trgts
  }
  return sample

The method returns one or more data items as a Dictionary object where a key of "predictors" gives the predictor x values and a key of "targets" gives the ordinal-encoded target values. It is common practice to name the parameter indicating which items to return as "idx" but that name is slightly misleading because in most cases idx is a Python list object such as [0, 2, 5] meaning rows [0], [2] and [5] are fetched. So, idx might better be named "indices" or "list_of_indices."

The returned Dictionary value is created and populated in a single statement, which might be a bit confusing if you're relatively new to Python. The demo code can be written in a less terse fashion as:

sample = dict()   # or sample = {}
sample["predictors"] = preds
sample["targets"] = trgts

When writing PyTorch programs, there's always a tradeoff between short but sometimes difficult-to-read code and clearer but longer code. Notice that the "predictors" and "targets" keys are, in a sense, magic strings and contribute to the tight coupling of the system. Alternative designs include defining the key names as program-scope variables at the beginning of the program, parameterizing the key names as strings, or using a NumPy array instead of a Dictionary object.
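For example, the program-scope key names alternative might look like this sketch (PREDICTORS_KEY and TARGETS_KEY are hypothetical names, not part of the demo):

# defined once near the top of the program
PREDICTORS_KEY = "predictors"
TARGETS_KEY = "targets"

# then inside __getitem__():
#   sample = { PREDICTORS_KEY : preds, TARGETS_KEY : trgts }
# and inside the training loop:
#   X = batch[PREDICTORS_KEY]
#   Y = batch[TARGETS_KEY]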

Testing the Dataset
It's good practice to test a Dataset and DataLoader before trying to use them to train a neural network. The short program in Listing 3 shows an example. The test program sets up a Dataset using just the first five items from the 200-item normalized Student training data. Then the tester iterates twice through the five items, in batches of two items. Therefore, each epoch serves up batches of 2, 2 and 1 items. See the screenshot in Figure 3.

Listing 3: Testing the Dataset using a DataLoader

# test_dataset.py
# PyTorch 1.7.0 CPU
# Python 3.7.6

import numpy as np
import torch as T
device = T.device("cpu")

class StudentDataset(T.utils.data.Dataset):
  # see Listing 2 for the full class definition

T.manual_seed(6)

src = ".\\Data\\students_train.txt"
train_ds = StudentDataset(src, n_rows=5)
train_ldr = T.utils.data.DataLoader(train_ds,
  batch_size=2, shuffle=True)
for epoch in range(2):
  print("\n\n Epoch = " + str(epoch))
  for (bat_idx, batch) in enumerate(train_ldr):
    print("------------------------------")
    X = batch['predictors']
    Y = batch['targets']
    print("bat_idx = " + str(bat_idx))
    print(X)
    print(Y)

print("\nEnd test ")

The test program assumes the data files are in a sub-directory named Data. The PyTorch DataLoader class is defined in the torch.utils.data module. A DataLoader has many optional parameters, but in most situations you pass only a (required) Dataset object, a batch size (the default is 1), and a shuffle value (True or False, default is False). The shuffle parameter controls whether the data items are served up in random order, which is typical during training, or in sequential order, which is typical during model evaluation.

Figure 3: Testing the Dataset using a DataLoader

The core statement that uses the Dataset and DataLoader is:

for (bat_idx, batch) in enumerate(train_ldr):
  . . .

The enumerate() function is a built-in Python mechanism for walking through an iterable object, which includes DataLoader objects. The return value is a tuple where the first value is the 0-based batch index and the second value is a batch of data items stored as a Dictionary object. The short test program is just a beginning; you should test any Dataset object you define by iterating through all data items, with both the training and test data, and with the shuffle parameter set to both True and False.
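A minimal version of that fuller test (not part of the demo) iterates through the entire training file and counts the items served:

full_ds = StudentDataset(".\\Data\\students_train.txt")  # all 200 rows
full_ldr = T.utils.data.DataLoader(full_ds,
  batch_size=10, shuffle=False)
n_served = 0
for (bat_idx, batch) in enumerate(full_ldr):
  n_served += len(batch['targets'])
print("Total items served = " + str(n_served))  # expect 200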

Wrapping Up
The demo code presented in this article assumes that the neural multi-class classifier uses the CrossEntropyLoss() function to compute error during training. This loss function requires the targets to be ordinal encoded. The cross entropy plus ordinal encoding design is by far the most common approach for PyTorch multi-class classification. However, in the early days of neural networks, it was more common to use mean squared error as the loss function, combined with explicit one-hot encoding for the targets.
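The tiny example below (not part of the demo program) illustrates the input that CrossEntropyLoss() expects: a matrix of raw, un-softmaxed output values, one row per item, plus a vector of ordinal-encoded target class labels:

loss_func = T.nn.CrossEntropyLoss()
# two items, three classes: raw (un-softmaxed) output values
oupt = T.tensor([[2.5, 0.5, 0.1],
                 [0.2, 1.7, 0.4]], dtype=T.float32)
trgts = T.tensor([0, 1], dtype=T.int64)  # ordinal-encoded targets
loss_val = loss_func(oupt, trgts)
print(loss_val.item())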

The example code presented in this article can be used as a template for most multi-class classification problems. One exception is a scenario where your training data is too large to fit entirely into memory. Fortunately, such situations are relatively rare. For huge data files, the usual approach is to define a Dataset object whose __init__() method reads part of the huge file into a buffer; then, when all of the buffered data has been processed by calls to the __getitem__() method, a program-defined reload() method fills the buffer with the next block of data.
