1 Introduction

In this chapter, we dive into artificial neural networks, one of the main drivers of artificial intelligence.

Neural networks have been around for many decades. Arguably the first such model was built by Marvin Minsky in 1951. He called his algorithm SNARC (“stochastic neural-analog reinforcement calculator”). Since then, neural networks have gone through several stages of development. One milestone was the idea of Paul J. Werbos in 1974 [1] to calculate the gradients of the optimization algorithm efficiently by an approach called “backpropagation”. Another milestone was the use of GPUs (graphics processing units) to greatly reduce computation time.

Artificial neural nets are extremely versatile and powerful. They can be used to

  1. fit simple models like GLMs,
  2. learn interactions and non-linear effects automatically (like tree-based methods),
  3. optimize general loss functions,
  4. fit models on data much larger than RAM (e.g., images),
  5. learn “online” (update the model with additional data),
  6. fit multiple response variables at the same time,
  7. model input of dimension higher than two (e.g., images, videos),
  8. model inputs of mixed types and dimensions (e.g., text and images),
  9. fit data with sequential structure in both input and output (e.g., a text translator),
  10. model data with spatial structure (images),
  11. fit models with many millions of parameters,
  12. perform non-linear dimension reduction.

In this chapter, we will mainly deal with the first three aspects. Since many new terms are used, a small glossary can be found in the section “Neural Network Slang”.

2 Understanding Neural Nets

To learn how and why neural networks work, we will go through three steps - each illustrated on the diamonds data:

  • Step 1: Linear regression as neural net
  • Step 2: Hidden layers
  • Step 3: Activation functions

After this, we will be ready to build more complex models.

2.1 Step 1: Linear regression as neural net

Let us revisit the simple linear regression E(\text{price}) = \alpha + \beta \cdot \text{carat} calculated on the full diamonds data. In Chapter 1, we found the solution \hat\alpha = -2256.36 and \hat\beta = 7756.43 by ordinary least squares (OLS).

The above situation can be viewed as a neural network with

  • an input layer with two nodes (carat and the intercept called “bias unit” with value 1),
  • a “fully connected” (= “dense”) output layer with one node (price). Fully connected means that each node of a layer is a linear function of all node values of the previous layer. Each linear function has parameters or weights to be estimated, in our simple case just \alpha and \beta.

Visualized as a graph, the situation looks as follows.

Some of the figures were created with this cool webtool.

To gain confidence in neural nets, we first show that parameters estimated by a neural network are quite similar to the ones learned by linear least-squares. To do so, we will use Google’s TensorFlow with its convenient (functional) Keras interface.

2.1.1 Example: simple linear regression

library(ggplot2)
library(keras)

# Input layer: we have 1 covariate
input <- layer_input(shape = 1)

# Output layer connected to the input layer
output <- input |> 
  layer_dense(units = 1)

# Create and compile model
nn <- keras_model(inputs = input, outputs = output)
# summary(nn)

nn |> 
  compile(
    optimizer = optimizer_adam(learning_rate = 1),
    loss = "mse",
    metrics = metric_root_mean_squared_error()
  )

# Fit model - naive without validation
history <- nn |>
  fit(
    x = diamonds$carat,
    y = diamonds$price,
    epochs = 30,
    batch_size = 100
  )
## Epoch 1/30
## 540/540 - 2s - loss: 27156908.0000 - root_mean_squared_error: 5211.2290 - 2s/epoch - 4ms/step
## Epoch 2/30
## 540/540 - 1s - loss: 20296634.0000 - root_mean_squared_error: 4505.1787 - 1s/epoch - 2ms/step
## Epoch 3/30
## 540/540 - 1s - loss: 15536040.0000 - root_mean_squared_error: 3941.5784 - 1s/epoch - 2ms/step
## Epoch 4/30
## 540/540 - 1s - loss: 12372222.0000 - root_mean_squared_error: 3517.4170 - 1s/epoch - 2ms/step
## Epoch 5/30
## 540/540 - 1s - loss: 10371888.0000 - root_mean_squared_error: 3220.5415 - 1s/epoch - 3ms/step
## Epoch 6/30
## 540/540 - 1s - loss: 9147035.0000 - root_mean_squared_error: 3024.4065 - 1s/epoch - 2ms/step
## Epoch 7/30
## 540/540 - 1s - loss: 8345294.5000 - root_mean_squared_error: 2888.8223 - 1s/epoch - 2ms/step
## Epoch 8/30
## 540/540 - 1s - loss: 7709601.5000 - root_mean_squared_error: 2776.6169 - 1s/epoch - 2ms/step
## Epoch 9/30
## 540/540 - 1s - loss: 7096649.5000 - root_mean_squared_error: 2663.9539 - 1s/epoch - 2ms/step
## Epoch 10/30
## 540/540 - 1s - loss: 6473972.0000 - root_mean_squared_error: 2544.4001 - 1s/epoch - 2ms/step
## Epoch 11/30
## 540/540 - 1s - loss: 5855217.5000 - root_mean_squared_error: 2419.7556 - 1s/epoch - 3ms/step
## Epoch 12/30
## 540/540 - 1s - loss: 5262979.5000 - root_mean_squared_error: 2294.1184 - 1s/epoch - 2ms/step
## Epoch 13/30
## 540/540 - 1s - loss: 4717022.0000 - root_mean_squared_error: 2171.8706 - 1s/epoch - 2ms/step
## Epoch 14/30
## 540/540 - 1s - loss: 4230574.0000 - root_mean_squared_error: 2056.8359 - 1s/epoch - 2ms/step
## Epoch 15/30
## 540/540 - 1s - loss: 3809230.2500 - root_mean_squared_error: 1951.7250 - 1s/epoch - 2ms/step
## Epoch 16/30
## 540/540 - 1s - loss: 3452363.2500 - root_mean_squared_error: 1858.0536 - 1s/epoch - 2ms/step
## Epoch 17/30
## 540/540 - 1s - loss: 3160888.5000 - root_mean_squared_error: 1777.8888 - 1s/epoch - 2ms/step
## Epoch 18/30
## 540/540 - 1s - loss: 2931493.0000 - root_mean_squared_error: 1712.1603 - 1s/epoch - 2ms/step
## Epoch 19/30
## 540/540 - 1s - loss: 2756937.5000 - root_mean_squared_error: 1660.4028 - 1s/epoch - 2ms/step
## Epoch 20/30
## 540/540 - 1s - loss: 2630362.0000 - root_mean_squared_error: 1621.8391 - 1s/epoch - 2ms/step
## Epoch 21/30
## 540/540 - 1s - loss: 2541843.0000 - root_mean_squared_error: 1594.3158 - 1s/epoch - 2ms/step
## Epoch 22/30
## 540/540 - 1s - loss: 2482932.0000 - root_mean_squared_error: 1575.7323 - 1s/epoch - 2ms/step
## Epoch 23/30
## 540/540 - 1s - loss: 2446033.7500 - root_mean_squared_error: 1563.9801 - 1s/epoch - 2ms/step
## Epoch 24/30
## 540/540 - 1s - loss: 2424330.0000 - root_mean_squared_error: 1557.0260 - 1s/epoch - 2ms/step
## Epoch 25/30
## 540/540 - 1s - loss: 2411978.0000 - root_mean_squared_error: 1553.0544 - 1s/epoch - 2ms/step
## Epoch 26/30
## 540/540 - 1s - loss: 2405439.7500 - root_mean_squared_error: 1550.9480 - 1s/epoch - 2ms/step
## Epoch 27/30
## 540/540 - 1s - loss: 2401910.2500 - root_mean_squared_error: 1549.8097 - 1s/epoch - 2ms/step
## Epoch 28/30
## 540/540 - 1s - loss: 2399928.7500 - root_mean_squared_error: 1549.1703 - 1s/epoch - 2ms/step
## Epoch 29/30
## 540/540 - 1s - loss: 2399108.5000 - root_mean_squared_error: 1548.9058 - 1s/epoch - 3ms/step
## Epoch 30/30
## 540/540 - 1s - loss: 2398624.5000 - root_mean_squared_error: 1548.7494 - 1s/epoch - 2ms/step
plot(history, metrics = "root_mean_squared_error")

unlist(get_weights(nn))
## [1]  7719.257 -2220.454
# Plot effect of carat on average price
data.frame(carat = seq(0.3, 3, by = 0.1)) |> 
  transform(price = predict(nn, carat, verbose = 0)) |> 
ggplot(aes(x = carat, y = price)) +
  geom_line(color = "chartreuse4") +
  geom_point(color = "chartreuse4")

Comment: The solution of the simple neural network is indeed quite similar to the OLS solution.

2.1.2 The optimization algorithm

Neural nets are typically fitted by mini-batch gradient descent, using backpropagation to efficiently calculate gradients. It works as follows:

  1. Initialize the parameters with random values.
  2. Forward step: Use the parameters to predict all observations of a batch. A batch is a randomly selected subset of the full dataset.
  3. Backpropagation step: Change the parameters in the right direction, making the average loss Q (e.g., the MSE) of the current batch smaller. This involves calculating derivatives (“gradients”) of Q with respect to all parameters. Backpropagation does so layer by layer, making heavy use of the chain rule.
  4. Repeat Steps 2-3 until each observation has appeared in a batch. This is called an epoch.
  5. Repeat Step 4 for multiple epochs until the parameter estimates stabilize or validation performance stops improving.

Gradient descent on batches of size 1 is called “stochastic gradient descent” (SGD).
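Purely for illustration, here is a minimal sketch of mini-batch gradient descent in plain R for our simple linear regression. The learning rate, batch size, and number of epochs are ad hoc choices, and Keras performs all of these steps for us (with backpropagation generalizing the gradient calculation to networks with many layers):

library(ggplot2)  # for the diamonds data

x <- diamonds$carat
y <- diamonds$price

# Step 1: Initialize the parameters with random values
set.seed(1)
alpha <- rnorm(1)
beta <- rnorm(1)

lr <- 0.1          # learning rate
batch_size <- 100

for (epoch in 1:50) {
  ix <- sample(length(y))  # reshuffle the rows once per epoch
  for (batch in split(ix, ceiling(seq_along(ix) / batch_size))) {
    # Step 2: Forward step - residuals of the current batch
    resid <- alpha + beta * x[batch] - y[batch]
    
    # Step 3: Gradients of the batch MSE with respect to alpha and beta
    grad_alpha <- 2 * mean(resid)
    grad_beta <- 2 * mean(resid * x[batch])
    
    # Move the parameters a small step in the direction of smaller loss
    alpha <- alpha - lr * grad_alpha
    beta <- beta - lr * grad_beta
  }
}
c(alpha = alpha, beta = beta)  # close to the OLS solution of Chapter 1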

2.2 Step 2: Hidden layers

Our first neural network above consisted of only an input layer and an output layer. By adding one or more hidden layers between input and output, the network gains additional parameters and thus more flexibility. The nodes of a hidden layer can be viewed as latent variables representing the original covariates; their values are sometimes called the encoding. The closer a layer is to the output, the better its nodes are suited to predict the response variable. In this way, a neural network finds the right transformations and interactions of its covariates automatically. The only ingredients are a large dataset and a flexible enough network “architecture” (number of layers, nodes per layer).

Neural nets with more than one hidden layer are called “deep neural nets”.

We will now add a hidden layer with five nodes v_1, \dots, v_5 to our simple linear regression network. The architecture looks as follows:

This network has 16 parameters (1 \cdot 5 weights plus 5 biases in the hidden layer, and 5 weights plus 1 bias in the output layer). How much better than our simple network with just two parameters will it be?

2.2.1 Example: hidden layer

The following code is identical to the last one up to one extra line of code specifying the hidden layer.

library(ggplot2)
library(keras)

# Input layer: we have 1 covariate
input <- layer_input(shape = 1)

# One hidden layer
output <- input |>
  layer_dense(units = 5) |>  # the only new line of code!
  layer_dense(units = 1)

# Create and compile model
nn <- keras_model(inputs = input, outputs = output)
# summary(nn)

nn |> 
  compile(
    optimizer = optimizer_adam(learning_rate = 1),
    loss = "mse",
    metrics = metric_root_mean_squared_error()
  )

# Fit model - naive without validation
nn |> 
  fit(
    x = diamonds$carat,
    y = diamonds$price,
    epochs = 30,
    batch_size = 100,
    verbose = 0
  )

# Plot effect of carat on average price
data.frame(carat = seq(0.3, 3, by = 0.1)) |> 
  transform(price = predict(nn, carat, verbose = 0)) |> 
ggplot(aes(x = carat, y = price)) +
  geom_line(color = "chartreuse4") +
  geom_point(color = "chartreuse4")

Comment: Oops, it seems as if the extra hidden layer had no effect. The reason is that a linear function of a linear function is still a linear function. Adding the hidden layer did not really change the capabilities of the model. It just added a lot of unnecessary parameters.

2.3 Step 3: Activation functions

The missing magic component is the so-called activation function \sigma after each layer, which transforms the values of the nodes. So far, we have implicitly used “linear activations”, which - in neural network slang - is just the identity function.

Non-linear activation functions applied after the hidden layers serve to introduce non-linear and interaction effects. Typical such functions, sketched after this list, are

  • the hyperbolic tangent \sigma(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}} (“S”-shaped function that maps real values to [-1, 1]),
  • the standard logistic function (“sigmoid”) \sigma(x) = 1 / (1 + e^{-x}) (“S”-shaped function that maps real values to [0, 1], shifted and scaled hyperbolic tangent),
  • the rectified linear unit (ReLU) \sigma(x) = \max(0, x), which sets negative values to 0.
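These three functions can be defined and compared in a few lines of base R:

# Sketch: the three typical activation functions
sigmoid <- function(x) 1 / (1 + exp(-x))
relu <- function(x) pmax(0, x)

curve(tanh, from = -4, to = 4, ylim = c(-1.5, 4), ylab = "sigma(x)", lty = 1)
curve(sigmoid, from = -4, to = 4, add = TRUE, lty = 2)
curve(relu, from = -4, to = 4, add = TRUE, lty = 3)
legend("topleft", legend = c("tanh", "sigmoid", "ReLU"), lty = 1:3)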

The activation function applied to the output layer has a different purpose: it plays the same role as the inverse of the link function of a corresponding GLM, mapping predictions to the scale of the response (see the sketch after this list):

  • identity/“linear” activation \rightarrow usual regression
  • logistic activation \rightarrow binary logistic regression (one probability)
  • softmax activation \rightarrow multinomial logistic regression (one probability per class)
  • exponential activation \rightarrow log-linear regression as with Poisson or Gamma regression
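As a sketch (assuming an input layer input as in the examples above), an output layer with exponential activation corresponds to a log-link regression:

# Exponential output activation = inverse log-link, e.g., for Poisson/Gamma
output <- input |> 
  layer_dense(units = 1, activation = "exponential")

# Analogously, "sigmoid" would mirror binary logistic regression and
# "softmax" (with one unit per class) multinomial logistic regression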

Let us add a hyperbolic tangent activation function (\sigma) after the hidden layer of our simple example.

2.3.1 Example: activation functions

Again, the code is very similar to the last one, with the exception of using a hyperbolic tangent activation after the hidden layer (and different learning rate and number of epochs).

library(ggplot2)
library(keras)

# Input layer: we have 1 covariate
input <- layer_input(shape = 1)

# One hidden layer
output <- input |>
  layer_dense(units = 5, activation = "tanh") |> 
  layer_dense(units = 1, activation = "linear")

# Create and compile model
nn <- keras_model(inputs = input, outputs = output)

nn |> 
  compile(
    optimizer = optimizer_adam(learning_rate = 0.2),
    loss = "mse",
    metrics = metric_root_mean_squared_error()
  )

# Fit model - naive without validation
nn |> 
  fit(
    x = diamonds$carat,
    y = diamonds$price,
    epochs = 50,
    batch_size = 100,
    verbose = 0
  )

# Plot effect of carat on average price
data.frame(carat = seq(0.3, 3, by = 0.1)) |> 
  transform(price = predict(nn, carat, verbose = 0)) |> 
ggplot(aes(x = carat, y = price)) +
  geom_line(color = "chartreuse4") +
  geom_point(color = "chartreuse4")

Comment: Adding the non-linear activation after the hidden layer has changed the model. The effect of carat now represents the association between carat and price by a non-linear function.

3 Practical Considerations

3.1 Validation and tuning of main parameters

So far, we have naively fitted the neural networks without splitting the data for test and validation. Don’t do this! Usually, one sets aside a small test dataset (e.g., 10% of the rows) to assess the final model performance and uses simple (or cross-)validation for model tuning.

In order to choose the main tuning parameters, namely

  • network architecture,
  • activation functions,
  • learning rate,
  • batch size, and
  • number of epochs,

one often uses simple validation because cross-validation takes too much time.

3.2 Missing values

A neural net does not accept missing values in the input. They need to be filled, e.g., by a typical value or a value below the minimum.
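A minimal sketch, assuming a hypothetical data.frame X of numeric covariates:

# Two simple ways to fill missing values per column
impute_median <- function(x) replace(x, is.na(x), median(x, na.rm = TRUE))
impute_below_min <- function(x) replace(x, is.na(x), min(x, na.rm = TRUE) - 1)

X_filled <- as.data.frame(lapply(X, impute_median))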

3.3 Input standardization

Gradient descent starts with a random initialization of the parameters. This step is optimized for standardized input. Standardization has to be done manually by either

  • min/max scaling the values of each input to the range [-1, 1],
  • standard scaling the values of each input to mean 0 and standard deviation 1, or
  • using relative ranks.

Note that the scaling transformation is calculated on the training data and then applied to the validation and test data. This usually requires a couple of lines of code.
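For instance, assuming numeric matrices X_train and X_test (the diamonds example below uses the same pattern inside a small prep_nn() function):

# Derive center and scale from the training data only...
X_train_sc <- scale(X_train)

# ... and reuse them for validation/test data
X_test_sc <- scale(
  X_test,
  center = attr(X_train_sc, "scaled:center"),
  scale = attr(X_train_sc, "scaled:scale")
)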

3.4 Categorical input

There are three common ways to represent categorical input variables in a neural network.

  1. Binary and ordinal categoricals are best represented by integers and then treated as numeric.
  2. Unordered categoricals are either one-hot-encoded (i.e., each category is represented by a binary variable) or
  3. they are represented by a (categorical) embedding. To do so, the categories are integer encoded and then condensed by a special embedding layer to a few (usually 1 or 2) dense features. This requires a more complex network architecture but saves memory and preprocessing. This approach is heavily used when the input consists of words (which is a categorical variable with thousands of levels - one level per word).

For Option 2, input standardization is not required; for Option 3, it must not be applied because the embedding layer expects integer input.
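A minimal sketch of the three options, assuming a hypothetical data.frame df with an ordinal factor x_ord and an unordered factor x_nom:

# Option 1: binary/ordinal categorical -> integer codes, treated as numeric
df$x_ord_int <- as.integer(df$x_ord)

# Option 2: unordered categorical -> one-hot encoding (one dummy per level)
X_onehot <- model.matrix(~ x_nom - 1, data = df)

# Option 3: unordered categorical -> 0-based integer codes fed into an
# embedding layer (see the "Example: Embeddings" section below)
df$x_nom_code <- as.integer(df$x_nom) - 1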

3.5 Callbacks

Sometimes, we want to take actions during training, such as

  • stop training when validation performance starts worsening,
  • reduce the learning rate when the optimization is stuck in a “plateau”, or
  • save the network weights between epochs.

Such monitoring tasks are called callbacks. We will see them in the example below.

3.6 Types of layers

So far, we have encountered only dense (= fully connected) layers and activation layers. Here are some further types:

  • Embedding layers to represent integer encoded categoricals.
  • Dropout layers to add regularization.
  • Convolutional and pooling layers for image data.
  • Recurrent layers (long short-term memory LSTM, gated recurrent unit GRU) for sequence data.
  • Concatenation layers to combine different branches of the network (like in a directed graph).
  • Flatten layers to bring higher dimensional layers to dimension 1 (relevant, e.g., for embeddings, image and text data).

3.7 Optimizer

Pure gradient descent is rarely applied without tweaks because it tends to get stuck in local minima, especially for complex networks with non-convex objective surfaces. Modern variants are “adam”, “nadam”, and “RMSProp”. These optimizers usually work out of the box, except for the learning rate, which has to be chosen manually.

3.8 Custom losses and evaluation metrics

Frameworks like Keras/TensorFlow offer many predefined loss functions and evaluation metrics. Choosing them is a crucial step, just as with tree boosting. Using TensorFlow’s backend functions, one can define one’s own metrics and loss functions (see the sketch below and the exercises).
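Purely as a sketch of the mechanism (MAE is of course also available as a predefined loss), a custom loss and a custom metric can be built from the k_*() backend helpers of the keras R package:

library(keras)

# Custom loss: an R function of two tensors returning a tensor
loss_mae_custom <- function(y_true, y_pred) {
  k_mean(k_abs(y_true - y_pred))
}

# Custom metric: additionally gets a name via custom_metric()
metric_mae_custom <- custom_metric("mae_custom", function(y_true, y_pred) {
  k_mean(k_abs(y_true - y_pred))
})

# Both can be passed to compile(), e.g.:
# nn |> compile(optimizer = "adam", loss = loss_mae_custom, metrics = metric_mae_custom)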

3.9 Overfitting and regularization

As with linear models, a model with too many parameters will overfit in an undesired way. With about 50 to 100 observations per parameter, overfitting is usually unproblematic. (For image and text data, different rules apply.) Besides using fewer parameters, the main options to reduce overfitting are the following (see the sketch after this list):

  • pull the parameters of a layer slightly towards zero by applying L1 and/or L2 penalties to the parameters,
  • add dropout layers. A dropout layer randomly sets some of the node values of the previous layer to 0, switching them off. This is an elegant way to fight overfitting and is related to bagging.
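A minimal sketch of both options, assuming an input layer input as in the earlier examples:

# L2 penalty on the weights of a layer, plus a dropout layer
output <- input |>
  layer_dense(
    units = 30, activation = "relu",
    kernel_regularizer = regularizer_l2(0.001)  # pulls weights towards zero
  ) |>
  layer_dropout(rate = 0.2) |>  # randomly switches off 20% of the nodes
  layer_dense(units = 1)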

3.10 Choosing the architecture

How many layers and how many nodes per layer should we select? For tabular data, 1-3 hidden layers are usually enough. If we start with m input variables, the number of nodes in the first hidden layer is usually higher than m and decreases in later layers. There should not be a “representational bottleneck”, i.e., an early hidden layer with too few nodes.

The number of parameters should not be too high compared to the number of rows, see “Overfitting and regularization” above.

3.11 Interpretation

Variable importance of covariates in neural networks can be assessed by permutation importance (how much performance is lost when shuffling column X?) or SHAP importance. Covariate effects can be investigated, e.g., by partial dependence plots or SHAP dependence plots.

4 Example: diamonds

We will now fit a neural net with two hidden layers (30 and 15 nodes) and a total of 631 parameters to model diamond prices. Learning rate, activation functions, and batch size were chosen manually by simple validation. The number of epochs is chosen automatically by an early stopping callback.

library(ggplot2)
library(withr)
library(keras)
library(MetricsWeighted)
library(hstats)

y <- "price"
xvars <- c("carat", "color", "cut", "clarity")

# Split into train and test
with_seed(
  9838, 
  ix <- sample(nrow(diamonds), 0.8 * nrow(diamonds))
)

train <- diamonds[ix, ]
test <- diamonds[-ix, ]

X_train <- train[, xvars]
X_test <- test[, xvars]

y_train <- train[[y]]
y_test <- test[[y]]

# Standardize X using X_train
temp <- scale(data.matrix(X_train))
sc <- list(
  center = attr(temp, "scaled:center"), 
  scale = attr(temp, "scaled:scale")
)

# Function that maps data to scaled network input
prep_nn <- function(X, sel = xvars, scaling = sc) {
  X <- data.matrix(X[, sel, drop = FALSE])
  scale(X, center = scaling$center, scale = scaling$scale)
}

# Trying to make things reproducible...
k_clear_session()
tensorflow::set_random_seed(499)

# Input layer: we have 4 covariates
input <- layer_input(shape = 4)

# Two hidden layers with contracting number of nodes
output <- input |>
  layer_dense(units = 30, activation = "relu") |> 
  layer_dense(units = 15, activation = "relu") |> 
  layer_dense(units = 1, activation = "linear")

# Create and compile model
nn <- keras_model(inputs = input, outputs = output)
summary(nn)
## Model: "model"
## ________________________________________________________________________________
##  Layer (type)                       Output Shape                    Param #     
## ================================================================================
##  input_1 (InputLayer)               [(None, 4)]                     0           
##  dense_2 (Dense)                    (None, 30)                      150         
##  dense_1 (Dense)                    (None, 15)                      465         
##  dense (Dense)                      (None, 1)                       16          
## ================================================================================
## Total params: 631 (2.46 KB)
## Trainable params: 631 (2.46 KB)
## Non-trainable params: 0 (0.00 Byte)
## ________________________________________________________________________________
nn |> 
  compile(
    optimizer = optimizer_adam(learning_rate = 0.1),
    loss = "mse",
    metrics = metric_root_mean_squared_error()
  )

# Callbacks
cb <- list(
  callback_early_stopping(patience = 20),
  callback_reduce_lr_on_plateau(patience = 5)
)
       
# Fit model
history <- nn |>
  fit(
    x = prep_nn(X_train),
    y = y_train,
    epochs = 100,
    batch_size = 400, 
    validation_split = 0.2,
    callbacks = cb,
    verbose = 0
  )

plot(history, metrics = "root_mean_squared_error", smooth = FALSE)

# Interpret
pred_fun <- function(m, X) predict(m, prep_nn(X), batch_size = 1000, verbose = 0)

# Performance on test data
pred <- pred_fun(nn, X_test)
rmse(y_test, pred)
## [1] 925.7708
r_squared(y_test, pred, reference_mean = mean(y_train))
## [1] 0.9479194
# Permutation importance wrt test data
perm_importance(nn, X = X_test, y = y_test, pred_fun = pred_fun, verbose = FALSE) |> 
  plot(fill = "chartreuse4")

# PDPs
for (v in xvars) {
  p <- partial_dep(nn, v = v, X = X_train, pred_fun = pred_fun) |> 
    plot(color = "chartreuse4") +
    ggtitle(paste("PDP for", v)) 
  print(p)
}

Comment: Performance is lower than that of the tree-based models. This might partly be a consequence of the effects being smoother, but also of the fact that, for simplicity, the model was not refitted on the full training data (20% of the training rows were used for validation).

5 Example: Embeddings

Representing categorical input variables by embedding layers is extremely useful in practice. We will end this chapter with an example of how to do this with the claims data. The example also shows how flexible neural network structures are.

library(tidyverse)
library(keras)
library(withr)
library(insuranceData)
library(hstats)
library(MetricsWeighted)

data(dataCar)

# Response and covariates
y <- "clm"
x_emb <- "veh_body"
x_dense <- c("veh_value", "veh_age", "gender", "area", "agecat")
xvars <- c(x_dense, x_emb)

# Split into train and test
with_seed(
  9838,
  ix <- sample(nrow(dataCar), 0.8 * nrow(dataCar))
)

train <- dataCar[ix, ]
test <- dataCar[-ix, ]

y_train <- train[[y]]
y_test <- test[[y]]

X_train <- train[, xvars]
X_test <- test[, xvars]

# Standardization info using X_train
temp <- scale(data.matrix(X_train[, x_dense]))
sc <- list(
  center = attr(temp, "scaled:center"),
  scale = attr(temp, "scaled:scale")
)

# Function that maps data.frame to scaled network input (a list with a dense 
# part and each embedding as a separate integer vector)
prep_nn <- function(X, dense = x_dense, emb = x_emb, scaling = sc) {
  X_dense <- data.matrix(X[, dense, drop = FALSE])
  X_dense <- scale(X_dense, center = scaling$center, scale = scaling$scale)
  emb <- lapply(X[emb], function(x) as.integer(x) - 1)
  c(list(dense1 = X_dense), emb)
}

# Trying to make things reproducible...
k_clear_session()
tensorflow::set_random_seed(469)

# Inputs
input_dense <- layer_input(shape = length(x_dense), name = "dense1")
input_veh_body <- layer_input(shape = 1, name = "veh_body")

# Embedding of veh_body
emb_veh_body <- input_veh_body |> 
  layer_embedding(input_dim = nlevels(dataCar$veh_body) + 1, output_dim = 1) |> 
  layer_flatten()

# Combine dense input and embedding
outputs <- list(input_dense, emb_veh_body) |>
  layer_concatenate() |> 
  layer_dense(30, activation = "relu") |> 
  layer_dense(5, activation = "relu") |> 
  layer_dense(1, activation = "sigmoid")

# Input
inputs <- list(dense1 = input_dense, veh_body = input_veh_body)

# Create and compile model
nn <- keras_model(inputs = inputs, outputs = outputs)
summary(nn)
## Model: "model"
## ________________________________________________________________________________
##  Layer (type)           Output Shape            Param   Connected to            
##                                                  #                              
## ================================================================================
##  veh_body (InputLayer)  [(None, 1)]             0       []                      
##  embedding (Embedding)  (None, 1, 1)            14      ['veh_body[0][0]']      
##  dense1 (InputLayer)    [(None, 5)]             0       []                      
##  flatten (Flatten)      (None, 1)               0       ['embedding[0][0]']     
##  concatenate (Concaten  (None, 6)               0       ['dense1[0][0]',        
##  ate)                                                    'flatten[0][0]']       
##  dense_2 (Dense)        (None, 30)              210     ['concatenate[0][0]']   
##  dense_1 (Dense)        (None, 5)               155     ['dense_2[0][0]']       
##  dense (Dense)          (None, 1)               6       ['dense_1[0][0]']       
## ================================================================================
## Total params: 385 (1.50 KB)
## Trainable params: 385 (1.50 KB)
## Non-trainable params: 0 (0.00 Byte)
## ________________________________________________________________________________
nn |> 
  compile(
    optimizer = optimizer_adam(learning_rate = 0.001),
    loss = "binary_crossentropy"
  )

# Callbacks
cb <- list(
  callback_early_stopping(patience = 20),
  callback_reduce_lr_on_plateau(patience = 5)
)

# Fit model
history <- nn |>
  fit(
    x = prep_nn(X_train),
    y = y_train,
    epochs = 100,
    batch_size = 400,
    validation_split = 0.2,
    callbacks = cb,
    verbose = 0
  )

plot(history, metrics = c("loss", "val_loss"), smooth = FALSE)

# Interpret
pred_fun <- function(m, X) predict(m, prep_nn(X), batch_size = 1000, verbose = FALSE)

# Performance on test data
pred <- pred_fun(nn, X_test)
logLoss(y_test, pred)
## [1] 0.2533789
r_squared_bernoulli(y_test, pred, reference_mean = mean(y_train))
## [1] 0.002686216
# Permutation importance
perm_importance(
  nn, X = X_test, y = y_test, loss = "logloss", pred_fun = pred_fun, verbose = FALSE) |> 
  plot(fill = "chartreuse4")

# Partial dependence
for (v in xvars) {
  p <- partial_dep(nn, v = v, X = X_train, pred_fun = pred_fun) |> 
    plot(color = "chartreuse4", rotate_x = (v == "veh_body")) +
    ggtitle(paste("PDP for", v)) 
  print(p)
}

6 Exercises

  1. Fit diamond prices by minimizing Gamma deviance with log-link (-> exponential output activation), using the custom loss function defined below. Tune the model by simple validation and evaluate it on a test dataset. Interpret the final model. Hints: I used a smaller learning rate and had to replace the “relu” activations by “tanh”. Furthermore, the response needed to be transformed from int to float.
loss_gamma <- function(y_true, y_pred) {
  -k_log(y_true / y_pred) + y_true / y_pred
}
  2. Study either the optional claims data example or build your own neural net, predicting claim yes/no. For simplicity, you can represent the categorical feature veh_body by integers.

7 Neural Network Slang

Here, we summarize some of the neural network slang.

  • Activation function: The transformation applied to the node values.
  • Architecture: The layout of layers and nodes.
  • Backpropagation: An efficient way to calculate gradients.
  • Batch: A couple of data rows used for one mini-batch gradient descent step.
  • Callback: An action during training (save weights, reduce learning rate, stop training, …).
  • Epoch: One pass of updating the network weights by gradient descent until each observation in the training set has been used once.
  • Embedding: A numeric representation of categorical input as learned by the neural net.
  • Encoding: The values of latent variables of a hidden layer, usually the last.
  • Gradient descent: The basic optimization algorithm of neural networks.
  • Keras: User-friendly wrapper of TensorFlow.
  • Layer: Main organizational unit of a neural network.
  • Learning rate: Controls the step size of gradient descent, i.e., how aggressively the network learns.
  • Node: Nodes on the input layer are the covariates, nodes on the output layer are the response(s), and nodes on a hidden layer are latent variables representing the covariates for the task of predicting the response.
  • Optimizer: The specific variant of gradient descent.
  • PyTorch: An important implementation of neural networks.
  • Stochastic gradient descent (SGD): Mini-batch gradient descent with batches of size 1.
  • TensorFlow: An important implementation of neural networks.
  • Weights: The parameters of a neural net.

8 Chapter Summary

In this chapter, we have glimpsed into the world of neural networks. Step by step we have learned how a neural network works. We have used Keras and TensorFlow to build models brick by brick.

9 Closing Remarks

During this lecture, we have met many ML algorithms and principles. To get used to them, the best approach is practicing. Kaggle is a great place to do so and learn from the best.

A summary and comparison of the algorithms can be found on GitHub. Here is a screenshot as of Sept. 7, 2020:

As a final task and motivation, try out my Analysis scheme X. It can be applied in many situations and works as follows:

  1. Take a property T(Y) of key interest, e.g., the churn rate, a claims frequency, or a loss ratio. Calculate its estimate on the full dataset.
  2. Do a descriptive analysis of T(Y \mid X_j) for a couple of important covariates X_1, \dots, X_p in order to study the association between Y and each of the X_j separately.
  3. Complement Step 2 by running a well-built ML model T(Y \mid X_1, \dots, X_p) = f(X_1, \dots, X_p), using a clean validation strategy.
    • Study model performance.
    • Study variable importance and use it to sort the results of Step 2. Which of the associations are very strong/weak?
    • For each X_j, study its partial dependence (or SHAP dependence) plot. They will accompany the bivariate view of Step 2 with a multivariate one, taking into account the effects from the other covariates.

What additional insights did you get from Step 3?

10 Chapter References

[1] P.J. Werbos, “Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences”, Dissertation, 1974.