In this chapter, we dive into artificial neural networks, one of the main drivers of artificial intelligence.
Neural networks have been around for many decades. (Maybe) the first such model was built by Marvin Minsky in 1951. He called his algorithm SNARC (“stochastic neural-analog reinforcement calculator”). Since then, neural networks have gone through several stages of development. One milestone was the idea of Werbos (1974) to calculate gradients in the optimization algorithm efficiently by an approach called “backpropagation”. Another milestone was the use of GPUs (graphics processing units) to greatly reduce calculation time.
Artificial neural nets are extremely versatile and powerful. They can be used to
In this chapter, we will mainly deal with the first three aspects. Since a lot of new terms are being used, a small glossary can be found in Section “Neural Network Slang”.
To learn how and why neural networks work, we will go through three steps - each illustrated on the diamonds data:
After this, we will be ready to build more complex models.
Let us revisit the simple linear regression \mathbb E(\text{price} \mid \text{carat}) = \alpha + \beta \cdot \text{carat} calculated on the full diamonds data. In Chapter 4, we found the solution \hat\alpha = -2256.36 and \hat\beta = 7756.43 by linear least squares.
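As a quick sanity check (a sketch, not part of the original analysis), this least-squares solution can be reproduced with lm(); the diamonds data ships with ggplot2.

library(ggplot2)  # provides the diamonds data

# OLS fit of price on carat; the coefficients should match the values cited above
coef(lm(price ~ carat, data = diamonds))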
The above situation can be viewed as a neural network with an input layer of two nodes (carat and the intercept, the latter called “bias unit” with value 1) and a fully connected (“dense”) output layer with one node (price). Fully connected means that each node of a layer is a linear function of all node values of the previous layer. Each linear function has parameters or weights to be estimated, in our simple case just \alpha and \beta.
Visualized as a graph, the situation looks as follows:
Part of the figures were done with this cool webtool.
To gain confidence in neural networks, we first show that the parameters estimated by a neural network are quite similar to those learned by the linear least squares method. We will use Google’s TensorFlow with its flexible functional Keras interface. A great book on Keras (and neural networks in general) is Chollet (2018).
library(ggplot2)
library(keras)
# Input layer: we have 1 covariate
input <- layer_input(shape = 1)
# Output layer connected to the input layer
output <- input |>
layer_dense(units = 1)
# Create and compile model
nn <- keras_model(inputs = input, outputs = output)
# summary(nn)
nn |>
compile(
optimizer = optimizer_adam(learning_rate = 1),
loss = "mse",
metrics = metric_root_mean_squared_error()
)
# Fit model - naive without validation
history <- nn |>
fit(
x = diamonds$carat,
y = diamonds$price,
epochs = 30,
batch_size = 100
)
## Epoch 1/30
## 540/540 - 1s - loss: 27096620.0000 - root_mean_squared_error: 5205.4414 - 1s/epoch - 2ms/step
## Epoch 2/30
## 540/540 - 1s - loss: 20241934.0000 - root_mean_squared_error: 4499.1035 - 896ms/epoch - 2ms/step
## Epoch 3/30
## 540/540 - 1s - loss: 15495953.0000 - root_mean_squared_error: 3936.4900 - 859ms/epoch - 2ms/step
## Epoch 4/30
## 540/540 - 1s - loss: 12341229.0000 - root_mean_squared_error: 3513.0085 - 859ms/epoch - 2ms/step
## Epoch 5/30
## 540/540 - 1s - loss: 10354460.0000 - root_mean_squared_error: 3217.8347 - 867ms/epoch - 2ms/step
## Epoch 6/30
## 540/540 - 1s - loss: 9135827.0000 - root_mean_squared_error: 3022.5530 - 803ms/epoch - 1ms/step
## Epoch 7/30
## 540/540 - 1s - loss: 8336326.0000 - root_mean_squared_error: 2887.2695 - 853ms/epoch - 2ms/step
## Epoch 8/30
## 540/540 - 1s - loss: 7699245.5000 - root_mean_squared_error: 2774.7515 - 848ms/epoch - 2ms/step
## Epoch 9/30
## 540/540 - 1s - loss: 7087159.5000 - root_mean_squared_error: 2662.1719 - 916ms/epoch - 2ms/step
## Epoch 10/30
## 540/540 - 1s - loss: 6467482.0000 - root_mean_squared_error: 2543.1245 - 812ms/epoch - 2ms/step
## Epoch 11/30
## 540/540 - 1s - loss: 5853131.0000 - root_mean_squared_error: 2419.3245 - 840ms/epoch - 2ms/step
## Epoch 12/30
## 540/540 - 1s - loss: 5266740.5000 - root_mean_squared_error: 2294.9380 - 798ms/epoch - 1ms/step
## Epoch 13/30
## 540/540 - 1s - loss: 4726106.5000 - root_mean_squared_error: 2173.9609 - 933ms/epoch - 2ms/step
## Epoch 14/30
## 540/540 - 1s - loss: 4240382.0000 - root_mean_squared_error: 2059.2188 - 917ms/epoch - 2ms/step
## Epoch 15/30
## 540/540 - 1s - loss: 3815855.7500 - root_mean_squared_error: 1953.4215 - 882ms/epoch - 2ms/step
## Epoch 16/30
## 540/540 - 1s - loss: 3457884.2500 - root_mean_squared_error: 1859.5387 - 899ms/epoch - 2ms/step
## Epoch 17/30
## 540/540 - 1s - loss: 3165114.2500 - root_mean_squared_error: 1779.0768 - 867ms/epoch - 2ms/step
## Epoch 18/30
## 540/540 - 1s - loss: 2933515.0000 - root_mean_squared_error: 1712.7507 - 792ms/epoch - 1ms/step
## Epoch 19/30
## 540/540 - 1s - loss: 2757288.0000 - root_mean_squared_error: 1660.5083 - 814ms/epoch - 2ms/step
## Epoch 20/30
## 540/540 - 1s - loss: 2628982.5000 - root_mean_squared_error: 1621.4137 - 837ms/epoch - 2ms/step
## Epoch 21/30
## 540/540 - 1s - loss: 2540100.0000 - root_mean_squared_error: 1593.7692 - 853ms/epoch - 2ms/step
## Epoch 22/30
## 540/540 - 1s - loss: 2481616.7500 - root_mean_squared_error: 1575.3148 - 931ms/epoch - 2ms/step
## Epoch 23/30
## 540/540 - 1s - loss: 2445467.0000 - root_mean_squared_error: 1563.7990 - 873ms/epoch - 2ms/step
## Epoch 24/30
## 540/540 - 1s - loss: 2424014.0000 - root_mean_squared_error: 1556.9246 - 831ms/epoch - 2ms/step
## Epoch 25/30
## 540/540 - 1s - loss: 2411806.2500 - root_mean_squared_error: 1552.9991 - 840ms/epoch - 2ms/step
## Epoch 26/30
## 540/540 - 1s - loss: 2405250.7500 - root_mean_squared_error: 1550.8871 - 943ms/epoch - 2ms/step
## Epoch 27/30
## 540/540 - 1s - loss: 2401943.2500 - root_mean_squared_error: 1549.8204 - 910ms/epoch - 2ms/step
## Epoch 28/30
## 540/540 - 1s - loss: 2400009.0000 - root_mean_squared_error: 1549.1963 - 875ms/epoch - 2ms/step
## Epoch 29/30
## 540/540 - 1s - loss: 2399141.0000 - root_mean_squared_error: 1548.9160 - 840ms/epoch - 2ms/step
## Epoch 30/30
## 540/540 - 1s - loss: 2398710.5000 - root_mean_squared_error: 1548.7771 - 805ms/epoch - 1ms/step
plot(history, metrics = "root_mean_squared_error")
unlist(get_weights(nn))
## [1] 7723.831 -2211.618
# Plot effect of carat on average price
data.frame(carat = seq(0.3, 3, by = 0.1)) |>
transform(price = predict(nn, carat, verbose = 0)) |>
ggplot(aes(x = carat, y = price)) +
geom_line(color = "chartreuse4") +
geom_point(color = "chartreuse4")
Comment: The solution of the simple neural network is indeed quite similar to the OLS solution.
Neural nets are typically fitted by mini-batch gradient descent, using backpropagation to efficiently calculate gradients. It works as follows:
Let f_\beta denote a neural net with parameters \beta, and Q(f_\beta, D) = \sum_{(y_i, \boldsymbol x_i) \in D} L(y_i, f_\beta(\boldsymbol x_i)) its total loss on a data set D with respect to the loss function L.
Gradient descent on batches of size 1 is called “stochastic gradient descent” (SGD). Further note that there is no guarantee that the algorithm reaches a global minimum.
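To make the procedure concrete, here is a minimal sketch of mini-batch gradient descent for the simple linear model from above, with the gradients of the batch MSE written out by hand (no backpropagation is needed for a model without hidden layers). Learning rate, batch size, and number of epochs are illustrative choices, not taken from the text.

# Minimal sketch of mini-batch gradient descent for price ~ alpha + beta * carat
set.seed(1)
x <- diamonds$carat
y <- diamonds$price
alpha <- 0
beta <- 0
lr <- 0.1          # learning rate (illustrative)
batch_size <- 100

for (epoch in 1:50) {
  ix <- sample(length(y))  # reshuffle the rows once per epoch
  for (batch in split(ix, ceiling(seq_along(ix) / batch_size))) {
    resid <- alpha + beta * x[batch] - y[batch]
    # Gradient step using the gradients of the batch MSE w.r.t. alpha and beta
    alpha <- alpha - lr * 2 * mean(resid)
    beta <- beta - lr * 2 * mean(resid * x[batch])
  }
}
c(alpha = alpha, beta = beta)  # should be roughly comparable to the OLS solution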
The missing magic component is the so-called activation function \sigma after each layer, which transforms the values of the nodes. So far, we have implicitly used “linear activations”, which - in neural network slang - is just the identity function.
Applying non-linear activation functions after hidden layers introduces non-linear and interaction effects. Typical such functions are
Activation functions applied to the output layer have a different purpose, namely the same as the inverse of the link function of a corresponding GLM. They map predictions to the scale of the response:
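For illustration, here is a small sketch of three commonly used hidden-layer activations (hyperbolic tangent, ReLU, and sigmoid); the plot itself is not part of the original text.

# Typical activation functions, written out in base R for illustration
relu <- function(x) pmax(0, x)
sigmoid <- function(x) 1 / (1 + exp(-x))
curve(tanh(x), from = -4, to = 4, ylim = c(-1.5, 4), ylab = "activation")
curve(relu(x), from = -4, to = 4, add = TRUE, lty = 2)
curve(sigmoid(x), from = -4, to = 4, add = TRUE, lty = 3)
legend("topleft", legend = c("tanh", "ReLU", "sigmoid"), lty = 1:3)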
Let us add a hyperbolic tangent activation function (\sigma) after the hidden layer of our simple example.
Again, the code is very similar to the previous one, except that a hyperbolic tangent activation is used after the hidden layer (and a different learning rate and number of epochs).
library(ggplot2)
library(keras)
# Input layer: we have 1 covariate
input <- layer_input(shape = 1)
# One hidden layer
output <- input |>
layer_dense(units = 5, activation = "tanh") |>
layer_dense(units = 1, activation = "linear")
# Create and compile model
nn <- keras_model(inputs = input, outputs = output)
nn |>
compile(
optimizer = optimizer_adam(learning_rate = 0.2),
loss = "mse",
metrics = metric_root_mean_squared_error()
)
# Fit model - naive without validation
nn |>
fit(
x = diamonds$carat,
y = diamonds$price,
epochs = 50,
batch_size = 100,
verbose = 0
)
# Plot effect of carat on average price
data.frame(carat = seq(0.3, 3, by = 0.1)) |>
transform(price = predict(nn, carat, verbose = 0)) |>
ggplot(aes(x = carat, y = price)) +
geom_line(color = "chartreuse4") +
geom_point(color = "chartreuse4")
Comment: Adding the non-linear activation after the hidden layer has changed the model. The effect of carat on the average price is now represented by a non-linear function.
So far, we have naively fitted the neural networks without setting data aside for testing and validation. Don’t do this! Usually, one sets a small test dataset (e.g., 10% of the rows) aside to assess the final model performance and uses simple (or cross-)validation for model tuning.
In order to choose the main tuning parameters, namely
one often uses simple validation because cross-validation takes too much time.
A neural net does not accept missing values in the input. They need to be filled, e.g., by a typical value or a value below the minimum.
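As a tiny sketch (the values are made up), filling missing values before feeding them to a network could look like this:

# Fill NAs either with a typical value (median) or with a value below the minimum
x <- c(0.3, NA, 1.2, 0.7)
x_typical <- ifelse(is.na(x), median(x, na.rm = TRUE), x)
x_below_min <- ifelse(is.na(x), min(x, na.rm = TRUE) - 1, x)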
Gradient descent starts by random initialization of parameters. This step is optimized for standardized input. Standardization has to be done manually by either
Note that the scaling transformation is calculated on the training data and then applied to the validation and test data. This usually requires a couple of lines of code.
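A minimal sketch of such a manual standardization; the column selection and train/test split are illustrative only:

# Derive center and scale from the training rows only, then reuse them for test
X <- data.matrix(diamonds[, c("carat", "depth", "table")])
ix <- 1:40000  # hypothetical training rows
X_train_s <- scale(X[ix, ])  # scale() stores center/scale as attributes
X_test_s <- scale(
  X[-ix, ],
  center = attr(X_train_s, "scaled:center"),
  scale = attr(X_train_s, "scaled:scale")
)

The larger examples below wrap exactly this logic into a prep_nn() function.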
There are three common ways to represent categorical input variables in a neural network.
For Option 2, input standardization is not required; for Option 3, it must not be applied because the embedding layer expects integers.
Sometimes, we want to take actions during training, such as
Such monitoring tasks are called callbacks. We will see them in the example below.
So far, we have encountered only dense (= fully connected) layers and activation layers. Here are some further types:
Pure gradient descent is rarely applied without tweaks because it tends to get stuck in local minima, especially for complex networks with many layers. Modern variants are “adam”, “nadam”, and “RMSProp”. These optimizers usually work out-of-the-box, except for the learning rate, which has to be chosen manually.
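In Keras, switching the optimizer is a one-line change in compile(). The following sketch reuses the nn object from the example above purely to show the syntax; the learning rates are illustrative.

# Same model, alternative optimizers (each compile() call replaces the previous choice)
nn |> compile(optimizer = optimizer_rmsprop(learning_rate = 0.01), loss = "mse")
nn |> compile(optimizer = optimizer_nadam(learning_rate = 0.01), loss = "mse")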
Frameworks like Keras/TensorFlow offer many predefined loss functions and evaluation metrics. Choosing them is a crucial step, just as with tree boosting. Using TensorFlow’s backend functions, one can define custom metrics and loss functions (see exercises).
As with linear models, a model with too many parameters will overfit in an undesirable way. With about 50 to 100 observations per parameter, overfitting is usually not problematic. (Different rules apply to image and text data). Besides using fewer parameters, the main ways to reduce overfitting are as follows:
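As an illustration, two widely used regularization tools in Keras are dropout layers and L2 weight penalties (common examples, not necessarily the specific methods listed in the original). A sketch, reusing the input layer from the simple example above and with illustrative rates:

# Sketch: dropout and an L2 penalty added to a dense architecture
output <- input |>
  layer_dense(
    units = 30, activation = "relu", kernel_regularizer = regularizer_l2(1e-4)
  ) |>
  layer_dropout(rate = 0.2) |>
  layer_dense(units = 1)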
How many layers and how many nodes per layer should be selected? For tabular data, up to three hidden layers are usually sufficient. If we start with p input variables, the number of nodes in the first hidden layer is usually higher than p and reduces for later layers. There should not be a “representational bottleneck”, i.e., an early hidden layer with too few parameters.
The number of parameters should not be too high compared to the number of rows, see “Overfitting and regularization” above.
Variable importance of covariates in neural networks can be assessed by permutation importance (see below) or SHAP importance (not covered). Covariate effects can be investigated, e.g., by partial dependence plots or SHAP dependence plots.
We will now fit a neural net with two hidden layers (30 and 15 nodes) and a total of 631 parameters to model diamond prices. Learning rate, activation functions, and batch size were chosen manually by simple validation. The number of epochs is chosen automatically by an early stopping callback.
library(ggplot2)
library(withr)
library(keras)
library(MetricsWeighted)
library(hstats)
diamonds <- transform(diamonds, log_price = log(price), log_carat = log(carat))
y <- "log_price"
xvars <- c("log_carat", "color", "cut", "clarity")
# Split into train and test
with_seed(
9838,
ix <- sample(nrow(diamonds), 0.8 * nrow(diamonds))
)
train <- diamonds[ix, ]
test <- diamonds[-ix, ]
X_train <- train[, xvars]
X_test <- test[, xvars]
y_train <- train[[y]]
y_test <- test[[y]]
# Standardization information using X_train
temp <- scale(data.matrix(X_train))
sc <- list(
center = attr(temp, "scaled:center"),
scale = attr(temp, "scaled:scale")
)
# Function that maps data to scaled network input
prep_nn <- function(X, sel = xvars, scaling = sc) {
X <- data.matrix(X[, sel, drop = FALSE])
scale(X, center = scaling$center, scale = scaling$scale)
}
# Trying to make things reproducible...
k_clear_session()
tensorflow::set_random_seed(499)
# Input layer: we have 4 covariates
input <- layer_input(shape = 4)
# Two hidden layers with contracting number of nodes
output <- input |>
layer_dense(units = 30, activation = "relu") |>
layer_dense(units = 15, activation = "relu") |>
layer_dense(units = 1, activation = "linear")
# Create and compile model
nn <- keras_model(inputs = input, outputs = output)
summary(nn)
## Model: "model"
## ________________________________________________________________________________
## Layer (type) Output Shape Param #
## ================================================================================
## input_1 (InputLayer) [(None, 4)] 0
## dense_2 (Dense) (None, 30) 150
## dense_1 (Dense) (None, 15) 465
## dense (Dense) (None, 1) 16
## ================================================================================
## Total params: 631 (2.46 KB)
## Trainable params: 631 (2.46 KB)
## Non-trainable params: 0 (0.00 Byte)
## ________________________________________________________________________________
nn |>
compile(
optimizer = optimizer_adam(learning_rate = 0.05),
loss = "mse",
metrics = metric_root_mean_squared_error()
)
# Callbacks
cb <- list(
callback_early_stopping(patience = 20),
callback_reduce_lr_on_plateau(patience = 5)
)
# Fit model
history <- nn |>
fit(
x = prep_nn(X_train),
y = y_train,
epochs = 100,
batch_size = 400,
validation_split = 0.2,
callbacks = cb,
verbose = 0
)
plot(history, metrics = "root_mean_squared_error", smooth = FALSE) +
coord_cartesian(xlim = c(2, length(history$metrics$loss)), ylim = c(0.1, 0.2))
# Interpret
pred_fun <- function(m, X) predict(m, prep_nn(X), batch_size = 1000, verbose = 0)
# Performance on test data
pred <- pred_fun(nn, X_test)
rmse(y_test, pred)
## [1] 0.1184944
r_squared(y_test, pred, reference_mean = mean(y_train))
## [1] 0.9864616
# Variable importance?
# PDPs
for (v in xvars) {
p <- partial_dep(nn, v = v, X = X_train, pred_fun = pred_fun) |>
plot(color = "chartreuse4") +
ggtitle(paste("PDP for", v))
print(p)
}
Comment: Performance is a bit lower than that of the tree-based models. This might partly be a consequence of effects being smoother, but also because the model has not been refitted on the full training data for simplicity (20% of the training rows are used for validation). Compared to our tree-based models, one important aspect of model interpretation is missing: variable importance.
So far, we have measured variable importance by model-specific approaches such as split-gain importance for trees. A model-agnostic technique is permutation importance, introduced in Breiman (2001) for random forests and later generalized by Fisher, Rudin, and Dominici (2018) to other models. It measures by how much a relevant performance measure S (e.g., the average deviance or the RMSE) of a fitted model \hat f, evaluated on a dataset D, worsens after randomly shuffling the values of the j-th feature. The larger the performance change, the more important the feature.
Formally, permutation importance \text{PVI}(j, D) of the j-th feature X^{(j)} and data D can be defined as follows: \text{PVI}(j, D) = S(\hat f, D^{(j)}) - S(\hat f, D), where D^{(j)} is a version of D with randomly permuted values in the column representing the j-th feature X^{(j)}. For simplicity, we assume that the performance measure S is defined such that smaller values mean better performance.
If there are p features and n is the sample size of D, then n(p+1) predictions have to be calculated. Compared to other methods from XAI, this is a relatively cheap operation. Note that the process can be repeated multiple times to increase (and assess) robustness of the results.
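The following minimal sketch translates the formula directly into code, using the RMSE as S and a single permutation per feature. It reuses nn, X_test, y_test, xvars, and pred_fun from the diamonds example above; in practice, we use the more convenient perm_importance() from the hstats package, as shown below.

# Permutation importance by hand: S(f, D^(j)) - S(f, D) with S = RMSE
s_orig <- rmse(y_test, pred_fun(nn, X_test))
sapply(xvars, function(v) {
  X_perm <- X_test
  X_perm[[v]] <- sample(X_perm[[v]])  # shuffle the values of feature v
  rmse(y_test, pred_fun(nn, X_perm)) - s_orig
})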
Most of the text of this section is taken from our responsible ML lecture.
perm_importance(nn, X = X_test, y = y_test, pred_fun = pred_fun, verbose = FALSE) |>
plot(fill = "chartreuse4")
Comment: As in all our models on diamond prices, the dominating feature is the diamond size.
The last example on neural networks aims to show
To do so, we create a model for (log) taxi trip durations, using the same features as before. We use the same train/test split as with the corresponding boosted trees model. Slight tuning (mainly the learning rate and the architecture) is done manually. The number of epochs is chosen by an early stopping callback. Pick-up location is modeled by a ten-dimensional embedding layer.
library(arrow)
library(data.table)
library(withr)
library(ggplot2)
library(keras)
library(MetricsWeighted)
library(hstats)
df <- read_parquet("taxi/yellow_tripdata_2018-01.parquet")
setDT(df)
head(df)
#=======================================================================
# Prepare data for modeling
#=======================================================================
df[, duration := as.numeric(
difftime(tpep_dropoff_datetime, tpep_pickup_datetime, units = "mins")
)]
df <- df[between(trip_distance, 0.2, 100) & between(duration, 1, 120)]
df[, `:=`(
pu_hour = factor(data.table::hour(tpep_pickup_datetime)),
weekday = factor(data.table::wday(tpep_pickup_datetime)), # 1 = Sunday
pu_loc = forcats::fct_lump_min(factor(PULocationID), 1e5),
log_duration = log(duration),
log_distance = log(trip_distance)
)]
y <- "log_duration"
x_dense <- c("log_distance", "weekday", "pu_hour")
xvars <- c(x_dense, "pu_loc")
# Random split
with_seed(1,
ix <- sample(nrow(df), 0.8 * nrow(df))
)
train <- as.data.frame(df[ix, c(y, xvars), with = FALSE])
test <- as.data.frame(df[-ix, c(y, xvars), with = FALSE])
y_train <- train[[y]]
y_test <- test[[y]]
X_train <- train[, xvars]
X_test <- test[, xvars]
# Standardization info using X_train
temp <- scale(data.matrix(X_train[, x_dense]))
sc <- list(
center = attr(temp, "scaled:center"),
scale = attr(temp, "scaled:scale")
)
# Function that maps data.frame to network input
# (a list with a scaled matrix and an integer vector representing pu_loc)
prep_nn <- function(X, dense = x_dense, scaling = sc) {
X_dense <- data.matrix(X[, dense, drop = FALSE])
X_dense <- scale(X_dense, center = scaling$center, scale = scaling$scale)
list(dense1 = X_dense, pu_loc = as.integer(X$pu_loc) - 1)
}
# Does prep_nn() really work?
prep_nn(head(test, 1))
## $dense1
## log_distance weekday pu_hour
## 1 0.478163 -1.031 -1.995948
## attr(,"scaled:center")
## log_distance weekday pu_hour
## 0.5651471 4.0010674 14.7945473
## attr(,"scaled:scale")
## log_distance weekday pu_hour
## 0.8953113 1.9409001 6.4102621
##
## $pu_loc
## [1] 31
# Trying to make things reproducible...
k_clear_session()
tensorflow::set_random_seed(469)
# Inputs
input_dense <- layer_input(shape = length(x_dense), name = "dense1")
input_pu_loc <- layer_input(shape = 1, name = "pu_loc")
# Embedding of pu_loc
emb_pu_loc <- input_pu_loc |>
layer_embedding(input_dim = nlevels(train$pu_loc) + 1, output_dim = 10) |>
layer_flatten()
# Combine dense input and embedding, and add dense layers
outputs <- list(input_dense, emb_pu_loc) |>
layer_concatenate() |>
layer_dense(100, activation = "relu") |>
layer_dense(10, activation = "relu") |>
layer_dense(1, activation = "linear")
# Input
inputs <- list(dense1 = input_dense, pu_loc = input_pu_loc)
# Create and compile model
nn <- keras_model(inputs = inputs, outputs = outputs)
summary(nn)
## Model: "model"
## ________________________________________________________________________________
## Layer (type) Output Shape Param Connected to
## #
## ================================================================================
## pu_loc (InputLayer) [(None, 1)] 0 []
## embedding (Embedding) (None, 1, 10) 390 ['pu_loc[0][0]']
## dense1 (InputLayer) [(None, 3)] 0 []
## flatten (Flatten) (None, 10) 0 ['embedding[0][0]']
## concatenate (Concaten (None, 13) 0 ['dense1[0][0]',
## ate) 'flatten[0][0]']
## dense_2 (Dense) (None, 100) 1400 ['concatenate[0][0]']
## dense_1 (Dense) (None, 10) 1010 ['dense_2[0][0]']
## dense (Dense) (None, 1) 11 ['dense_1[0][0]']
## ================================================================================
## Total params: 2811 (10.98 KB)
## Trainable params: 2811 (10.98 KB)
## Non-trainable params: 0 (0.00 Byte)
## ________________________________________________________________________________
nn |>
compile(optimizer = optimizer_adam(learning_rate = 0.005), loss = "mse")
# Callbacks
cb <- list(
callback_early_stopping(patience = 10),
callback_reduce_lr_on_plateau(patience = 5)
)
# Fit model
if (FALSE) { # Set to TRUE to refit
history <- nn |>
fit(
x = prep_nn(X_train),
y = y_train,
epochs = 100,
batch_size = 50000,
validation_split = 0.2,
callbacks = cb,
verbose = 0
)
plot(history, metrics = c("loss", "val_loss"), smooth = FALSE) +
coord_cartesian(ylim = c(0, 0.15))
save_model_weights_hdf5(nn, "taxi/taxi_nn.h5")
} else {
load_model_weights_hdf5(nn, "taxi/taxi_nn.h5")
}
#=======================================================================
# Inspect
#=======================================================================
pred_fun <- function(m, X) predict(m, prep_nn(X), batch_size = 50000, verbose = 0)
# Performance on test data
pred <- pred_fun(nn, X_test)
rmse(y_test, pred)
## [1] 0.304573
r_squared(y_test, pred, reference_mean = mean(y_train))
## [1] 0.8195031
# Permutation importance
perm_importance(nn, X = X_test, y = y_test, pred_fun = pred_fun, verbose = FALSE) |>
plot(fill = "chartreuse4")
# PDPs
for (v in xvars) {
p <- partial_dep(nn, v = v, X = X_train, pred_fun = pred_fun) |>
plot(color = "chartreuse4", rotate_x = (v == "pu_loc")) +
ggtitle(paste("PDP for", v))
print(p)
}
Comment: The resulting model performs quite similarly to the boosted trees model. As in the diamonds example above, we used an early-stopping callback on 20% of the training rows to automatically find a good number of epochs. And again, for simplicity, we did not refit the model on the full training data. Thus, the final model was trained on fewer rows than the LightGBM model.
Here, we summarize some of the neural network slang.
Let T(Y) be a quantity of interest, e.g., an average diamond price, claims frequency, or the success probability of a medical intervention. Many interesting insights can be found by the following analysis scheme:
Let’s apply the above scheme to diamond prices.
library(ggplot2)
library(ranger)
library(hstats)
library(withr)
# STEP 1
mean(diamonds$price)
## [1] 3932.8
# STEP 2
xvars <- c("carat", "color", "cut", "clarity")
for (v in xvars) {
p <- ggplot(diamonds, aes(.data[[v]], price)) +
stat_summary(fun = "mean", color = "chartreuse4") +
if (!ggplot2:::is.discrete(diamonds[[v]])) scale_x_binned()
print(p)
}
# STEP 3
with_seed(
9838,
ix <- sample(nrow(diamonds), 0.8 * nrow(diamonds))
)
fit <- ranger(
reformulate(xvars, "price"),
num.trees = 500,
data = diamonds[ix, ],
seed = 83
)
fit
## Ranger result
##
## Call:
## ranger(reformulate(xvars, "price"), num.trees = 500, data = diamonds[ix, ], seed = 83)
##
## Type: Regression
## Number of trees: 500
## Sample size: 43152
## Number of independent variables: 4
## Mtry: 2
## Target node size: 5
## Variable importance mode: none
## Splitrule: variance
## OOB prediction error (MSE): 308191.3
## R squared (OOB): 0.9804707
# Importance wrt test MSE
imp <- perm_importance(
fit, X = diamonds[-ix, xvars], y = diamonds$price[-ix], verbose = FALSE
)
plot(imp, fill = "chartreuse4")
# Add PDP to descriptive effect
for (v in xvars) {
p <- partial_dep(fit, v = v, X = diamonds[ix, ]) |>
plot(color = "chartreuse4") +
ggtitle(paste("PDP for", v))
print(p)
}
Selected insights
Explain why a neural network is conceptually closer to a GLM than to a random forest. Why are activation functions important?
Fit a neural network to (non-logarithmic) diamond prices by minimizing Gamma deviance with log-link (-> exponential output activation). Use the custom loss function defined below. Tune the model by simple validation and evaluate it on a test dataset. Study test performance, permutation importance, and partial dependence plots. Hints: Play with activation functions. Furthermore, the response needs to be transformed from int to float for certain TensorFlow versions.
# Per-observation gamma deviance, up to additive and multiplicative constants
loss_gamma <- function(y_true, y_pred) {
  -k_log(y_true / y_pred) + y_true / y_pred
}
(Optional) Craft a neural net for the “dataCar” dataset. The binary response is “clm”, and the features are “veh_value”, “veh_age”, “gender”, “area”, “agecat”, and “veh_body”. Represent the latter with a one-dimensional embedding layer. You can either write the code from scratch or modify the taxi trip code above. Work with a clean train/test split. Study test performance, permutation importance, and partial dependence plots.
Choose a data set of your interest and apply “Analysis Scheme X” to it.
In this chapter, we have glimpsed into the world of neural networks and deep learning. Step by step we have learned how a neural network works. We have used Keras and TensorFlow to build models brick by brick. Furthermore, we met a simple analysis scheme involving ML that can be applied to many analysis situations.
Here is a short comparison of some properties of the ML algorithms learned in this lecture (screenshot as per Sept. 7, 2020):