Getting started with Tensorflow, Keras in Python and R


The Pale Blue Dot

“From this distant vantage point, the Earth might not seem of any particular interest. But for us, it’s different. Consider again that dot. That’s here, that’s home, that’s us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every “superstar,” every “supreme leader,” every saint and sinner in the history of our species lived there—on the mote of dust suspended in a sunbeam.”

Carl Sagan

Tensorflow and Keras are Deep Learning frameworks that really simplify a lot for the user. If you are familiar with Machine Learning and Deep Learning concepts, then Tensorflow and Keras are a playground to realize your ideas. In this post I show how you can get started with Tensorflow and Keras in both Python and R.

Tensorflow in Python

For Tensorflow in Python, I found Google's Colab an ideal environment for running Deep Learning code. This is a Google research project where you can execute your code on GPUs, TPUs etc.

Tensorflow in R (RStudio)

To execute Tensorflow in R (RStudio) you need to install the tensorflow and keras packages as shown below.

In this post I show how to get started with Tensorflow and Keras in R.

# Install Tensorflow in RStudio
#install_tensorflow()
# Install Keras
#install.packages("keras")
library(tensorflow)
library(keras)

This post takes 3 different Machine Learning problems and uses the Tensorflow/Keras framework to solve them.

Note:

You can view the Google Colab notebook at Tensorflow in Python

The RMarkdown file has been published at RPubs and can be accessed at Getting started with Tensorflow in R.

Check out my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($14.99) and in Kindle version ($9.99/Rs449).

1. Multivariate regression with Tensorflow – Python

This code performs multivariate regression using Tensorflow and Keras to predict the progression of Parkinson's disease from sound recordings; see the UCI Parkinsons Telemonitoring Data Set. The clinician's motor UPDRS score has to be predicted from the set of features.

In [0]:

# Import tensorflow
import tensorflow as tf
from tensorflow import keras

In [2]:

#Get the data from the UCI Machine Learning repository
dataset = keras.utils.get_file("parkinsons_updrs.data", "https://archive.ics.uci.edu/ml/machine-learning-databases/parkinsons/telemonitoring/parkinsons_updrs.data")
Downloading data from https://archive.ics.uci.edu/ml/machine-learning-databases/parkinsons/telemonitoring/parkinsons_updrs.data
917504/911261 [==============================] - 0s 0us/step

In [3]:

# Read the CSV file 
import pandas as pd
parkinsons = pd.read_csv(dataset, na_values = "?", comment='\t',
                      sep=",", skipinitialspace=True)
print(parkinsons.shape)
print(parkinsons.columns)
#Check if there are any NAs in the rows
parkinsons.isna().sum()
(5875, 22)
Index(['subject#', 'age', 'sex', 'test_time', 'motor_UPDRS', 'total_UPDRS',
       'Jitter(%)', 'Jitter(Abs)', 'Jitter:RAP', 'Jitter:PPQ5', 'Jitter:DDP',
       'Shimmer', 'Shimmer(dB)', 'Shimmer:APQ3', 'Shimmer:APQ5',
       'Shimmer:APQ11', 'Shimmer:DDA', 'NHR', 'HNR', 'RPDE', 'DFA', 'PPE'],
      dtype='object')

Out[3]:

subject#         0
age              0
sex              0
test_time        0
motor_UPDRS      0
total_UPDRS      0
Jitter(%)        0
Jitter(Abs)      0
Jitter:RAP       0
Jitter:PPQ5      0
Jitter:DDP       0
Shimmer          0
Shimmer(dB)      0
Shimmer:APQ3     0
Shimmer:APQ5     0
Shimmer:APQ11    0
Shimmer:DDA      0
NHR              0
HNR              0
RPDE             0
DFA              0
PPE              0
dtype: int64

Note: To see how to create dummy variables see my post Practical Machine Learning with R and Python – Part 2

In [4]:

# Drop the column subject# as it is not relevant
parkinsons1=parkinsons.drop(['subject#'],axis=1)

# Create dummy variables for sex (M/F)
parkinsons2=pd.get_dummies(parkinsons1,columns=['sex'])
parkinsons2.head()

Out[4]:
age test_time motor_UPDRS total_UPDRS Jitter(%) Jitter(Abs) Jitter:RAP Jitter:PPQ5 Jitter:DDP Shimmer Shimmer(dB) Shimmer:APQ3 Shimmer:APQ5 Shimmer:APQ11 Shimmer:DDA NHR HNR RPDE DFA PPE sex_0 sex_1
0 72 5.6431 28.199 34.398 0.00662 0.000034 0.00401 0.00317 0.01204 0.02565 0.230 0.01438 0.01309 0.01662 0.04314 0.014290 21.640 0.41888 0.54842 0.16006 1 0
1 72 12.6660 28.447 34.894 0.00300 0.000017 0.00132 0.00150 0.00395 0.02024 0.179 0.00994 0.01072 0.01689 0.02982 0.011112 27.183 0.43493 0.56477 0.10810 1 0
2 72 19.6810 28.695 35.389 0.00481 0.000025 0.00205 0.00208 0.00616 0.01675 0.181 0.00734 0.00844 0.01458 0.02202 0.020220 23.047 0.46222 0.54405 0.21014 1 0
3 72 25.6470 28.905 35.810 0.00528 0.000027 0.00191 0.00264 0.00573 0.02309 0.327 0.01106 0.01265 0.01963 0.03317 0.027837 24.445 0.48730 0.57794 0.33277 1 0
4 72 33.6420 29.187 36.375 0.00335 0.000020 0.00093 0.00130 0.00278 0.01703 0.176 0.00679 0.00929 0.01819 0.02036 0.011625 26.126 0.47188 0.56122 0.19361 1 0
# Create a training and test data set with 80%/20%
train_dataset = parkinsons2.sample(frac=0.8,random_state=0)
test_dataset = parkinsons2.drop(train_dataset.index)

# Select columns
train_dataset1= train_dataset[['age', 'test_time', 'Jitter(%)', 'Jitter(Abs)',
       'Jitter:RAP', 'Jitter:PPQ5', 'Jitter:DDP', 'Shimmer', 'Shimmer(dB)',
       'Shimmer:APQ3', 'Shimmer:APQ5', 'Shimmer:APQ11', 'Shimmer:DDA', 'NHR',
       'HNR', 'RPDE', 'DFA', 'PPE', 'sex_0', 'sex_1']]
test_dataset1= test_dataset[['age','test_time', 'Jitter(%)', 'Jitter(Abs)',
       'Jitter:RAP', 'Jitter:PPQ5', 'Jitter:DDP', 'Shimmer', 'Shimmer(dB)',
       'Shimmer:APQ3', 'Shimmer:APQ5', 'Shimmer:APQ11', 'Shimmer:DDA', 'NHR',
       'HNR', 'RPDE', 'DFA', 'PPE', 'sex_0', 'sex_1']]

In [7]:

# Generate the statistics of the columns for use in normalization of the data
train_stats = train_dataset1.describe()
train_stats = train_stats.transpose()
train_stats

Out[7]:

              count   mean       std        min        25%        50%        75%        max
age           4700.0  64.792766  8.870401   36.000000  58.000000  65.000000  72.000000  85.000000
test_time     4700.0  93.399490  53.630411  -4.262500  46.852250  93.405000  139.367500 215.490000
Jitter(%)     4700.0  0.006136   0.005612   0.000830   0.003560   0.004900   0.006770   0.099990
Jitter(Abs)   4700.0  0.000044   0.000036   0.000002   0.000022   0.000034   0.000053   0.000396
Jitter:RAP    4700.0  0.002969   0.003089   0.000330   0.001570   0.002235   0.003260   0.057540
Jitter:PPQ5   4700.0  0.003271   0.003760   0.000430   0.001810   0.002480   0.003460   0.069560
Jitter:DDP    4700.0  0.008908   0.009267   0.000980   0.004710   0.006705   0.009790   0.172630
Shimmer       4700.0  0.033992   0.025922   0.003060   0.019020   0.027385   0.039810   0.268630
Shimmer(dB)   4700.0  0.310487   0.231016   0.026000   0.175000   0.251000   0.363250   2.107000
Shimmer:APQ3  4700.0  0.017125   0.013275   0.001610   0.009190   0.013615   0.020562   0.162670
Shimmer:APQ5  4700.0  0.020151   0.016848   0.001940   0.010750   0.015785   0.023733   0.167020
Shimmer:APQ11 4700.0  0.027508   0.020270   0.002490   0.015630   0.022685   0.032713   0.275460
Shimmer:DDA   4700.0  0.051375   0.039826   0.004840   0.027567   0.040845   0.061683   0.488020
NHR           4700.0  0.032116   0.060206   0.000304   0.010827   0.018403   0.031452   0.748260
HNR           4700.0  21.704631  4.288853   1.659000   19.447750  21.973000  24.445250  37.187000
RPDE          4700.0  0.542549   0.100212   0.151020   0.471235   0.543490   0.614335   0.966080
DFA           4700.0  0.653015   0.070446   0.514040   0.596470   0.643285   0.710618   0.865600
PPE           4700.0  0.219559   0.091506   0.021983   0.156470   0.205340   0.264017   0.731730
sex_0         4700.0  0.681489   0.465948   0.000000   0.000000   1.000000   1.000000   1.000000
sex_1         4700.0  0.318511   0.465948   0.000000   0.000000   0.000000   1.000000   1.000000

In [0]:

# Create the target variable
train_labels = train_dataset.pop('motor_UPDRS')
test_labels = test_dataset.pop('motor_UPDRS')

In [0]:

# Normalize the data by subtracting the mean and dividing by the standard deviation
def normalize(x):
  return (x - train_stats['mean']) / train_stats['std']

# Create normalized training and test data
normalized_train_data = normalize(train_dataset1)
normalized_test_data = normalize(test_dataset1)

In [0]:

# Create a Deep Learning model with keras
model = tf.keras.Sequential([
    keras.layers.Dense(6, activation=tf.nn.relu, input_shape=[len(train_dataset1.keys())]),
    keras.layers.Dense(9, activation=tf.nn.relu),
    keras.layers.Dense(6,activation=tf.nn.relu),
    keras.layers.Dense(1)
  ])

# Use the Adam optimizer with a learning rate of 0.01
optimizer=keras.optimizers.Adam(lr=.01, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)

# Set the metrics to Mean Absolute Error and Mean Squared Error. For regression, the loss is mean_squared_error
model.compile(loss='mean_squared_error',
                optimizer=optimizer,
                metrics=['mean_absolute_error', 'mean_squared_error'])

In [0]:

# Fit the model, using the test data for validation
history=model.fit(
  normalized_train_data, train_labels,
  epochs=1000, validation_data = (normalized_test_data,test_labels), verbose=0)

In [26]:

hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()

Out[26]:

     loss       mean_absolute_error  mean_squared_error  val_loss   val_mean_absolute_error  val_mean_squared_error  epoch
995  15.773989  2.936990             15.773988           16.980803  3.028168                 16.980803               995
996  15.238623  2.873420             15.238622           17.458752  3.101033                 17.458752               996
997  15.437594  2.895500             15.437593           16.926016  2.971508                 16.926018               997
998  15.867891  2.943521             15.867892           16.950249  2.985036                 16.950249               998
999  15.846878  2.938914             15.846880           17.095623  3.014504                 17.095625               999

In [30]:

# Plot the training and validation errors against the epoch
import matplotlib.pyplot as plt

def plot_history(history):
  hist = pd.DataFrame(history.history)
  hist['epoch'] = history.epoch

  plt.figure()
  plt.xlabel('Epoch')
  plt.ylabel('Mean Abs Error')
  plt.plot(hist['epoch'], hist['mean_absolute_error'],
           label='Train Error')
  plt.plot(hist['epoch'], hist['val_mean_absolute_error'],
           label = 'Val Error')
  plt.ylim([2,5])
  plt.legend()

  plt.figure()
  plt.xlabel('Epoch')
  plt.ylabel('Mean Square Error ')
  plt.plot(hist['epoch'], hist['mean_squared_error'],
           label='Train Error')
  plt.plot(hist['epoch'], hist['val_mean_squared_error'],
           label = 'Val Error')
  plt.ylim([10,40])
  plt.legend()
  plt.show()


plot_history(history)

Fig: Mean Absolute Error for training and validation data

Fig: Mean Squared Error for training and validation data

Observation

It can be seen that the mean absolute error hovers around 3.0 on average, and the validation error is about the same. This can be reduced by playing around with the hyperparameters and increasing the number of iterations, as sketched below.
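For example, here is a minimal sketch of one such experiment, assuming the same normalized data as above (the layer widths, the learning rate and the early-stopping patience are illustrative choices, not tuned values):

# A hypothetical variant: wider layers, a smaller learning rate and early stopping
model2 = tf.keras.Sequential([
    keras.layers.Dense(12, activation=tf.nn.relu, input_shape=[len(train_dataset1.keys())]),
    keras.layers.Dense(12, activation=tf.nn.relu),
    keras.layers.Dense(1)
])
model2.compile(loss='mean_squared_error',
               optimizer=keras.optimizers.Adam(lr=0.001),
               metrics=['mean_absolute_error', 'mean_squared_error'])

# Stop training once the validation loss has not improved for 50 epochs
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=50)
history2 = model2.fit(normalized_train_data, train_labels,
                      epochs=1000, verbose=0, callbacks=[early_stop],
                      validation_data=(normalized_test_data, test_labels))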

1a. Multivariate Regression in Tensorflow – R

# Install Tensorflow in RStudio
#install_tensorflow()
# Install Keras
#install.packages("keras")
library(tensorflow)
library(keras)
library(dplyr)
library(dummies)
## dummies-1.5.6 provided by Decision Patterns

Multivariate regression

This code performs multivariate regression using Tensorflow and Keras to predict the progression of Parkinson's disease from sound recordings; see the UCI Parkinsons Telemonitoring Data Set. The clinician's motor UPDRS score has to be predicted from the set of features.

Read the data

# Download the Parkinson's data from UCI Machine Learning repository
dataset <- read.csv("https://archive.ics.uci.edu/ml/machine-learning-databases/parkinsons/telemonitoring/parkinsons_updrs.data")

# Set the column names
names(dataset) <- c("subject","age", "sex", "test_time","motor_UPDRS","total_UPDRS","Jitter","Jitter.Abs",
                 "Jitter.RAP","Jitter.PPQ5","Jitter.DDP","Shimmer", "Shimmer.dB", "Shimmer.APQ3",
                 "Shimmer.APQ5","Shimmer.APQ11","Shimmer.DDA", "NHR","HNR", "RPDE", "DFA","PPE")

# Remove the column 'subject' as it is not relevant to analysis
dataset1 <- subset(dataset, select = -c(subject))

# Make the column 'sex' as a factor for using dummies
dataset1$sex=as.factor(dataset1$sex)
# Add dummy variables for the categorical variable 'sex'
dataset2 <- dummy.data.frame(dataset1, sep = ".")
## Warning in model.matrix.default(~x - 1, model.frame(~x - 1), contrasts =
## FALSE): non-list contrasts argument ignored
dataset3 <- na.omit(dataset2)

Split the data into training and test sets (80/20)

## Split data 80% training and 20% test
sample_size <- floor(0.8 * nrow(dataset3))

## set the seed to make your partition reproducible
set.seed(12)
train_index <- sample(seq_len(nrow(dataset3)), size = sample_size)

train_dataset <- dataset3[train_index, ]
test_dataset <- dataset3[-train_index, ]

train_data <- train_dataset %>% select(sex.0,sex.1,age, test_time,Jitter,Jitter.Abs,Jitter.PPQ5,Jitter.DDP,
                              Shimmer, Shimmer.dB,Shimmer.APQ3,Shimmer.APQ11,
                              Shimmer.DDA,NHR,HNR,RPDE,DFA,PPE)

train_labels <- select(train_dataset,motor_UPDRS)
test_data <- test_dataset %>% select(sex.0,sex.1,age, test_time,Jitter,Jitter.Abs,Jitter.PPQ5,Jitter.DDP,
                              Shimmer, Shimmer.dB,Shimmer.APQ3,Shimmer.APQ11,
                              Shimmer.DDA,NHR,HNR,RPDE,DFA,PPE)
test_labels <- select(test_dataset,motor_UPDRS)

Normalize the data

# Normalize the data by subtracting the mean and dividing by the standard deviation
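# Note: here each data set is normalized with its own mean and standard deviation;
# the Python version above reuses the training statistics instead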
normalize<-function(x) {
  y<-(x - mean(x)) / sd(x)
  return(y)
}

normalized_train_data <-apply(train_data,2,normalize)
# Convert to matrix
train_labels <- as.matrix(train_labels)
normalized_test_data <- apply(test_data,2,normalize)
test_labels <- as.matrix(test_labels)

Create the Deep Learning Model

model <- keras_model_sequential()
model %>% 
  layer_dense(units = 6, activation = 'relu', input_shape = dim(normalized_train_data)[2]) %>% 
  layer_dense(units = 9, activation = 'relu') %>%
  layer_dense(units = 6, activation = 'relu') %>%
  layer_dense(units = 1)

# Set the metrics to Mean Absolute Error and Mean Squared Error. For regression, the loss is
# mean_squared_error
model %>% compile(
  loss = 'mean_squared_error',
  optimizer = optimizer_rmsprop(),
  metrics = c('mean_absolute_error','mean_squared_error')
)

# Fit the model
# Use the test data for validation
history <- model %>% fit(
  normalized_train_data, train_labels, 
  epochs = 30, batch_size = 128, 
  validation_data = list(normalized_test_data,test_labels)
)

Plot mean squared error, mean absolute error and loss for training data and test data

plot(history)

Fig 1: Mean squared error, mean absolute error and loss for training and test data

2. Binary classification in Tensorflow – Python

This is a simple binary classification problem from the UCI Machine Learning repository and deals with data on breast cancer from the Univ. of Wisconsin; see the Breast Cancer Wisconsin (Diagnostic) Data Set.

In [31]:

import tensorflow as tf
from tensorflow import keras
import pandas as pd
# Read the data set from UCI ML site
dataset_path = keras.utils.get_file("breast-cancer-wisconsin.data", "https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data")
raw_dataset = pd.read_csv(dataset_path, sep=",", na_values = "?", skipinitialspace=True,)
dataset = raw_dataset.copy()

#Check for Null and drop
dataset.isna().sum()
dataset = dataset.dropna()
dataset.isna().sum()

# Set the column names
dataset.columns = ["id","thickness",	"cellsize",	"cellshape","adhesion","epicellsize",
                    "barenuclei","chromatin","normalnucleoli","mitoses","class"]
dataset.head()
Downloading data from https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data
24576/19889 [=====================================] - 0s 1us/step
id	thickness	cellsize	cellshape	adhesion	epicellsize	barenuclei	chromatin	normalnucleoli	mitoses	class
0	1002945	5	4	4	5	7	10.0	3	2	1	2
1	1015425	3	1	1	1	2	2.0	3	1	1	2
2	1016277	6	8	8	1	3	4.0	3	7	1	2
3	1017023	4	1	1	3	2	1.0	3	1	1	2
4	1017122	8	10	10	8	7	10.0	9	7	1	4

In [34]:
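The input for this cell did not survive in this snapshot. A minimal sketch of what it may have been, producing summary statistics like the output below (the 80/20 split and random_state are assumptions carried over from the regression example):

# Hypothetical reconstruction: split the data 80/20 and compute statistics of the features
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)
features = ['thickness', 'cellsize', 'cellshape', 'adhesion', 'epicellsize',
            'barenuclei', 'chromatin', 'normalnucleoli', 'mitoses']
train_stats = train_dataset[features].describe().transpose()
train_stats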

Out[34]:

               count  mean      std       min  25%  50%  75%  max
thickness      546.0  4.430403  2.812768  1.0  2.0  4.0  6.0  10.0
cellsize       546.0  3.179487  3.083668  1.0  1.0  1.0  5.0  10.0
cellshape      546.0  3.225275  3.005588  1.0  1.0  1.0  5.0  10.0
adhesion       546.0  2.921245  2.937144  1.0  1.0  1.0  4.0  10.0
epicellsize    546.0  3.261905  2.252643  1.0  2.0  2.0  4.0  10.0
barenuclei     546.0  3.560440  3.651946  1.0  1.0  1.0  7.0  10.0
chromatin      546.0  3.483516  2.492687  1.0  2.0  3.0  5.0  10.0
normalnucleoli 546.0  2.875458  3.064305  1.0  1.0  1.0  4.0  10.0
mitoses        546.0  1.609890  1.736762  1.0  1.0  1.0  1.0  10.0
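The cells that followed are also missing from this snapshot. A minimal sketch of the likely steps, assuming the normalization scheme of the regression example and a sigmoid output trained with binary cross-entropy (the layer sizes mirror section 1; the 1000 epochs match the history output below):

# Hypothetical reconstruction of the missing cells
# Map the class labels (2=benign, 4=malignant) to 0/1
train_labels = (train_dataset.pop('class') == 4).astype(int)
test_labels = (test_dataset.pop('class') == 4).astype(int)

# Normalize with the training statistics, as in the regression example
def normalize(x):
    return (x - train_stats['mean']) / train_stats['std']
normalized_train_data = normalize(train_dataset[features])
normalized_test_data = normalize(test_dataset[features])

# A small network with a sigmoid output for binary classification
model = tf.keras.Sequential([
    keras.layers.Dense(6, activation=tf.nn.relu, input_shape=[len(features)]),
    keras.layers.Dense(9, activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit(normalized_train_data, train_labels,
                    epochs=1000, verbose=0,
                    validation_data=(normalized_test_data, test_labels))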

In [55]:

     loss      acc       val_loss  val_acc   epoch
995  0.112499  0.992674  0.454739  0.970588  995
996  0.112499  0.992674  0.454739  0.970588  996
997  0.112499  0.992674  0.454739  0.970588  997
998  0.112499  0.992674  0.454739  0.970588  998
999  0.112499  0.992674  0.454739  0.970588  999


2a. Binary classification in Tensorflow -R

This is a simple binary classification problem from the UCI Machine Learning repository and deals with data on breast cancer from the Univ. of Wisconsin; see the Breast Cancer Wisconsin (Diagnostic) Data Set.

Normalize the data

Create the Deep Learning model

Fit the model. Use 20% of data for validation
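The R code for these steps is not available in this snapshot. A minimal sketch under the same assumptions as the Python version (the network shape mirrors section 1a, with a sigmoid output and binary cross-entropy loss, keeping 20% of the data for validation):

# Hypothetical reconstruction of the missing R code
# Read the data, set the names, drop the id column and the NAs
dataset <- read.csv("https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data",
                    na.strings = "?")
names(dataset) <- c("id","thickness","cellsize","cellshape","adhesion","epicellsize",
                    "barenuclei","chromatin","normalnucleoli","mitoses","class")
dataset <- na.omit(subset(dataset, select = -c(id)))

# Map the class labels (2=benign, 4=malignant) to 0/1
labels <- as.matrix(ifelse(dataset$class == 4, 1, 0))

# Normalize each feature with its mean and standard deviation
data <- apply(subset(dataset, select = -c(class)), 2,
              function(x) (x - mean(x)) / sd(x))

# Create the model and fit it, keeping 20% of the data for validation
model <- keras_model_sequential()
model %>%
  layer_dense(units = 6, activation = 'relu', input_shape = dim(data)[2]) %>%
  layer_dense(units = 9, activation = 'relu') %>%
  layer_dense(units = 1, activation = 'sigmoid')
model %>% compile(
  loss = 'binary_crossentropy',
  optimizer = optimizer_rmsprop(),
  metrics = c('accuracy')
)
history <- model %>% fit(
  data, labels,
  epochs = 30, batch_size = 128,
  validation_split = 0.2
)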

Plot the accuracy and loss for training and validation data

Fig: Accuracy and loss for training and validation data

3. MNIST in Tensorflow – Python

This takes the famous MNIST handwritten digits dataset. It can be seen that Tensorflow and Keras make short work of this famous problem of the late 1980s.

In [61]:
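The code in this cell is not available in this snapshot. A minimal sketch of the classic Keras treatment of MNIST (the layer sizes and epoch count are illustrative choices, not necessarily the original settings):

# Hypothetical reconstruction: load MNIST and train a small dense network
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Scale the pixel values to [0, 1]
train_images, test_images = train_images / 255.0, test_images / 255.0

model = tf.keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5,
          validation_data=(test_images, test_labels))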

