
Distributed Filesystems for Deep Learning


More training data gives predictable gains in prediction accuracy

tl;dr When you train deep learning models with lots of high-quality training data, you can beat state-of-the-art prediction models in a wide array of domains (image classification, voice recognition, and machine translation). Distributed filesystems are becoming increasingly indispensable as a central store for training data, logs, model serving, and checkpoints. HopsFS is a great choice, as it has native support for the main Python frameworks for data science: Pandas, TensorFlow/Keras, PySpark, and Arrow.

Prediction Performance Improves Predictably with Dataset Size

Baidu showed that the improvement in prediction accuracy (or reduction in generalization error) for deep learning models is predictable from the amount of training data. The decrease in generalization error with increasing training dataset size follows a power law (as seen by the straight lines in the log-log graph below). This astonishing result came from a large-scale study across the application domains of machine translation, language modeling, image classification, and speech recognition. Given that the result holds in such different domains, there is a good chance the same relationship holds for your particular application. This matters for companies considering investing in deep learning: if it costs $X to collect or generate a new GB of high-quality training data, you can predict the improvement in prediction accuracy for your model, given the slope, Y, of the log-log graph you have observed while training.

[Figure: Baidu Research, http://research.baidu.com/deep-learning-scaling-predictable-empirically/]

Predictable ROI in the Power-Law Region

This predictable return on investment (ROI) for collecting or generating more training data is slightly more complex than described above. You first need to collect enough training data to get beyond the “Small Data Region” in the diagram below; that is, you can only make predictions once you have enough data to be in the “Power-Law Region”.

[Figure: Baidu18, https://arxiv.org/pdf/1712.00409.pdf]

You can determine this by graphing your generalization error as a function of training dataset size on a log-log scale. Once you observe a straight line for your model, calculate the exponent of the power law (the slope of the line). Baidu’s empirically collected learning curves showed exponents in the range [-0.35, -0.07], suggesting that models learn from real-world data more slowly than theory predicts (theoretical models indicate a power-law exponent of -0.5).

Still, if you observe the power-law region, increasing your training data set size will give you a predictable decrease in generalization error. For example, if you are training an image classifier for a self-driving vehicle, the number of hours your cars have driven autonomously determines your training data size. So, going from 2m hours to 6m hours of autonomous driving should reduce errors in your image classifier by a predictable amount. This is important in giving businesses a level of certainty in the improvements they can expect when making large investments in new data collection or generation.
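As a rough illustration, here is a minimal sketch of how the exponent and the expected payoff can be estimated with NumPy. The dataset sizes and error values below are made-up placeholders rather than numbers from the Baidu study; replace them with your own learning-curve measurements.

import numpy as np

# Hypothetical learning-curve measurements: training set sizes and the
# generalization error observed at each size (replace with your own data).
sizes = np.array([1e5, 3e5, 1e6, 3e6, 1e7])
errors = np.array([0.30, 0.24, 0.18, 0.14, 0.11])

# In the power-law region, error ≈ c * size**beta, so log(error) is linear in log(size).
beta, log_c = np.polyfit(np.log(sizes), np.log(errors), 1)
print("estimated power-law exponent (slope):", beta)

# Predicted generalization error if the training set grows to 6 million examples.
print("predicted error at 6M examples:", np.exp(log_c) * (6e6 ** beta))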

Need for a Distributed Filesystem

The TensorFlow team says a distributed filesystem is a must for deep learning. Datasets are getting larger, GPUs are disaggregated from storage, and workers with GPUs need to coordinate for model checkpointing, hyperparameter optimization, and model-architecture search. Your system may grow beyond a single server, or you may serve your models from different servers than the ones you train on. A distributed filesystem is the glue that holds together the different stages of your machine learning workflows, and it enables teams to share both GPU hardware and data. What matters is that the distributed filesystem works with your choice of programming language and deep learning framework(s).

[Figure: A distributed filesystem manages logs, TensorBoard events, GPU coordination across experiments, checkpoints during training, and model storage/serving.]
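TensorFlow can read and write HDFS paths directly when it is built with HDFS support, so checkpoints and TensorBoard event files can go straight to the distributed filesystem. The sketch below is illustrative only: the hdfs:// paths, the model object, and the training dataset are assumptions, not part of the original article.

import tensorflow as tf

# Hypothetical HopsFS/HDFS locations for checkpoints and TensorBoard events.
CKPT_PATH = "hdfs:///Projects/myProj/Models/ckpt-{epoch:02d}"
LOG_DIR = "hdfs:///Projects/myProj/Logs/tensorboard"

callbacks = [
    # Checkpoints on shared storage let any worker resume or evaluate a run.
    tf.keras.callbacks.ModelCheckpoint(filepath=CKPT_PATH, save_weights_only=True),
    # TensorBoard event files on shared storage can be viewed from any machine.
    tf.keras.callbacks.TensorBoard(log_dir=LOG_DIR),
]

# model.fit(train_dataset, epochs=10, callbacks=callbacks)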

HopsFS is a great choice as a distributed filesystem, as it is a drop-in replacement for HDFS. HopsFS/HDFS are supported by the major Python frameworks: Pandas, PySpark DataFrames, TensorFlow Data, and so on. In Hopsworks, we provide built-in HopsFS/HDFS support through the pydoop library. HopsFS has one additional feature aimed at machine learning workloads: improved throughput and lower latency when reading and writing small files. In a peer-reviewed paper at Middleware 2018, we showed throughput improvements of up to 66X compared to HDFS for small files.


Python Support in Distributed Filesystems

As we can see from the table below, the choice of distributed filesystem affects which Python frameworks (PySpark, Pandas, TensorFlow) can read and write your data directly, and whether NVMe storage is supported.

Filesystem/Store   | PySpark | Pandas | TensorFlow | NVMe Support
HDFS / HopsFS      |    ✓    |   ✓    |     ✓      |      ✓
Ceph / GlusterFS   |    ✗    |   ✗    |     ✗      |      ✗
Swift              |    ✗    |   ✗    |     ✗      |      ✗
S3 / GCS           |    ✓    |   ✓    |     ✓      |      ✗
Local FS           |    ✓    |   ✓    |     ✓      |      ✓

Python Support in HopsFS

We now give some simple examples of how to write Python code to use datasets in HopsFS. Complete notebooks can be found here.

Pandas with HopsFS

import pandas as pd
import hops.hdfs as hdfs

# Column names for the census dataset (the full list is elided in the original).
cols = ["Age", "Occupation", "Sex", ..., "Country"]

# Get a filesystem handle to HopsFS/HDFS.
h = hdfs.get_fs()

# Open the file on HopsFS and read it into a Pandas DataFrame.
with h.open_file(hdfs.project_path() + "/TestJob/data/census/adult.data", "r") as f:
    train_data = pd.read_csv(f, names=cols, sep=r"\s*,\s*", engine="python", na_values="?")

In Pandas, the only change we need to make to our code, compared to a local filesystem, is to replace the built-in open(..) with h.open_file(..), where h is a filesystem handle for HDFS/HopsFS.

PySpark with HopsFS

from mmlspark import ImageTransformer

# Images stored in HopsFS (Hopsworks project paths are rooted under /Projects).
IMAGE_PATH = "/Projects/myProj/Resources/imgs"

# Recursively read a 10% sample of the images and cache them.
images = spark.readImages(IMAGE_PATH, recursive=True, sampleRatio=0.1).cache()

# Resize and crop the images with MMLSpark's ImageTransformer.
tr = (ImageTransformer().setOutputCol("transformed")
      .resize(height=200, width=200)
      .crop(0, 0, height=180, width=180))

smallImgs = tr.transform(images).select("transformed")

# Write the transformed images back to HopsFS in Parquet format.
smallImgs.write.save("/Projects/myProj/Resources/small_imgs", format="parquet")
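Because the output is written as a Parquet dataset on HopsFS, any later Spark job in the same project can reload it with spark.read.parquet("/Projects/myProj/Resources/small_imgs") without copying data between machines.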

TensorFlow Datasets with HopsFS

import tensorflow as tf

def input_fn(batch_sz):
    # List the TFRecord files stored on HopsFS (IMAGE_PATH is defined above).
    files = tf.data.Dataset.list_files(IMAGE_PATH)

    def tfrecord_dataset(f):
        # Read each TFRecord file with many parallel readers and a large buffer.
        return tf.data.TFRecordDataset(f, num_parallel_reads=32, buffer_size=8 * 1024 * 1024)

    # Interleave reads across files for higher throughput
    # (parallel_interleave lives under tf.data.experimental in recent TF 1.x releases).
    dataset = files.apply(
        tf.data.experimental.parallel_interleave(tfrecord_dataset, cycle_length=32))
    dataset = dataset.batch(batch_sz)  # batch the records (batch_sz was unused in the original snippet)
    dataset = dataset.prefetch(4)
    return dataset
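The returned dataset can then be consumed by a training loop or an Estimator. A minimal usage sketch, assuming an already-constructed tf.estimator.Estimator named estimator and a batch size of 64:

# Train the estimator, streaming TFRecords from HopsFS via the input_fn above.
estimator.train(input_fn=lambda: input_fn(64), steps=1000)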
