
Fastai is now using Python and PyTorch to be productive and hackable.

Source: https://towardsdatascience.com/fastai-is-now-using-python-and-pytorch-to-be-productive-and-hackable-574a7289de3c

A recent paper published by the fastai team explains how Python and PyTorch help the library stay quick and clear.

Image by alan9187 from Pixabay

Fastai is a modern deep learning library that simplifies training fast and accurate neural nets. It is organised around two major design goals:

  1. To be approachable and rapidly productive.
  2. To be deeply hackable and configurable.

The framework provides engineers with high-level components that can quickly and easily deliver state-of-the-art results in standard deep learning domains, and provides researchers with low-level components that can be mixed and matched to build new approaches. The new version of the library, fastai v2, which is expected to be released officially around July 2020, uses the dynamic nature of the Python language and the flexibility of PyTorch to be concise and clear. The library is specifically designed with ease of use, flexibility, and performance in mind.

Architecture

Fastai’s carefully layered architecture is the key to its being both productive and configurable. Most modern deep learning libraries focus on one of these goals, but fastai is specifically designed to deliver both at the same time. The team wanted the clarity and development speed of Keras and the customisability of PyTorch.

Fastai’s layered architecture comes from using decoupled abstractions that represent the underlying patterns of many deep learning and data-processing techniques. This lets fastai achieve the best of both worlds: a high-level API of ready-to-use functions for training models in various applications, built on top of multiple composable low-level APIs that can be switched and swapped to obtain a particular behaviour.

The fastai paper includes a diagram of this layered API.

Users can rely on the high-level API to train a model for common applications, or drop down to the mid-level or low-level APIs when they want to build a more custom solution.

Beginners and practitioners will mostly work with the high-level API. It offers concise APIs over four main application areas: vision, text, tabular and time-series analysis, and collaborative filtering. All of these application areas are highly optimised for ease of use, because the APIs choose intelligent default values and behaviours based on all available information. This use of intelligent defaults, based on the team’s experience and best practices, extends to incorporating state-of-the-art research wherever possible. This means that beginners with little knowledge of the system can train models of top research quality, as the sketch below illustrates.
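As a minimal sketch of what this looks like in practice, the following closely follows the pet-breed classification example from the fastai paper; the exact dataset, architecture, and epoch count are illustrative choices, not the only options.

    from fastai.vision.all import *

    # Download and extract a standard pets dataset from fastai's dataset registry.
    path = untar_data(URLs.PETS)

    # Build DataLoaders from the image files; labels are parsed out of the
    # filenames with a regular expression, and images are resized uniformly.
    dls = ImageDataLoaders.from_name_re(
        path, get_image_files(path/"images"),
        pat=r'(.+)_\d+.jpg$', item_tfms=Resize(224))

    # Fine-tune a pretrained ResNet-34 using fastai's intelligent defaults.
    learn = cnn_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1)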

The mid-level API, designed for scalability and customisation, provides the core deep learning and data-processing methods for each of these applications. It keeps the low-level API from becoming cluttered too fast, as happens in many two-layered frameworks, and it provides a layer of abstraction for anyone who wants to customise the high-level API without having to learn much about the low-level APIs.
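To make the idea of composable mid-level components concrete, here is a sketch of the same pets task rebuilt with the DataBlock API, following the example in the fastai documentation; every named piece below is an independent component that could be swapped for a custom one.

    from fastai.vision.all import *

    # The same task expressed with the mid-level DataBlock API: block types,
    # item getter, splitter, labeller, and transforms are separate components.
    path = untar_data(URLs.PETS)
    pets = DataBlock(
        blocks=(ImageBlock, CategoryBlock),
        get_items=get_image_files,
        splitter=RandomSplitter(valid_pct=0.2, seed=42),
        get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
        item_tfms=Resize(224))
    dls = pets.dataloaders(path/"images")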

The low-level APIs provide a library of optimised primitives and functional and object-oriented foundations, on top of which the mid-level is developed and customised.

The mid-level and low-level APIs make more sense for researchers, and are designed in such a way that researchers can exploit most, if not all, of the capabilities of the underlying language and framework.

Getting the most out of Python and PyTorch

Fastai is built on top of Python-based libraries such as PyTorch, NumPy, PIL, and pandas. In order to achieve its goal of hackability, the library doesn’t aim to supplant or hide these lower-level foundations. For instance, in a fastai model a developer can interact directly with the underlying PyTorch primitives; and within a PyTorch model, one can incrementally adopt components from the fastai library as conveniences rather than as an integrated package.

This makes fastai really powerful for research and related tasks, since there are many ways to experiment with existing tools and frameworks without making things complex, as the sketch below shows.
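As a small illustration of this interoperability, the following reuses the learn and dls objects from the earlier high-level sketch (so it assumes that code has already run) and operates on them with plain PyTorch calls.

    import torch
    from fastai.vision.all import *

    # A fastai Learner wraps an ordinary torch.nn.Module, so standard
    # PyTorch code applies to it directly.
    assert isinstance(learn.model, torch.nn.Module)

    # Run a raw forward pass on a dummy batch using plain PyTorch.
    x = torch.randn(2, 3, 224, 224).to(dls.device)
    with torch.no_grad():
        preds = learn.model.eval()(x)
    print(preds.shape)  # (2, number_of_classes)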

Along the same lines, rather than using Python itself as the lowest level of computation, fastai depends on a layer of well-defined abstractions at the lower level, which the mid-level APIs rely on for their functionality. On top of this, fastai adds a few extensions designed to make Python easier to use, including a NumPy-like API for lists called L, and some decorators to make delegation or patching easier.
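Here is a brief sketch of those two extensions, using the fastcore package that fastai builds on; the Counter class and its bump method are made-up examples for illustration, not part of the library.

    from fastcore.all import L, patch

    # L is a NumPy-like list: it supports fancy indexing, mapping,
    # and filtering with a compact syntax.
    xs = L(1, 2, 3, 4, 5)
    print(xs[0, 2])                    # items at indices 0 and 2
    print(xs.map(lambda o: o * 2))     # element-wise transformation
    print(xs.filter(lambda o: o > 2))  # keep only matching items

    # @patch adds a method to an already-defined class; the target
    # class is taken from the type annotation on `self`.
    class Counter:
        def __init__(self): self.n = 0

    @patch
    def bump(self: Counter, by=1):
        self.n += by
        return self.n

    c = Counter()
    print(c.bump(), c.bump(3))  # 1 4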

This means that Python is used where it provides value to users of the library and benefits to the framework. For instance, the transform pipeline system is built on top of the foundations provided by PyTorch, but the framework itself is designed so that the language and language-based libraries do not become a bottleneck when customising a new solution.
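To show the shape of that transform pipeline system, here is a toy sketch using the generic Transform and Pipeline classes from fastcore, on which fastai's data pipelines are built; the Negate and AddOne transforms are invented for illustration.

    from fastcore.transform import Transform, Pipeline

    # Each Transform can define `encodes` (forward) and `decodes`
    # (inverse); a Pipeline composes transforms so that decoding
    # reverses encoding step by step.
    class Negate(Transform):
        def encodes(self, x: int): return -x
        def decodes(self, x: int): return -x

    class AddOne(Transform):
        def encodes(self, x: int): return x + 1
        def decodes(self, x: int): return x - 1

    pipe = Pipeline([Negate(), AddOne()])
    y = pipe(3)           # Negate then AddOne: (-3) + 1 == -2
    x = pipe.decode(y)    # reversed: (-2) - 1 == -3, then negated == 3
    print(y, x)           # -2 3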

Conclusion

Fastai seems incredibly promising as a library that can improve productivity and customisation at the same time. As the team says:

We believe fastai meets its design goals. A user can create and train a state-of-the-art vision model using transfer learning with four understandable lines of code.

The intelligently layered architecture of the system provides a way to use the capabilities of the language and language-provided APIs to a greater extent, while keeping the system stable and easy to maintain. This results in faster turnaround times.

Early results from using fastai are very positive. We have used the fastai library to rewrite the entire fast.ai course “Practical Deep Learning for Coders”, which contains 14 hours of material, across seven modules, and covers all the applications described in this paper.

The library seems tempting for both researchers and engineers at the same time.

Please read the full paper from fastai.

Thanks for your time.

