Machine Learning with ML.NET – Introduction

Jan 4, 2021 | .NET, AI, Machine Learning, Python

Last month Microsoft released a new version of .NET – .NET 5. One of the main areas Microsoft tried to improve is the way Machine Learning and Artificial Intelligence applications are developed. Until recently, it was not that easy to develop a machine learning model and utilize it within a .NET application. To be more exact, the first version of Microsoft's machine learning framework, ML.NET, was released in 2018. Up until that point, developers usually created models using Python and then utilized them with some other framework or library. For example, you could create and train a model with TensorFlow and then integrate it with TensorFlowSharp. However, you couldn't train the model in the .NET ecosystem. ML.NET changed all that, and it became a part of .NET Core 3.


1. Why Machine Learning?

Before we dive into the problems ahead of us, let's take a moment and reflect on why Microsoft is introducing this technology and why we should consider studying machine learning today. Well, there are several reasons for that. The first one is that the technology is crossing the chasm. We are moving from a period when this was an obscure science that only a few practiced and understood to a point where it is the norm. Today we can build machine learning models that solve real-world problems from our own homes. Python and R have become the leading languages in this area, and they already have a wide range of libraries.

The other reason why you should consider exploring machine learning, deep learning, and data science is the fact that we are producing a lot of data. We as humans are not able to process all that data and make sense of it, but machine learning models can. The statistics say that from the beginning of time up until 2005, humans had produced 130 Exabytes of data. Exabyte is a real word, by the way, I checked it 🙂 Basically, if you scale up from a Terabyte you get a Petabyte, and when you scale up from a Petabyte you get an Exabyte.

What is interesting is that from that moment up until 2010 we produced 1,200 Exabytes of data, and by 2015 we had produced 7,900 Exabytes. Predictions tell us that there will only be more and more data, and that by 2025 we will have 193 Zettabytes of data, a Zettabyte being one level above an Exabyte. In a nutshell, we have more and more data, and more and more practical uses for it.

Another interesting fact is that the ideas behind machine learning go back a long way. You can observe it as this weird steampunk science, because the concepts we use and explore today are based on some "ancient" knowledge. Bayes' theorem was presented in 1763, and Markov chains in 1913. The idea of a learning machine can be traced back to the 1950s, to Turing's Learning Machine and Frank Rosenblatt's Perceptron. This means that we have 50+ years of knowledge to back us up. To sum it up, we are at a specific point in history where we have a lot of knowledge, we have a lot of data, and we have the technology. So it is up to us to use those tools as best as we can.

2. What is Machine Learning?

Machine Learning is considered to be a sub-branch of artificial intelligence, and it uses statistical techniques to give computers the ability to learn how to solve certain problems, instead of being explicitly programmed. The main idea is to develop a mathematical model that will be able to make predictions based on a collection of examples of some phenomenon. This model is usually trained on some data beforehand. In a nutshell, the mathematical model uses insights made on old data to make predictions on new data. This whole process is called predictive modeling. Put mathematically, we are trying to approximate a mapping function f from input variables X to an output variable y.

There are two big groups of problems we are trying to solve using this approach: regression and classification. Regression problems require the prediction of a quantity. This means our output is a continuous, real value, usually an integer or floating-point number. For example, we may want to predict the price of a company's shares based on data from the past couple of months. Classification problems are a bit different. They try to divide the input into certain categories, meaning the output of these tasks is discrete. For example, we may try to predict the class of a fruit based on its dimensions.
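To make the distinction concrete in .NET terms, here is a minimal, illustrative sketch of how the two kinds of outputs are usually modeled as ML.NET prediction classes (the class and property names are made up for this example):

using Microsoft.ML.Data;

// Regression: the prediction is a continuous value; regression trainers write it to the "Score" column.
public class SharePricePrediction
{
    [ColumnName("Score")]
    public float Price { get; set; }
}

// Classification: the prediction is a discrete category; classification trainers write it to the "PredictedLabel" column.
public class FruitPrediction
{
    [ColumnName("PredictedLabel")]
    public string FruitType { get; set; }
}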

3. Types of Machine Learning

One of the most important concepts that we have brought up is the training or learning process. This is a necessary step for every machine learning algorithm, during which the algorithm uses the data to learn how to solve the task at hand. In practice, we usually have some collected data based on which we need to create our predictions, classifications, or any other processing. This data is called the training set.

Based on the behavior during training and the nature of the training set, we can distinguish a few types of learning:

  • Unsupervised learning – The training set contains only inputs. The model attempts to identify similar inputs and to put them into categories. This type of learning is biologically motivated, but it is not suitable for all problems.
  • Supervised learning – The training set contains inputs and desired outputs. This way the model can compare its calculated output with the desired output and take appropriate actions based on that. In this article, we focus on this type of learning, since it is the one used most in the industry.
  • Reinforcement learning – The training set contains inputs, but the model is also provided with additional information during the training. Once the model calculates the output for one of the inputs, we provide information that indicates whether the result was right or wrong and, possibly, the nature of the mistake it made. It is sort of like the concept of reward and punishment. This concept is very interesting, but it is out of the scope of this article, so in general we will deal only with the first two types of learning.

4. Anatomy of Machine Learning Algorithm

Let's take a moment to point out the main building blocks of a machine learning algorithm. The complete process of machine learning can be observed as a pipeline. At the beginning of that pipeline is, of course, data. Data can come from different sources: it can be generated by other systems or it can be created by humans. Sometimes we use web scrapers to get the data from the web, while other times the data is available in some database. More often than not, this data is unstructured and sparse. This can be a problem for the algorithms, which is why the second step of the pipeline is pre-processing of the data and feature engineering. This step is necessary for standard machine learning algorithms, while more advanced approaches, like deep learning, often don't need it.

After the data is preprocessed and prepared, we feed it to our machine learning algorithm. This is called the training process, during which the algorithm needs to learn. The machine learning algorithm contains parameters which make this learning possible. What do we mean by this? Well, based on the input values the algorithm creates predictions – output values. However, those predictions are not the only output that the algorithm provides.

In supervised learning, we have the expected output, so we can use it to calculate how much the predictions deviate from the expected results. For this, we can use different functions and measure the penalty in different ways. This function is called the loss function, and the goal of the algorithm is to minimize the penalty calculated by it. In mathematics, the expression we minimize or maximize is called an objective function.
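As a rough illustration of the idea (plain C#, not an ML.NET API), one common loss function for regression is the mean squared error, which averages the squared deviations between the predictions and the expected values:

// A minimal sketch of one possible loss function: mean squared error.
// Assumes predictions and expected have the same, non-zero length.
static float MeanSquaredError(float[] predictions, float[] expected)
{
    float sum = 0;
    for (int i = 0; i < predictions.Length; i++)
    {
        float error = predictions[i] - expected[i];
        sum += error * error;            // squaring makes the penalty positive and punishes large deviations more
    }
    return sum / predictions.Length;     // average penalty over all examples
}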

Then we use the calculated penalty to change the parameters of the machine learning algorithm so it makes better predictions next time. In general, this is done by minimizing the penalty we calculated. In this way, the machine learning algorithm not only produces output values during the training process, but also changes itself so it can improve its predictions. The implicit output of a machine learning algorithm is therefore the algorithm itself. The trained algorithm is usually called the model, a name that comes from mathematics.

5. Starting with ML.NET

ML.NET is Microsoft's machine learning framework that provides an easy way to create, train, and run models within the .NET ecosystem. This is very good news for .NET developers, since it lets you re-use all the knowledge, skills, code, and libraries you already have. However, this is not just a framework for .NET developers. In fact, ML.NET has proved itself to be a great end-to-end tool, which gives any developer the ability to create complex pipelines and bind to different data sources. At the moment of writing this article, ML.NET is at version 1.5.4.

If you want to use ML.NET in your project, you need at least .NET Core 2.0, so make sure you have it installed on your computer. We recommend using at least .NET Core 3 or .NET 5. The other thing you should know is that ML.NET currently must run in a 64-bit process, so keep this in mind while creating your .NET Core project.
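One way to respect the 64-bit requirement is to pin the platform target in the project file. Here is a minimal sketch of an SDK-style .csproj, assuming a .NET 5 console application (the target framework is just an example):

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net5.0</TargetFramework>
    <!-- ML.NET currently expects a 64-bit process -->
    <PlatformTarget>x64</PlatformTarget>
  </PropertyGroup>
</Project>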


Installation is pretty straightforward. With the Package Manager Console, all you have to run is the command:

Install-Package Microsoft.ML 

The same can be achieved with the .NET CLI. If you are going to do it this way, make sure you have the .NET SDK installed and run this command:

dotnet add package Microsoft.ML

Apart from that, it is useful to install Microsoft.ML.DataView in the same way. Alternatively, you can use Visual Studio's Manage NuGet Packages option.
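For reference, adding that additional package with the .NET CLI follows the same pattern:

dotnet add package Microsoft.ML.DataView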


6. Architecture and High-Level Overview

The main goal of ML.NET is to provide an easy way to build complex end-to-end pipelines, from steps that can transform and featurize raw data to training machine learning models and deploying them into other systems.

As we already mentioned, ML.NET was designed to be intuitive for .NET developers. That is why you will encounter concepts and patterns that can be found in other frameworks such as ASP.NET and Entity Framework. The core of ML.NET can be found within two types: MLContext and IDataView. You typically create a single MLContext instance per application, and this object provides access to most of the ML.NET functionalities, like the various machine learning algorithms, which are called trainers in the context of ML.NET.

IDataView is an abstraction borrowed from relational database management systems. It provides compositional processing of schematized data while being able to gracefully and efficiently handle high-dimensional data and datasets larger than main memory. In a nutshell, this abstraction is the reason why ML.NET is so fast.
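As a small sketch of how this is typically used in practice (the file name and column layout here are assumptions for illustration), data classes annotated with LoadColumn attributes can be streamed from a file into an IDataView:

using Microsoft.ML;
using Microsoft.ML.Data;

public class HouseInput
{
    [LoadColumn(0)] public float Size { get; set; }   // first column in the file
    [LoadColumn(1)] public float Price { get; set; }  // second column in the file
}

// ...
var mlContext = new MLContext();

// The data is not loaded eagerly; IDataView streams it lazily, which is how
// ML.NET can work with datasets larger than main memory.
IDataView data = mlContext.Data.LoadFromTextFile<HouseInput>(
    "house-data.csv", hasHeader: true, separatorChar: ',');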


Building an application with ML.NET consists of several steps:

  • Loading data – Raw data must be loaded into memory, and for this IDataView is used.
  • Creating a pipeline – The pipeline is composed of steps that either transform data or train a machine learning algorithm. ML.NET provides various transformation steps, like one-hot encoding, and various machine learning algorithms.
  • Training a machine learning model – Once the pipeline is created, the training can be started. This is done using the Fit() method, which all algorithms support.
  • Evaluate – The model can be evaluated at any point, and additional decisions can be made based on the evaluations.
  • Save – Once trained, the model is saved into a file (see the short sketch after this list). In general, the complete application should be built in a way that one microservice trains and evaluates the machine learning model, and another microservice utilizes it.
  • Load – The machine learning model can be loaded back and utilized for predictions.
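A minimal sketch of the last two steps, assuming a trained model, the training data's schema, and an illustrative file name:

// Save: persist the trained model together with the input schema.
mlContext.Model.Save(model, trainingData.Schema, "model.zip");

// Load: typically done in another process or microservice.
DataViewSchema inputSchema;
ITransformer loadedModel = mlContext.Model.Load("model.zip", out inputSchema);

// The loaded model can now be used to create a prediction engine and make predictions.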

Apart from the mentioned types, there are several more components that we need to mention. The Estimator is the object we create while composing the pipeline; it describes the steps, but it is not trained yet. The Transformer, on the other hand, is the trained model, and it is also what we get back when a saved model is loaded into memory.
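In code, the distinction is visible in the types involved. A short sketch of the pattern, assuming an IDataView called trainingData with "Features" and "Price" columns:

// An estimator only describes a step (or a whole pipeline); nothing is trained yet.
IEstimator<ITransformer> estimator =
    mlContext.Regression.Trainers.Sdca(labelColumnName: "Price");

// Fit() runs the training and returns a transformer - the trained model.
ITransformer trainedModel = estimator.Fit(trainingData);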


7. Simple Example

In this simple example, we can see how we can build one ML.NET pipeline and train a machine learning algorithm:

using System;
using Microsoft.ML;
using Microsoft.ML.Data;

class Program
{
   public class HouseData
   {
       public float Size { get; set; }
       public float Price { get; set; }
   }

   public class Prediction
   {
       [ColumnName("Score")]
       public float Price { get; set; }
   }

   static void Main(string[] args)
   {
       MLContext mlContext = new MLContext();

       // 1. Load Data
       HouseData[] houseData = {
           new HouseData() { Size = 1.1F, Price = 1.2F },
           new HouseData() { Size = 1.9F, Price = 2.3F },
           new HouseData() { Size = 2.8F, Price = 3.0F },
           new HouseData() { Size = 3.4F, Price = 3.7F } };
       IDataView trainingData = mlContext.Data.LoadFromEnumerable(houseData);

       // 2. Create pipeline
       var pipeline = mlContext.Transforms.Concatenate("Features", new[] { "Size" })
           .Append(mlContext.Regression.Trainers.Sdca(labelColumnName: "Price", maximumNumberOfIterations: 100));

       // 3. Train model
       var model = pipeline.Fit(trainingData);

       // 4. Make a prediction and evaluate
       var size = new HouseData() { Size = 2.5F };
       var price = mlContext.Model.CreatePredictionEngine<HouseData, Prediction>(model).Predict(size);

       Console.WriteLine($"Predicted price for size: {size.Size*1000} sq ft= {price.Price*100:C}k");
   }
}

In this simple example, we first create an MLContext instance. Then we create an array of HouseData, a class that we defined beforehand. This is a toy example; in the real world, we would load the data from a text file or from an existing database. Then we load that data into memory. Note that we used the LoadFromEnumerable() method, but there are other methods with which we can integrate different systems. Then we create the pipeline. Here we use the Append() method to add different transformations and machine learning algorithms. In this particular case, we used the SDCA regression algorithm, which is a variant of linear regression. Then we train the model using the Fit() method and, finally, use it to make new predictions.
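The evaluation step mentioned in section 6 is not shown in the example above. A hedged sketch of what it could look like, assuming we had held out some test data in a separate IDataView called testData:

// Run the trained model over held-out data and compute regression metrics.
IDataView testPredictions = model.Transform(testData);
var metrics = mlContext.Regression.Evaluate(testPredictions, labelColumnName: "Price");

Console.WriteLine($"R^2: {metrics.RSquared}");
Console.WriteLine($"RMSE: {metrics.RootMeanSquaredError}");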

8. Why should you use ML.NET?

In the end, let's just see why we should consider ML.NET for our project. As it turns out, ML.NET has really good performance. In fact, ML.NET trained a sentiment analysis model with 95% accuracy on a 9GB Amazon review data set, while other popular machine learning frameworks failed to process the dataset due to memory errors. When training on 10% of the data set, to let all the frameworks complete training, ML.NET demonstrated the highest speed and accuracy. The performance evaluation found similar results in other machine learning scenarios. Apart from that, ML.NET is easily extendable with models from different technologies.

Conclusion

In this article, we started our journey through machine learning with ML.NET. In the following articles, we will cover different machine learning topics from the point of .NET developers and implement them using ML.NET.

Thanks for reading!

Nikola M. Zivkovic

CAIO at Rubik's Code

Nikola M. Zivkovic is the CAIO at Rubik's Code and the author of the book "Deep Learning for Programmers". He loves knowledge sharing and is an experienced speaker. You can find him speaking at meetups and conferences, and as a guest lecturer at the University of Novi Sad.

Rubik’s Code is a boutique data science and software service company with more than 10 years of experience in Machine Learning, Artificial Intelligence & Software development. Check out the services we provide.

