
GitHub - PipelineAI/pipeline: PipelineAI: Real-Time Enterprise AI Platform

source link: https://github.com/PipelineAI/pipeline

README.md

PipelineAI Logo

PipelineAI Quick Start (CPU + GPU)

Train and Deploy your ML and AI Models in the Following Environments:

Having Issues? Contact Us Anytime... We're Always Awake.

PipelineAI Community Events

PipelineAI Products

Consistent, Immutable, Reproducible Model Runtimes

Consistent Model Environments

Each model is built into a separate Docker image with the appropriate Python, C++, and Java/Scala Runtime Libraries for training or prediction.

Use the same Docker Image from Local Laptop to Production to avoid dependency surprises.
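As an illustration of this pattern only (not PipelineAI's actual build files; the file names and versions below are hypothetical), baking a trained model and its pinned runtime into a single image might look like:

```dockerfile
# Hypothetical model image: pin the runtime once, reuse everywhere.
FROM python:3.6-slim

# Exact library versions, so laptop and production resolve identically.
RUN pip install --no-cache-dir scikit-learn==0.19.1 numpy==1.14.2

# Bake the trained model artifact and a serving script into the image.
COPY model.pkl /opt/model/model.pkl
COPY serve.py  /opt/model/serve.py

CMD ["python", "/opt/model/serve.py"]
```

Because the image is immutable, the same artifact that passed local tests is the one that serves production traffic.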

Sample Machine Learning and AI Models

Sample models are available for the following:

  • Scikit-Learn
  • TensorFlow
  • Keras
  • Spark ML (formerly called Spark MLlib)
  • XGBoost
  • PyTorch
  • Caffe / Caffe2
  • Theano
  • MXNet
  • PMML/PFA
  • Custom Java/Python/C++ Ensembles
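To make the list concrete, a minimal scikit-learn sample of the kind listed above might be trained and serialized like this (an illustrative sketch, not a PipelineAI-specific API):

```python
# Train a small scikit-learn classifier and pickle it, so the same
# artifact can later be loaded inside a model-serving container.
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

# Serialize the trained model to a file.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Restore it, as a serving process would at startup.
with open("model.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored.predict(X[:1]))
```

The frameworks differ in how they serialize (SavedModel for TensorFlow, `state_dict` for PyTorch, and so on), but the train-serialize-restore shape is the same.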

Nvidia GPU TensorFlow

Spark ML Scikit-Learn

R PMML

XGBoost Model Ensembles

Supported Model Runtimes (CPU and GPU)

  • Python (Scikit-Learn, TensorFlow, etc.)
  • Java
  • Scala
  • Spark ML
  • C++
  • Caffe2
  • Theano
  • TensorFlow Serving
  • Nvidia TensorRT (TensorFlow, Caffe2)
  • MXNet
  • CNTK
  • ONNX

Supported Streaming Engines

  • Kafka
  • Kinesis
  • Flink
  • Spark Streaming
  • Heron
  • Storm

Drag N' Drop Model Deploy

PipelineAI Drag n' Drop Model Deploy UI

Generate Optimized Model Versions Upon Upload

Automatic Model Optimization and Native Code Generation

Distributed Model Training and Hyper-Parameter Tuning

PipelineAI Advanced Model Training UI

PipelineAI Advanced Model Training UI 2
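Hyper-parameter tuning of the kind described above can be sketched locally with scikit-learn's `GridSearchCV` (a stand-in for the distributed search, not PipelineAI's own training API):

```python
# Exhaustively search a small hyper-parameter grid and keep the model
# with the best cross-validated score. A distributed tuner parallelizes
# exactly this loop across workers.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.001]}
grid = GridSearchCV(SVC(), param_grid, cv=3)
grid.fit(X, y)

print(grid.best_params_, round(grid.best_score_, 3))
```

Each (params, fold) pair is independent, which is what makes the search embarrassingly parallel across a cluster.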

Continuously Deploy Models to Clusters of PipelineAI Servers

PipelineAI Weavescope Kubernetes Cluster

View Real-Time Prediction Stream

Live Stream Predictions

Compare Both Offline (Batch) and Real-Time Model Performance

PipelineAI Model Comparison

Compare Response Time, Throughput, and Cost-Per-Prediction

PipelineAI Compare Performance and Cost Per Prediction
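The arithmetic behind these comparisons is simple; a hedged sketch (all names and prices below are our own, not PipelineAI's metrics API):

```python
# Derive tail latency, throughput, and cost-per-prediction from raw
# per-request latencies observed over a measurement window.
def percentile(samples, p):
    """Nearest-rank percentile of a list of samples."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, int(round(p / 100 * (len(s) - 1)))))
    return s[k]

latencies_ms = [12, 15, 11, 40, 13, 14, 90, 12, 13, 16]
window_s = 1.0                  # measurement window
instance_cost_per_hour = 0.90   # hypothetical GPU instance price

throughput = len(latencies_ms) / window_s              # predictions/sec
cost_per_prediction = instance_cost_per_hour / (throughput * 3600)

print(percentile(latencies_ms, 95), throughput, cost_per_prediction)
```

The same three numbers computed for a batch deployment and a real-time deployment make the trade-off directly comparable.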

Shift Live Traffic to Maximize Revenue and Minimize Cost

PipelineAI Traffic Shift Multi-armed Bandit Maximize Revenue Minimize Cost
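The traffic-shifting idea can be sketched as an epsilon-greedy multi-armed bandit: route most requests to the model with the best observed reward while still exploring the alternatives. Everything below is illustrative, not PipelineAI's routing implementation:

```python
# Epsilon-greedy bandit over three model variants: explore with
# probability epsilon, otherwise exploit the best average reward.
import random

def choose_model(rewards, counts, epsilon=0.1, rng=random):
    """Pick a model index; untried models are tried first."""
    if rng.random() < epsilon:
        return rng.randrange(len(rewards))
    averages = [r / c if c else float("inf") for r, c in zip(rewards, counts)]
    return averages.index(max(averages))

rewards = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
rng = random.Random(0)
true_rates = [0.2, 0.5, 0.8]  # hypothetical per-model conversion rates

for _ in range(2000):
    i = choose_model(rewards, counts, rng=rng)
    reward = 1.0 if rng.random() < true_rates[i] else 0.0
    rewards[i] += reward
    counts[i] += 1

print(counts)  # the best model (index 2) should receive most traffic
```

Replacing the reward with revenue-minus-cost per prediction turns the same loop into the maximize-revenue / minimize-cost objective named above.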

Continuously Fix Borderline Predictions through Crowd Sourcing

Borderline Prediction Fixing and Crowd Sourcing

