GitHub - PipelineAI/pipeline: PipelineAI: Real-Time Enterprise AI Platform
source link: https://github.com/PipelineAI/pipeline
# PipelineAI Quick Start (CPU + GPU)
Train and Deploy your ML and AI Models in the Following Environments:
## Having Issues? Contact Us Anytime... We're Always Awake.
- Slack: https://joinslack.pipeline.ai
- Email: [email protected]
- Web: https://support.pipeline.ai
- YouTube: https://youtube.pipeline.ai
- Slideshare: https://slideshare.pipeline.ai
- Workshop: https://workshop.pipeline.ai
- Troubleshooting Guide
## PipelineAI Community Events
- PipelineAI Monthly Webinar (TensorFlow + Spark + GPUs + TPUs)
- Advanced Spark and TensorFlow Meetup (Global)
## PipelineAI Products
### Consistent, Immutable, Reproducible Model Runtimes
Each model is built into a separate Docker image containing the appropriate Python, C++, and Java/Scala runtime libraries for training or prediction. Using the same Docker image from your local laptop through production avoids dependency surprises.
### Sample Machine Learning and AI Models
Model samples are available for the following frameworks:
- Scikit-Learn
- TensorFlow
- Keras
- Spark ML (formerly called Spark MLlib)
- XGBoost
- PyTorch
- Caffe/2
- Theano
- MXNet
- PMML/PFA
- Custom Java/Python/C++ Ensembles
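A custom ensemble like the last item above can be as simple as combining the scores of several underlying models. The sketch below is a minimal illustration in Python, not the PipelineAI API; the model functions and their return values are hypothetical stand-ins:

```python
# Minimal ensemble sketch: combine the positive-class probabilities of
# several models by weighted averaging. The model functions are
# hypothetical stand-ins, not part of any PipelineAI API.

def model_a(features):
    # Stand-in model: returns a probability for the positive class.
    return 0.80

def model_b(features):
    return 0.60

def ensemble_predict(features, models, weights=None):
    """Weighted average of each model's predicted probability."""
    scores = [m(features) for m in models]
    if weights is None:
        # Default to a uniform average across models.
        weights = [1.0 / len(models)] * len(models)
    return sum(w * s for w, s in zip(weights, scores))

# Averaging 0.80 and 0.60 yields a combined score of 0.70.
score = ensemble_predict({"x": 1}, [model_a, model_b])
```

Real ensembles may instead use majority voting or a learned meta-model (stacking), but the deployment story is the same: the ensemble logic ships inside the model image like any other model.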
### Supported Model Runtimes (CPU and GPU)
- Python (Scikit-Learn, TensorFlow, etc.)
- Java
- Scala
- Spark ML
- C++
- Caffe2
- Theano
- TensorFlow Serving
- Nvidia TensorRT (TensorFlow, Caffe2)
- MXNet
- CNTK
- ONNX
### Supported Streaming Engines
- Kafka
- Kinesis
- Flink
- Spark Streaming
- Heron
- Storm
### Drag N' Drop Model Deploy
### Generate Optimized Model Versions Upon Upload
### Distributed Model Training and Hyper-Parameter Tuning
### Continuously Deploy Models to Clusters of PipelineAI Servers
### View Real-Time Prediction Stream
### Compare Both Offline (Batch) and Real-Time Model Performance
### Compare Response Time, Throughput, and Cost-Per-Prediction
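Cost-per-prediction follows directly from instance cost and sustained throughput. The Python sketch below shows the arithmetic with illustrative numbers; the hourly prices and throughputs are assumptions for the example, not PipelineAI benchmark data:

```python
# Back-of-the-envelope cost-per-prediction comparison for two deployments.
# The instance prices and throughputs are illustrative assumptions only.

def cost_per_prediction(instance_cost_per_hour, predictions_per_second):
    predictions_per_hour = predictions_per_second * 3600
    return instance_cost_per_hour / predictions_per_hour

# Hypothetical CPU instance: cheap per hour, modest throughput.
cpu = cost_per_prediction(instance_cost_per_hour=0.10, predictions_per_second=50)
# Hypothetical GPU instance: pricier per hour, much higher throughput.
gpu = cost_per_prediction(instance_cost_per_hour=0.90, predictions_per_second=1000)

print(f"CPU: ${cpu:.8f} per prediction")
print(f"GPU: ${gpu:.8f} per prediction")
```

With these numbers the GPU is roughly 9x the hourly cost but 20x the throughput, so it is cheaper per prediction — which is exactly the trade-off this comparison surfaces.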
### Shift Live Traffic to Maximize Revenue and Minimize Cost
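Traffic shifting between model versions is typically a weighted split, e.g. a 90/10 canary. The sketch below is an illustrative Python router, not PipelineAI's implementation; hashing the request id keeps routing deterministic per request:

```python
# Deterministic weighted traffic split between model versions
# (e.g. a 90/10 canary). Illustrative sketch, not PipelineAI's router.
import hashlib

def route(request_id, weights):
    """weights: e.g. {"model_a": 0.9, "model_b": 0.1}; must sum to 1.0."""
    digest = hashlib.sha256(request_id.encode()).digest()
    # Map the hash to a uniform value in [0, 1), then walk the
    # cumulative weight distribution to pick a model.
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    cumulative = 0.0
    for model, weight in sorted(weights.items()):
        cumulative += weight
        if bucket < cumulative:
            return model
    return model  # guard against floating-point rounding at 1.0

# Over many requests, the split converges on the configured weights.
counts = {"model_a": 0, "model_b": 0}
for i in range(10000):
    counts[route(f"req-{i}", {"model_a": 0.9, "model_b": 0.1})] += 1
print(counts)  # roughly 9000 vs. 1000
```

Shifting traffic is then just updating the weights — e.g. from `{"model_a": 0.9, "model_b": 0.1}` to `{"model_a": 0.0, "model_b": 1.0}` once the new version proves itself — without redeploying either model.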
### Continuously Fix Borderline Predictions through Crowdsourcing
Recommend
About Joyk
Aggregate valuable and interesting links.
Joyk means Joy of geeK