# layer - neural network inference from the command line

`layer` is a program for doing neural network inference the Unix way. Many modern neural network operations can be represented as sequential, unidirectional streams of data processed by pipelines of filters. The computation at each layer of such a network is equivalent to one invocation of the `layer` program, and multiple invocations can be chained together to represent an entire network.
For example, performing inference on a neural network with two fully-connected layers might look something like this:

```
cat input | layer full -w w.1 --input-shape=2 -f tanh | layer full -w w.2 --input-shape=3 -f sigmoid
```
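Conceptually, each stage of that pipeline is an affine transform followed by an activation. The sketch below (in Python with numpy, using hypothetical stand-in weights, since the contents of `w.1` and `w.2` are not given here) shows roughly what the two `layer full` invocations compute:

```python
import numpy as np

# Hypothetical stand-ins for the weight files w.1 and w.2 above; the shapes
# follow the --input-shape flags: 2 inputs -> 3 neurons -> 1 neuron.
W1 = np.random.randn(2, 3)
W2 = np.random.randn(3, 1)

x = np.array([0.5, -1.0])                # one line of `input`

h = np.tanh(x @ W1)                      # first stage:  layer full ... -f tanh
y = 1.0 / (1.0 + np.exp(-(h @ W2)))      # second stage: layer full ... -f sigmoid
print(y)
```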
`layer` applies the Unix philosophy to neural network inference. Each type of neural network layer is a distinct subcommand. Simple text streams of delimited numeric values serve as the interface between the layers of a network. Each invocation of `layer` does one thing: it feeds the numeric input values forward through an instantiation of a neural network layer, then emits the resulting output values.
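A minimal sketch of that filter idea, written in Python rather than the actual CHICKEN Scheme implementation, and using hypothetical parameter files: read delimited values from stdin, push each line through one fully-connected layer, and write the results to stdout.

```python
#!/usr/bin/env python3
# Conceptual sketch only: one invocation, one layer, text streams in and out.
import sys
import numpy as np

# Hypothetical parameter files for a 2-input, 3-neuron fully-connected layer.
W = np.loadtxt("weights.csv", delimiter=",").reshape(2, 3)
b = np.loadtxt("biases.csv", delimiter=",")

for line in sys.stdin:                      # one sample per line
    if not line.strip():
        continue
    x = np.array([float(v) for v in line.strip().split(",")])
    y = np.tanh(x @ W + b)                  # feed forward through the layer
    print(",".join(str(v) for v in y))      # emit outputs for the next filter
```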
## Usage
Example: a convolutional neural network for CIFAR-10.

```
$ cat cifar10_x.csv \
    | layer convolutional -w w0.csv -b b0.csv --input-shape=32,32,3 --filter-shape=3,3 --num-filters=32 -f relu \
    | layer convolutional -w w1.csv -b b1.csv --input-shape=30,30,32 --filter-shape=3,3 --num-filters=32 -f relu \
    | layer pooling --input-shape=28,28,32 --filter-shape=2,2 --stride=2 -f max
```
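The `--input-shape` passed to each stage follows from standard convolution arithmetic, assuming no padding and stride 1 for the convolutions (consistent with the shapes above): each 3×3 convolution shrinks the spatial size by 2, and 2×2 pooling with stride 2 halves it. A quick sketch of that bookkeeping:

```python
# Shape bookkeeping for the pipeline above, assuming "valid" (no-padding) convolutions.
def out_size(size, filter_size, stride=1):
    return (size - filter_size) // stride + 1

side = 32                            # CIFAR-10 input: 32x32x3
side = out_size(side, 3)             # after first 3x3 convolution  -> 30x30x32
print(side)                          # 30
side = out_size(side, 3)             # after second 3x3 convolution -> 28x28x32
print(side)                          # 28
side = out_size(side, 2, stride=2)   # after 2x2 max pooling        -> 14x14x32
print(side)                          # 14
```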
Example: a multi-layer perceptron for XOR.
```
$ # Fully connected layer with three neurons
echo "-2.35546875,-2.38671875,3.63671875,3.521484375,-2.255859375,-2.732421875" > layer1.weights
echo "0.7958984375,0.291259765625,1.099609375" > layer1.biases
$ # Fully connected layer with one neuron
echo "-5.0625,-3.515625,-5.0625" > layer2.weights
echo "1.74609375" > layer2.biases
$ # Compute XOR for all possible binary inputs
echo -e "0,0\n0,1\n1,0\n1,1" \
    | layer full -w layer1.weights -b layer1.biases --input-shape=2 -f tanh \
    | layer full -w layer2.weights -b layer2.biases --input-shape=3 -f sigmoid
0.00129012749948779
0.99147053740106
0.991243357927591
0.0111237568184365
```
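For reference, the same forward pass can be sketched in numpy. It should reproduce the four outputs above (up to formatting) when the weight files are read as row-major matrices; note that the (inputs × neurons) orientation of those matrices is an assumption on my part, not something stated here.

```python
import numpy as np

# Rough sketch of the same forward pass; assumes each weights file is a
# row-major matrix laid out as (inputs x neurons).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = np.array([-2.35546875, -2.38671875, 3.63671875,
               3.521484375, -2.255859375, -2.732421875]).reshape(2, 3)
b1 = np.array([0.7958984375, 0.291259765625, 1.099609375])
W2 = np.array([-5.0625, -3.515625, -5.0625]).reshape(3, 1)
b2 = np.array([1.74609375])

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])    # all binary inputs
print(sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2))    # ~ the four values above
```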
## Installation
Requirements: BLAS 3.6.0+

- Download a release
- Install BLAS 3.6.0+
  - On Debian-based systems: `apt-get install -y libblas3`
  - On RPM-based systems: `yum install -y blas`
  - On macOS 10.3+, BLAS is pre-installed as part of the Accelerate framework
- Unzip the release and run `[sudo] ./install.sh`, or manually relocate the binaries to the path of your choice.
## About
`layer` is currently implemented as a proof of concept and supports a limited number of layer types: at present, only feed-forward layers that can be modeled as sequential, unidirectional pipelines.
Input values, weights and biases for parameterized layers, and output values are all read and written in row-major order, based on the shape parameters specified for each layer.
`layer` is implemented in CHICKEN Scheme.
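As a small illustration of the row-major convention, the sketch below flattens a hypothetical `--input-shape=32,32,3` input in row-major (C) order, assuming the shape is interpreted as (rows, columns, channels):

```python
import numpy as np

# Hypothetical input with --input-shape=32,32,3, assumed (rows, columns, channels).
x = np.arange(32 * 32 * 3).reshape(32, 32, 3)

flat = x.flatten(order="C")   # row-major: the last axis (channels) varies fastest
print(flat[:6])               # [0 1 2 3 4 5] -- channels of the first two entries
assert (flat.reshape(32, 32, 3) == x).all()   # reading back reverses the flattening
```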
## License
Copyright © 2018-2019