v0.12.0 · sekwiatkowski/komputation@800d713 · GitHub

source link: https://github.com/sekwiatkowski/komputation/commit/800d713f48e469a8a64d2c04760fda9f6e2e4af5
v0.12.0:
- Simplified the specification of networks
- The input dimensions of the continuations of the network are computed automatically (sketched after this list).
- Removed the Layer suffix from instruction factory functions
- Overloaded the instruction factory function to simplify the specification of initialization strategies
- Renamed Direction.Forward/Backward to Direction.LeftToRight/RightToLeft
- Shortened "ActivationFunction" to "Activation" and "ActivationLayer" to "Activation"
- Generalized BaseCudaEntrywiseActivationLayer to BaseCudaEntrywiseLayer
- The specification of the minimum length is required in the lookup instruction and optional in the input instruction.
- TREC categories are indexed based on all available training data.
- Renamed "forward" layer to "continuation" and shortened "combination layer" to "combination"
- Moved the architecture-specific interfaces from the general package to the respective architecture-specific packages
- Improved the names used in SparseAccumulator and SparseUpdate
- The series is passed on to the method of the ResultExtractionStrategy interface.
- Introduced CpuCombinationSeries to implement the addition of the weighted previous state and the weighted current input.
- Added the Cpu prefix to Series and ParameterizedSeries in preparation for the CUDA implementation of recurrent neural networks
- Optimized the performance of the RNN implementation by adding the bias to the input up front rather than at each step (see the sketch after this list)
- Fixed the specification of the number of rows in CpuLogisticLoss
- Renamed the "Negation" demo to "Not"
- Stopped experimenting with dynamic parallelism
- CudaIdentity now implements CudaActivation.
- Introduced a base class for higher-order layers
- Differentiated the CUDA continuation base class into one class for layers that change the number of columns and one class for layers that don't.
- Reused the code for the computation of launch configurations in CudaHashing and CudaGroupSum
- Fixed the sparse update in CudaLookup
- Added a "copy" helper function that encapsulates System.arraycopy for copies
- Added a setter to InputMemory that caches all possible data
- Clarified references to the hash table in CUDA optimizers
- CUDA layers pass a pointer to the length of the input data and the maximum length within the batch.
- Unified the activation instruction factory functions over the two architectures
- Moved the concatenation layer to a separate package
- Added an instruction for weightings with shared parameters that is separate from the instruction for the weighting layer that uses a dedicated parameter
- The two weighting instructions inherit from the new BaseWeighting class.
- Added instructions for the three series types: Series, ParameterizedSeries and CombinationSeries
- Refactored the CPU RNN factory function based on the instructions
- Continuation instructions implement HasOutputDimensions and CanSetInputDimensions, while entry point instructions only implement HasOutputDimensions (see the first sketch after this list).
- Inlined some CUDA C helper functions
- Moved the division by 2 in the squared loss function from the host to the device (see the note after this list)
- Added the missing scaling of gradients in some of the optimization kernels
- Refactored the for loops used to update entries in optimization kernels
- Temporarily removed the CUDA forward layer tests
- Updated the links in the README
- Upgraded to Kotlin 1.2.10
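
Regarding the simplified network specification and the automatic input dimensions mentioned at the top of this list, and the HasOutputDimensions/CanSetInputDimensions interfaces mentioned further down: the following is a minimal, hypothetical Kotlin sketch of how such inference can work. The names Input, Dense and network are illustrative placeholders, not the komputation API.

interface HasOutputDimensions { val outputDimension: Int }
interface CanSetInputDimensions { var inputDimension: Int }

class Input(override val outputDimension: Int) : HasOutputDimensions

class Dense(override val outputDimension: Int) : HasOutputDimensions, CanSetInputDimensions {
    // Filled in by the builder; callers never specify it.
    override var inputDimension = -1
}

// Wires the chain: each continuation's input dimension is the previous instruction's output dimension.
fun network(entryPoint: HasOutputDimensions, vararg continuations: Dense): List<HasOutputDimensions> {
    var previousOutput = entryPoint.outputDimension
    for (continuation in continuations) {
        continuation.inputDimension = previousOutput
        previousOutput = continuation.outputDimension
    }
    return listOf(entryPoint) + continuations
}

fun main() {
    // Only output dimensions are given; the input dimensions are inferred as 2 and 3.
    val layers = network(Input(2), Dense(3), Dense(1))
    layers.filterIsInstance<Dense>()
        .forEach { println("input = ${it.inputDimension}, output = ${it.outputDimension}") }
}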
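
Regarding the RNN bias optimization: a minimal illustrative sketch (not the library's code, and the array layout is an assumption) of folding the bias into the weighted input for all steps at once, so the sequential per-step computation no longer needs to add it.

// weightedInput holds W * x_t for every step, stored step after step; after this call
// it also contains the bias, so each step only adds the weighted previous state.
fun addBiasToWeightedInput(weightedInput: FloatArray, bias: FloatArray, numberSteps: Int) {
    val dimension = bias.size
    for (step in 0 until numberSteps) {
        val offset = step * dimension
        for (index in 0 until dimension) {
            weightedInput[offset + index] += bias[index]
        }
    }
}

fun main() {
    val weightedInput = floatArrayOf(0.1f, 0.2f, 0.3f, 0.4f) // two steps of dimension 2
    addBiasToWeightedInput(weightedInput, floatArrayOf(1f, 1f), numberSteps = 2)
    println(weightedInput.contentToString()) // [1.1, 1.2, 1.3, 1.4]
}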
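
The "copy" helper plausibly wraps System.arraycopy along these lines (the actual signature is not shown in the commit message):

// Copies the whole source array into the beginning of the target array.
fun copy(source: FloatArray, target: FloatArray) =
    System.arraycopy(source, 0, target, 0, source.size)

fun main() {
    val source = floatArrayOf(1f, 2f, 3f)
    val target = FloatArray(3)
    copy(source, target)
    println(target.contentToString()) // [1.0, 2.0, 3.0]
}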
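
Note on the squared-loss item: with the conventional factor of one half, the loss and its gradient are

L = 1/2 * sum_i (prediction_i - target_i)^2
dL/dprediction_i = prediction_i - target_i

so the factor cancels in the gradient; the change only concerns whether the host or the CUDA kernel applies the division by 2 to the forward result.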
aisummary committed Dec 24, 2017
1 parent f38a845 commit 800d713
