DeepLTK - Deep Learning Toolkit for LabVIEW
Release Notes
V8.0.1
General Description
This is a major update designed to speed up the toolkit's performance in GPU mode by introducing new data types for storing datasets of different numeric types.
​
Backward Compatibility
This version breaks backward compatibility with previous versions of the toolkit.
​
Features
-
Added support for new data types (I8, U8, I16, U16 and U32ARGB) for representing datasets. Inputs in dataset clusters are now represented as variants.
-
Added new API (polymorphic "NN_Variant_To_DVR.vi") to convert the variant data in datasets to DVRs of the supported data types.
-
"NN_Layer_Create(Input1D/3D).vi" is modified to allow specifying the input data type. The input data type and dimensionality can also be detected automatically by wiring a dataset to the VI's "Dataset" input. Data normalization can now be performed within the network by providing the corresponding Shift(s) and Scaler(s) values to the input layer at creation time.
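For reference, the normalization applied inside the input layer is equivalent to a per-element shift-and-scale. The sketch below illustrates the idea only; the parameter names and exact semantics of DeepLTK's Shift/Scaler inputs are assumptions:

```python
def normalize(values, shift, scale):
    """Illustrative shift-and-scale normalization, as an input layer
    with Shift/Scaler parameters might apply it: (v - shift) * scale."""
    return [(v - shift) * scale for v in values]

# Map 8-bit pixel values [0, 255] to approximately [0, 1]:
pixels = [0, 128, 255]
normalized = normalize(pixels, shift=0.0, scale=1.0 / 255.0)
```

Performing this step inside the network means raw integer data (e.g. U8 images) can be fed directly, without a separate preprocessing pass.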
-
"NN_Set_Input.vi" polymorphic VI is enhanced to support the new data types. Previous instances have been renamed to conform with the polymorphic VI name, which might cause a backward compatibility issue; in that case, replace the old VIs with the "NN_Set_Input.vi" polymorphic VI.
-
Added new API "NN_Get_Layer(byName).vi" to easily retrieve layers from a network by name.
-
Added new utility API "NN_Get_T_dt.vi" for simplified measurement of execution time.
-
Added Quick Drop Shortcut (Ctrl+Space, Ctrl+L) to display cluster element labels.
-
Added new "GPU_Info" tool to the LabVIEW/Help/Ngene/DeepLTK menu to display information about available GPU(s) and installed drivers. "NN_GPU_Check_Drivers.vi" is renamed to "NN_GPU_Get_Info.vi".
-
Added new API "NN_GPU_Reset.vi" to reset GPU when needed.
-
Added MNIST Dataset to the toolkit installer.
-
Added "NN_Dims(V,H).ctl" to API.
​​
Optimizations
-
Improved inference speed by optimizing CPU to GPU data transfer infrastructure.
-
Modified Xavier weight initializer so the "Value" parameter now scales the standard deviation of the distribution.
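As a sketch of the described behavior (the exact formula DeepLTK uses is not documented here, so treat this as an assumption): Xavier initialization typically draws weights with standard deviation sqrt(2 / (fan_in + fan_out)), and the "Value" parameter now multiplies that standard deviation:

```python
import math
import random

def xavier_init(fan_in, fan_out, value=1.0, rng=None):
    """Draw fan_in * fan_out Gaussian weights whose standard deviation
    is value * sqrt(2 / (fan_in + fan_out)) (Xavier/Glorot scheme)."""
    rng = rng or random.Random(0)
    std = value * math.sqrt(2.0 / (fan_in + fan_out))
    weights = [rng.gauss(0.0, std) for _ in range(fan_in * fan_out)]
    return weights, std

weights, std = xavier_init(fan_in=128, fan_out=64, value=0.5)
```

With value=1.0 this reduces to plain Xavier initialization, so existing configurations keep their previous behavior.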
-
Removed deprecated "NN_get_Detections(YOLO_v2).vi" and "NN_get_Detections(YOLO_v2)(Batch).vi" from functions palette. "NN_Get_Detections.vi" and "NN_Get_Detections(Batch).vi" should be used instead.
-
Reduced memory consumption when Adam is used as the optimizer.
-
The network workspace size is now unlimited by default.
-
Improved error handling when batched data is provided at the input during inference: the data batch size is now checked against the network's batch size.
-
Updated the toolkit's VI execution priorities to improve performance.
​
Bug Fixes
-
Fixed a bug so that the SoftMax layer and Cross-Entropy loss function can be used independently of each other.
-
Fixed a bug where calling "NN_Destroy.vi" would impact other training processes.
-
Fixed a training instability bug where NaN results from a previous run could affect the next training session.
-
Fixed a bug where the activation type of the Activation layer was not correctly reflected in the .cfg file in the 1D case.
​​
Other Changes
-
Updated help file.
-
Changed the connector pane type and pinout for NN_Eval.vi polymorphic instances.
​
V7.0.1
Backward Compatibility
This is a major update which breaks backward compatibility with v6.x.x versions of the toolkit.
​
Features
-
Added support for YOLO_v4 layer. The modifications are reflected in "NN_Layer_Create" and "NN_Set_Loss" API VIs.
-
Added API for calculating confusion matrix for object detection tasks.
-
Added new common API "NN_get_Detections.vi" for getting detections from YOLO_v2/v4 layers. "NN_get_Detections(YOLO_v2).vi" is now deprecated.
-
Added support for Conv1D layer.
-
Added support for Activation layer.
-
Added Batch Normalization layer.
-
Added "Has_Bias" parameter in Convolutional and FC layers.
-
Added "epsilon" and "momentum" BN parameters in Convolutional and FC layers.
-
Added support for specifying the input layer when creating layers of the network. This enables complex architectures such as Wide ResNet.
-
Added new API "NN_Display_Confusion_Matrix.vi" for displaying a confusion matrix in a table.
-
Added API "NN_Set_Max_GPU_WS_Size.vi" for controlling the maximum GPU workspace memory size.
-
NN_Destroy.vi now returns the paths of the generated ".cfg", ".bin" and ".svg" files.
-
Added support for Nvidia RTX 40xx generation of GPUs.
-
Deprecated support for older (Kepler) GPUs.
-
Removed "Out_Idx Ref." elements from NN_Dataset(xxx).ctl types.
​​
Optimizations
-
Optimized GPU memory utilization for intermediate buffers (workspaces).
-
Improved YOLO_v2 performance.
-
Improved the quality of minibatch sampling from the dataset when the sampling mode is set to "random": the distribution is now uniform.
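A uniform random minibatch sampler can be sketched as follows (illustrative only; DeepLTK's internal sampler is not exposed, so the function below is a hypothetical stand-in):

```python
import random

def sample_minibatch(dataset_size, batch_size, rng=None):
    """Pick batch_size distinct indices uniformly at random, so every
    sample in the dataset is equally likely to land in the minibatch."""
    rng = rng or random.Random()
    return rng.sample(range(dataset_size), batch_size)

batch = sample_minibatch(dataset_size=1000, batch_size=32, rng=random.Random(42))
```

Sampling without replacement from the full index range is what guarantees the uniform coverage described above.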
-
Updated NN_Eval to display more results for object detection tasks. It now returns more accurate metrics (e.g. mAP@0.5, mAP@0.75, mAP@0.5-0.95, and F1 score).
-
SVG diagrams of network topologies now provide more information for configuration parameters of Conv1D and Conv2D layers.
​
Bug Fixes
-
Fixed a mismatch of the "epsilon" and "momentum" parameters of the Batch Normalization function between CPU and GPU modes.
-
Fixed L1 decay/regularization bug in GPU mode.
-
Fixed an error occurring when calling BN Merge during training with the Adam optimizer.
-
Fixed bugs in GPU error handling.
-
Fixed an issue of missing DLL(s) when creating an installer based on the toolkit.
-
Fixed typos in the Help file.
​
V6.2.1
Backward Compatibility
This is a minor update which does not break backward compatibility with v6.x.x versions of the toolkit.
​
Features
-
Added ReLU6 activation function.
-
Removed the requirement for using Conditional Disable Symbol ("NNG") for enabling GPU acceleration.
-
Removed GPU specific examples.
​
Bug Fixes
-
Fixed a bug in batch normalization.
-
Fixed a bug in the calculation of the MSE loss.
-
Fixed a bug in mAP metric calculations for object detection.
-
Increased the maximum number of layers in the network from 100 to 500.
-
Other minor bug fixes.
​
Other Changes
-
Updated help file.
-
Added link to examples on GitHub in example instructions.
​
V6.1.1
Backward Compatibility
This is a major update which does not break backward compatibility with v5.x.x versions of the toolkit.
Features
-
Added support for the Nvidia RTX 3xxx series of GPUs by upgrading CUDA libraries.
-
CUDA libraries are now part of the toolkit installer, which eliminates the need to install them separately.
-
All augmentation operations are now accelerated on GPU, which greatly speeds up training when augmentations are enabled.
-
Support for older versions of LabVIEW is deprecated. LabVIEW 2020 and newer are supported starting with this release.
-
Improved DeepLTK library loading time in LabVIEW.
​
V5.1.1
Backward Compatibility
Important: This is a major update of the toolkit which breaks backward compatibility with previous (pre-v5.x.x) versions.
Features
-
Redesigned the process for specifying and configuring the loss function. Setting the loss function and configuring the training process are now separate steps, and a new API for setting the loss function (NN_Set_Loss.vi) has been added.
-
Modified “NN_Train_Params.ctl”:
-
Loss-function-related parameters have been removed from “NN_Train_Params.ctl”.
-
“Momentum” is replaced with “Beta_1” and “Beta_2” parameters for specifying first- and second-order momentum coefficients.
-
“Weight_Decay” is replaced with “Weight_Decay(L1)” and “Weight_Decay(L2)” for specifying L1 and L2 weight regularization.
-
“NN_Eval_Test_Error_Loss.vi” is deprecated. Its functionality is now split between “NN_Predict.vi” and “NN_Eval.vi”.
-
Added support for Adam optimizer.
-
Added support for Swish and Mish activation functions.
-
The Conv3D layer is now renamed to Conv2D.
-
Added an advanced Conv2D layer (Conv2D_Adv), which supports:
-
dilation
-
grouped convolution
-
non-square kernel window dimensions
-
non-square stride sizes
-
different vertical and horizontal padding sizes
-
Modified Upsample layer configuration control (Upsample_cfg.ctl) to separate vertical and horizontal strides.
-
Added new network performance evaluation API (NN_Eval.vi).
-
Label_Idx is removed from “NN_Predict.vi”. Classification predictions can now be converted to categorical/binary labels with the help of “NN_to_Categorical.vi”.
-
Added new API for converting floating point predictions from the network to categorical/binary labels (NN_to_Categorical.vi).
-
Added new API for converting categorical/binary labels to one-hot-encoded format (NN_to_OneHotEncoded.vi).
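The two conversions above can be sketched as follows (the function names are hypothetical stand-ins mirroring the VIs; the actual VI interfaces may differ):

```python
def to_categorical(predictions):
    """Convert per-class scores to categorical labels (argmax per sample),
    analogous to what NN_to_Categorical.vi does for classification outputs."""
    return [max(range(len(p)), key=p.__getitem__) for p in predictions]

def to_one_hot(labels, num_classes):
    """Convert categorical labels to one-hot vectors,
    analogous to NN_to_OneHotEncoded.vi."""
    return [[1.0 if i == label else 0.0 for i in range(num_classes)]
            for label in labels]

preds = [[0.1, 0.7, 0.2], [0.8, 0.1, 0.1]]
labels = to_categorical(preds)
one_hot = to_one_hot(labels, num_classes=3)
```

Keeping the two directions as separate utilities lets raw network outputs, class indices, and one-hot targets be converted as needed at any point in the pipeline.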
-
MaxPool and AvgPool layers now support non-square window dimensions.
-
Added new API control for 3D dimension representation (NN_Dims(C,H,W).ctl).
-
The Region layer is now renamed to YOLO_v2.
-
Removed loss-related configuration parameters (moved to the NN_Set_Loss.vi configuration control).
-
Anchor dimensions in the YOLO_v2 layer should now be provided in relative format (relative to the input image dimensions).
-
The YOLO_v2 layer can automatically create the preceding (last) Conv2D layer to match the required number of classes and anchors.
-
Added support for the Channel-Wise Cross-Entropy loss function for 3D-output networks with a channel-wise SoftMax output layer.
-
Added a "Train?" control to “NN_Forward.vi” to take into account whether the network is in training state or not.
-
“NN_Calc_Confusion_Matrix.vi” is converted to a polymorphic VI whose instance is chosen based on the dataset provided at the input.
-
Optimized “NN_Draw_Rectangle.vi” for speed.
-
Increased Confusion Matrix table display precision from 3 to 4 digits.
-
Updated reference examples to make them compatible with latest changes.
-
Now DeepLTK supports CUDA v10.2 and CUDNN v7.5.x.
-
Configuration file format is updated to address feature changes.
-
Help file renamed to “DeepLTK_Help.chm”.
-
Help file updated to reflect recent changes.
​
Enhancements
-
Fixed a bug where MaxPool and AvgPool layers incorrectly calculated output values at the edges.
-
Fixed a bug related to deployment licensing on RT targets.
-
Fixed a bug where the receptive field calculation algorithm did not take the dilation factor into account.
-
Corrected accuracy metrics calculation in “NN_Eval(In3D_OutBBox).vi”.
-
Fixed typos in API VI descriptions and control/indicators.
-
Fixed incorrect receptive field calculation for networks containing upsampling layer(s).
-
Fixed incorrect text in error messages.
​
​
V4.0.0
Features
-
General performance improvements.
-
Added support for ShortCut (Residual) layer. Now ResNet architectures can be trained.
-
Added support for Concatenation layer.
-
Updated layer creation API to obtain layer's reference at creation.
-
Added API to calculate network predictions over a dataset.
-
Added utility VI for Bounding Box format conversion.
-
Updated dataset data-type (cluster) to include file paths array of data samples.
-
Updated dataset data-type (cluster) to include labels as an array of strings.
-
Added possibility to set custom image dimensions (network's input resolution) when creating network topology from configuration file.
-
Added possibility to set custom mini batch size when creating network from configuration file.
-
Added utility VI to split large datasets into smaller portions (e.g. split training dataset into train and validation).
-
Added API to calculate and render a confusion matrix based on network predictions.
-
Added API to get detections over a batch of input samples.
-
Added API for mAP (mean Average Precision) evaluation for object detection tasks.
-
Added WarmUp feature into Learning Rate update policy.
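A typical warm-up policy ramps the learning rate linearly from near zero over the first N iterations before the main schedule takes over; a minimal sketch (the exact DeepLTK policy parameters are not documented here, so treat the shape below as an assumption):

```python
def warmup_lr(base_lr, step, warmup_steps):
    """Linearly scale the learning rate up to base_lr during the first
    warmup_steps iterations, then hold it at base_lr."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr

lrs = [warmup_lr(0.01, s, warmup_steps=5) for s in range(7)]
```

Warm-up helps stabilize the first iterations of training, when weights and optimizer statistics are still far from settled.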
-
Added API to get weights (values and references) from the layer.
-
Updated CUDA and CUDNN support to CUDA 10.1 and CUDNN 7.5.6.
-
Deprecated some configuration parameters in the layer creation API.
-
Updated examples to comply with the latest version of the toolkit.
-
Updated some API VI icons.
-
Changed data-flow wiring in SVG diagram for ShortCut, Concat layers and updated colors.
-
Deprecated Detection layer.
-
Sped up training and inference on GPU.
-
Added dependency-requirements checking during the toolkit's installation.
​
Enhancements
-
Fixed a bug preventing the use of more than one anchor box.
-
Fixed a bug that caused a "missing nng32.dll" error in the 32-bit version of LabVIEW.
-
Fixed a bug causing LabVIEW to crash in LabVIEW 2017 and LabVIEW 2018.
-
Fixed a bug causing LabVIEW to crash when deploying networks with DropOut and/or DropOut3D layers.
-
Fixed a rarely occurring bug when training networks with LReLU activation on GPU.
-
Other bug fixes.
​​
V3.1.0
Features​
-
Added support for training with batch normalization.
-
Added a utility to merge/fuse batch normalization into Conv3D or FC layers.
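Fusing batch normalization into a preceding convolution or FC layer folds the BN statistics into the layer's weights and bias, removing the BN computation at inference time. A per-channel sketch of the standard fold (assuming BN parameters gamma, beta, running mean mu, and variance var; this is the textbook formula, not DeepLTK's internal code):

```python
import math

def fuse_bn(w, b, gamma, beta, mu, var, eps=1e-5):
    """Fold BN(y) = gamma * (y - mu) / sqrt(var + eps) + beta into a layer
    computing y = w * x + b, returning fused (w', b') such that
    w' * x + b' == BN(w * x + b) for scalar per-channel parameters."""
    scale = gamma / math.sqrt(var + eps)
    return w * scale, (b - mu) * scale + beta

# Example: with gamma=1, beta=0, mu=b, the fused bias vanishes.
w_f, b_f = fuse_bn(w=2.0, b=1.0, gamma=1.0, beta=0.0, mu=1.0, var=1.0 - 1e-5)
```

The same fold applies channel-wise to a convolution kernel, which is why the merged network produces identical outputs while running one fewer operation per layer.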
-
Added API to PULL/PUSH data from/to GPU.
-
Added utility to check GPU driver installation.
Enhancements
-
Fixed issue with asynchronous computations on GPU.
-
Fixed dataset's size element type representation in In3D_Out3D dataset control.
-
Added missing API for dataset datatypes in front panel's function controls.
-
Fixed help links in the API.
​
V3.0.1
Features​
-
Added support for training networks for object detection.
-
Added VIs for anchor box calculation based on annotated dataset.
-
Added VIs for calculating mAP (Mean Average Precision) for evaluating networks for object detection.
-
Added reference example for object detection.
-
Now, when initializing weights, the number of first layers to initialize can be specified. This is useful for transfer learning.
-
Added API to Set/Get DVR values (polymorphic VIs for 1D, 2D, 3D and 4D SGL arrays).
-
Added new type of dataset memory for object detection.
-
Added UpSample layer.
-
Added support for online deployment license activation.
-
Updated the help file to reflect the changes.
Enhancements
-
Fixed GPU memory leakage issue.
​
​
V2.0.1
Features​
-
Added support for acceleration on GPUs.
-
Added GPU related examples.
-
Restructured help document.
-
Added instructions for GPU toolkit installation.
-
Added description for new examples.
-
Updated GPU related API descriptions.
Enhancements
-
Bug fixes and performance improvements.
V1.3.3
Features​
-
Removed Augmentation layer.
-
Added augmentation functionality into Input3D layer.
-
Added training support for datasets with different dimensionality:
-
1-dimensional input -> 1-dimensional output
-
3-dimensional input -> 1-dimensional output
-
3-dimensional input -> 3-dimensional output
-
Added API for checking dataset compliance (i.e. input and output dimensions) with the built network.
-
Conv3D now supports DropOut3D as the input layer.
-
MaxPool and AvgPool layers now support non-square inputs.
-
Added global MaxPool and AvgPool functionality.
-
YOLO example: detected bounding boxes are now provided in a more convenient format for further processing.
-
YOLO example: custom labels can now be provided to be shown on the display.
Enhancements
-
Performance Improvements.
-
Improved Error Handling at SoftMax Layer creation.
-
Fixed metrics calculation for the FC layer: the parameter count now includes biases as well.
-
Improved Error Handling for checking dataset compliance with the built network.
-
Fixed a bug when writing a Region layer into the configuration file.
​
V1.2.0
Features
-
Added support for deployment on NI’s RT targets.
-
Added API to get/set layer weights.
-
Added API to get layer outputs/activations.
-
Added API to get the next layer.
-
Optimized the weight initialization process.
-
Error Handling: Check input layer type at layer creation.
-
Error Handling: Check input dimensions when creating a Conv3D layer.
-
Error Handling: Check input dimensions when creating a Pool layer.
-
Error Handling: Check input data dimensions when setting Input3D layer outputs.
Enhancements
-
Fixed a bug occurring when a neural network was trained on non-square images.
-
Fixed a bug in get_next_layer.vi.
-
Added a warning when getting layer data if an improper layer type is wired at the input.
-
Fixed a dataset indexing bug when getting a new minibatch with random sampling enabled.
-
Updated the instructions in the MNIST training example.
​
V1.1.0
Features
-
Added new examples - MNIST_Classifier_CNN(Train).vi, MNIST_Classifier(Deploy).vi.
-
Added deployment licensing functionality.
​
Enhancements
-
Updated help file.
-
Fixed help file location in the installer.
-
Corrected toolkit VIs’ default values.
-
Fixed a bug when creating some layers (SoftMax, DropOut, DropOut3D) from a configuration file.
-
Fixed errors generated by NN_Destroy.vi when an empty network is provided at the input.
-
The probability set at creation of DropOut layers is now coerced into the (0..1) range, and a warning is generated.
-
Fixed an issue with propagating warnings through the toolkit VIs.
-
Other bug fixes.
​
V1.0.3
-
Initial Release.
​
​