utensor_cgen: C/C++ code generator for uTensor
Version: v1.0.0+dirty
Documentation update: Feb 09, 2022
Installation (Python 2 & 3)
For Developers
- with pipenv:
# install `utensor_cgen` (develop mode)
$ PIPENV_VENV_IN_PROJECT=1 pipenv install -d
# spawn a subshell and activate virtualenv
$ pipenv shell
# get help message of `utensor-cli`
$ utensor-cli -h
Troubleshooting with pipenv
If you run into trouble installing with pipenv, try
$ PIPENV_VENV_IN_PROJECT=1 pipenv install -d --skip-lock
There is a known issue between pip and pipenv; please refer to this issue for details.
- short answer: downgrading to pip==18.0 may help :)
TensorFlow requires setuptools<=39.1.0 (the latest is 40.4.3 at the time this README was written):
- please downgrade to setuptools==39.1.0
- my recommendation is to use virtualenv
Overall Architecture
Basic Usage
Model File Inspection
$ utensor-cli show <model.pb>
Shows all nodes and detailed information of the given pb file or a uTensorGraph pickle file.
Run utensor-cli show --help for detailed information.
Convert Model File to C/C++ Code
$ utensor-cli convert <model.pb> \
--output-nodes=<node name>[,<node name>,...] \
[--config=config.toml]
Converts the given pb file into cpp/hpp files.
Note that --output-nodes is a required option: the names of the
nodes you want to output, separated by commas for multiple values.
In graph theory terminology, they are the leaf nodes of your graph.
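To make the "leaf node" notion concrete, here is a toy, library-independent sketch (the graph and node names below are made up for illustration, not taken from utensor_cgen): the output nodes are exactly the nodes that no other node consumes.

```python
def leaf_nodes(edges):
    """Return nodes that never feed another node.

    `edges` is a list of (producer, consumer) pairs.
    """
    producers = {src for src, _ in edges}
    nodes = {n for edge in edges for n in edge}
    return sorted(nodes - producers)

# A tiny hypothetical graph: x, w -> matmul -> logits -> pred
edges = [("x", "matmul"), ("w", "matmul"),
         ("matmul", "logits"), ("logits", "pred")]
print(leaf_nodes(edges))  # ['pred']
```

In this toy graph, `pred` is the only node with no consumers, so it is what you would pass to --output-nodes.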
Use --config to pass a configuration file to the cli; you can use the
generate-config command to generate one (see below).
Example
$ utensor-cli convert simple_model.pb --output-nodes=pred,logits
Run utensor-cli convert --help
for detailed information.
Configuration
utensor-cli uses toml as its configuration format.
You can generate a configuration file for a given target as follows:
$ utensor-cli generate-config --target <target name> [-o filename.toml]
This command will generate a toml file listing all configurable values with
their defaults. You can modify the values and pass the file to the cli with
the --config flag.
Example
# generate config file
$ utensor-cli generate-config --target utensor -o myconfig.toml
# after editing myconfig.toml
$ utensor-cli convert mymodel.pb --config=myconfig.toml --output-nodes=output,...
Use utensor_cgen as a Library
Subgraph Isomorphic Matcher
With uTensorGraphMatcher, performing isomorphic subgraph matching
along with replacing or manipulating the matched subgraph(s) takes just a
few lines of code:
from utensor_cgen.matcher import uTensorGraphMatcher

# `pattern_ugraph` is the pattern to match
pattern_ugraph = ...
matcher = uTensorGraphMatcher(pattern_ugraph)
# a larger graph to perform subgraph matching on
subject_ugraph = ...
# `matches` is a list of `uTensorGraphMatch` objects
matches = matcher.match_all(subject_ugraph)
if matches:
    # do stuff with the matches
    ...
Use Case: Node Fusion
Note: we'll use operation/node/layer interchangeably in this documentation.
- conv -> relu -> pooling is a commonly seen pattern in convolutional neural
networks (CNNs): a 2D convolution followed by a relu layer and then a pooling
down-sampling layer.
- With our uTensorGraphMatcher, you can locate such patterns in your CNN model
and fuse/replace the matched nodes into one optimized
QuantizedFusedConv2DMaxpool node.
- Left: original graph
- Middle: matched convolution layer
- Right: the matched layer replaced with a specialized
QuantizedFusedConv2DMaxpool node
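To illustrate the fusion idea itself (this is a deliberately simplified, library-independent sketch, not the uTensorGraphMatcher implementation, and it only handles a linear chain rather than a general graph), one can scan an op chain for the conv -> relu -> maxpool pattern and splice in a single fused op; only the fused-op name comes from the text above, everything else is made up:

```python
PATTERN = ["conv", "relu", "maxpool"]
FUSED = "QuantizedFusedConv2DMaxpool"  # fused-op name from the docs

def fuse_chain(ops):
    """Replace every occurrence of PATTERN in a linear op chain with FUSED."""
    out, i = [], 0
    while i < len(ops):
        if ops[i:i + len(PATTERN)] == PATTERN:
            out.append(FUSED)          # collapse the three matched ops
            i += len(PATTERN)
        else:
            out.append(ops[i])
            i += 1
    return out

print(fuse_chain(["conv", "relu", "maxpool", "dense", "softmax"]))
# ['QuantizedFusedConv2DMaxpool', 'dense', 'softmax']
```

The real matcher does the analogous replacement on an arbitrary DAG, taking care of rewiring the matched subgraph's inputs and outputs.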
Use Case: Dropout Layer Removal
- Though dropout is an effective technique for improving the training
performance of your model, it is not necessary during the inference phase.
- In mainstream frameworks such as TensorFlow or PyTorch, a dropout layer is
typically implemented with other elementary operations/nodes. As a result,
finding and removing those nodes for inference optimization (in both model
size and prediction time) is non-trivial and error-prone.
- With our uTensorGraphMatcher, you can find and remove the dropout nodes as
illustrated in the following picture.
- Left: original graph with dropout layers
- Middle: matched dropout layers
- Right: graph with dropout layers removed
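The rewiring involved in node removal can be sketched as follows (a toy model, not the utensor_cgen API: a real dropout layer expands into several elementary nodes, while this sketch treats it as one single-input node in a `{node: [input nodes]}` dict):

```python
def remove_node(graph, node):
    """Remove a single-input `node` and reconnect its consumers to its input."""
    (src,) = graph.pop(node)              # the node's single data input
    for inputs in graph.values():
        # every consumer of `node` now reads from `src` instead
        inputs[:] = [src if i == node else i for i in inputs]
    return graph

g = {"dense": ["x"], "dropout": ["dense"], "logits": ["dropout"]}
print(remove_node(g, "dropout"))  # {'dense': ['x'], 'logits': ['dense']}
```

After removal, `logits` consumes `dense` directly, which is exactly the shape of the "graph with dropout layers removed" panel above.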
For now, we mainly use TensorFlow for declaring the pattern graph for the
matcher. A high-level graph builder is on its way; see Future Works for details.
Offline Tensor Memory Allocation
Consider the following simple multi-layer perceptron (simple_mnist.pb):
Once the optimization transformer tensor_alloc, an offline tensor memory
allocation planner, is enabled, utensor-cli will generate uTensor runtime
code that uses the following optimized allocation plan:
- y-axis: tensor names, ordered by topological sorting
- x-axis: the memory span occupied by each tensor, that is, the memory
address offset and the size of the tensor
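The core idea behind such a plan can be sketched with a toy greedy planner (NOT the actual tensor_alloc implementation; the tensor names, sizes, and lifetimes below are made up): each tensor has a byte size and a [first_use, last_use] lifetime, and we place it at the lowest offset that does not collide with any tensor alive at the same time, so tensors with disjoint lifetimes share memory.

```python
def plan(tensors):
    """tensors: iterable of (name, size, first_use, last_use) -> {name: offset}."""
    placed = []   # (offset, size, first_use, last_use) of tensors already planned
    offsets = {}
    for name, size, first, last in tensors:
        offset = 0
        for o, s, f, l in sorted(placed):      # scan by ascending offset
            lifetimes_overlap = last >= f and l >= first
            memory_overlaps = offset < o + s and o < offset + size
            if lifetimes_overlap and memory_overlaps:
                offset = o + s                 # slide past the live tensor
        placed.append((offset, size, first, last))
        offsets[name] = offset
    return offsets

# input dies (step 1) before output is created (step 2), so output can
# reuse input's memory at offset 0; hidden is alive alongside both.
print(plan([("input", 784, 0, 1), ("hidden", 128, 1, 2), ("output", 10, 2, 3)]))
# {'input': 0, 'hidden': 784, 'output': 0}
```

This lifetime-based reuse is what makes the x-axis spans in the plot overlap in address space without ever overlapping in time.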
How to Serve Your Model on uTensor
TensorFlow
- Freeze your tensorflow.Graph
  - please refer to this issue track for details
  - especially this comment by Robin2091
- Follow the instructions in the Installation (Python 2 & 3) section to
install utensor_cgen
  - utensor-cli should then be available in your console
- Inspect your pb file to find the output node
# verbose mode
$ utensor-cli show graph.pb
# or oneline mode
$ utensor-cli show graph.pb --oneline
- Convert the protobuf file to C/C++ source code with utensor-cli
  - suppose the output node is pred in graph.pb:
$ utensor-cli convert --output-nodes=pred graph.pb
- Compile your application code with the generated C/C++ and weights files
  - You should find your model's C/C++ and weights files in the directories
models and constants respectively
Testing
- follow the steps in the For Developers section
- run the tests as follows:
# run with `make`
$ make tests
# run with `pipenv`
$ pipenv run pytest tests
Future Works
- A high-level graph builder API for building uTensorGraph.
  - Currently, utensor_cgen uses the TensorFlow API to build its IR graph,
uTensorGraph.
  - With a high-level graph builder, users can build their uTensorGraph
easily without having to take care of the integrity of the graph; the
builder will take care of that automatically.