Re: Which version of pytorch is available with nnstreamer-pytorch

Jihoon Lee


Apart from the pipeline description, backward compatibility means that PyTorch 1.10.1 can load a model built with PyTorch 1.3.1, not vice versa.

(e.g., Excel 2010 can open a file saved with Excel 2003, but not the other way around.)

If the problem persists, you might want to check whether libtorch 1.3.1 is capable of loading the model, or switch to a suitable libtorch version.
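As a quick sanity check, you can inspect the serialization-format version file that `torch.jit.save` embeds in the `.pt` zip archive, using only the standard library (a hedged sketch; the archive layout assumed here is how TorchScript packages models, and the file path in the usage comment is hypothetical):

```python
import zipfile

def torchscript_format_version(pt_path):
    """Return the serialization-format version string stored inside a
    TorchScript zip archive (e.g. in an entry like 'model/version'),
    or None if no such entry exists."""
    with zipfile.ZipFile(pt_path) as zf:
        for name in zf.namelist():
            if name.endswith("/version"):
                return zf.read(name).decode().strip()
    return None

# Example usage (path is hypothetical):
# print(torchscript_format_version("yolov5s.torchscript.pt"))
```

A higher format version than the loading libtorch understands is a strong hint of the forward-compatibility problem described above.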





From: nnstreamer-technical-discuss@... <nnstreamer-technical-discuss@...> On Behalf Of 이병헌
Sent: Thursday, December 2, 2021 3:09 PM
To: nnstreamer-technical-discuss@...
Subject: [NNStreamer Technical Discuss] Which version of pytorch is available with nnstreamer-pytorch


Hi devs,

I’m trying to use a yolov5s model file, exported from the yolov5 GitHub code, with an nnstreamer pipeline.

I’m working in a development environment built from nnstreamer/tools/docker/ubuntu18.04-run/Dockerfile.


Below is my test pipeline.


const char *string = "rtspsrc location=rtsp://address:port/mount latency=0 \
    protocols=4 ! rtph265depay ! avdec_h265 ! \
    videoscale ! videoconvert ! video/x-raw,format=RGB,width=640,height=640 ! \
    tensor_converter ! tensor_filter framework=pytorch \
        model=../../tf_model/ \
        input=3:640:640:1 inputname=x inputtype=float32 \
        output=1:25200:85 outputname=416 outputtype=float32 \
    ! tensor_sink name=tensor_sink";
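To rule out the RTSP source and decoder while debugging the model load, a simplified pipeline description with a test source could be tried instead (a sketch; the tensor_filter properties are copied unchanged from the pipeline above):

```
videotestsrc ! videoconvert ! video/x-raw,format=RGB,width=640,height=640 ! \
  tensor_converter ! tensor_filter framework=pytorch model=../../tf_model/ \
    input=3:640:640:1 inputname=x inputtype=float32 \
    output=1:25200:85 outputname=416 outputtype=float32 ! \
  tensor_sink name=tensor_sink
```

If this variant fails with the same "Exception while loading the model" message, the problem is in the model/runtime combination rather than in the streaming part of the pipeline.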


But with this pipeline, I get the error below:


failed to initialize the object: PyTorch

0:00:00.376511035    29 0x559a1ba3d4c0 WARN                GST_PADS gstpad.c:1149:gst_pad_set_active:<tensorfilter0:sink> Failed to activate pad

** Message: 05:11:55.607: gpu = 0, accl = cpu


** (tester:29): CRITICAL **: 05:11:55.610: Exception while loading the model: 


aten::_convolution(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool benchmark, bool deterministic, bool cudnn_enabled) -> (Tensor):

Expected at most 12 arguments but found 13 positional arguments.


/opt/conda/lib/python3.8/site-packages/torch/nn/modules/ _conv_forward

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/ forward

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/ _slow_forward

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/ _call_impl

/usr/src/app/yolov5/models/ forward_fuse

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/ _slow_forward

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/ _call_impl

/usr/src/app/yolov5/models/ forward

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/ _slow_forward

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/ _call_impl

/usr/src/app/yolov5/models/ _forward_once

/usr/src/app/yolov5/models/ forward

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/ _slow_forward

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/ _call_impl

/opt/conda/lib/python3.8/site-packages/torch/jit/ trace_module

/opt/conda/lib/python3.8/site-packages/torch/jit/ trace export_torchscript run

/opt/conda/lib/python3.8/site-packages/torch/autograd/ decorate_context main <module>

Serialized   File "code/__torch__/torch/nn/modules/", line 12

    bias = self.bias

    weight = self.weight

    input0 = torch._convolution(input, weight, bias, [1, 1], [1, 1], [1, 1], False, [0, 0], 1, False, False, True, True)

             ~~~~~~~~~~~~~~~~~~ <--- HERE

    return input0



** (tester:29): CRITICAL **: 05:11:55.610: Failed to load model


It looks like this happens when the yolov5 PyTorch version and the nnstreamer runtime PyTorch version differ.

But I heard that PyTorch is backward compatible.

The yolov5 PyTorch version is 1.10.1,

and according to the description of nnstreamer-pytorch on GitHub, it looks like it is built against PyTorch 1.3.1.

At this point, I’m stuck on how to proceed.


If you have any advice, please let me know.


Additionally, for the tensor_filter element, how can I find the values for the “inputname” and “outputname” properties from a PyTorch model?

When I convert a PyTorch model to an ONNX model, there are arguments where I can specify them.

But I cannot find such arguments when I export the model as a normal .pt file or as TorchScript.




