Which version of pytorch is available with nnstreamer-pytorch 2.1.0.0-0~202111300837~ubuntu18.04.1?
이병헌
Hi dev. I'm trying to use a yolov5s.torchscript.pt file, exported with the yolov5 GitHub code, in an nnstreamer pipeline. My development environment is built from nnstreamer/tools/docker/ubuntu18.04-run/Dockerfile.

Below is my test pipeline.

const char *string = "rtspsrc location=rtsp://address:port/mount latency=0 \
    protocols=4 ! rtph265depay ! avdec_h265 ! \
    videoscale ! videoconvert ! video/x-raw,format=RGB,width=640,height=640 ! \
    tensor_converter ! tensor_filter framework=pytorch \
    model=../../tf_model/yolov5s.torchscript.pt \
    input=3:640:640:1 inputname=x inputtype=float32 \
    output=1:25200:85 outputname=416 outputtype=float32 \
    ! tensor_sink name=tensor_sink";

With this pipeline, I get the error below:

failed to initialize the object: PyTorch
0:00:00.376511035 29 0x559a1ba3d4c0 WARN GST_PADS gstpad.c:1149:gst_pad_set_active:<tensorfilter0:sink> Failed to activate pad
** Message: 05:11:55.607: gpu = 0, accl = cpu
** (tester:29): CRITICAL **: 05:11:55.610: Exception while loading the model:
aten::_convolution(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool benchmark, bool deterministic, bool cudnn_enabled) -> (Tensor):
Expected at most 12 arguments but found 13 positional arguments.:
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/conv.py(442): _conv_forward
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/conv.py(446): forward
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1090): _slow_forward
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1102): _call_impl
/usr/src/app/yolov5/models/common.py(49): forward_fuse
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1090): _slow_forward
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1102): _call_impl
/usr/src/app/yolov5/models/common.py(207): forward
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1090): _slow_forward
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1102): _call_impl
/usr/src/app/yolov5/models/yolo.py(149): _forward_once
/usr/src/app/yolov5/models/yolo.py(126): forward
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1090): _slow_forward
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py(1102): _call_impl
/opt/conda/lib/python3.8/site-packages/torch/jit/_trace.py(958): trace_module
/opt/conda/lib/python3.8/site-packages/torch/jit/_trace.py(741): trace
export.py(71): export_torchscript
export.py(372): run
/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py(28): decorate_context
export.py(430): main
export.py(435): <module>
Serialized File "code/__torch__/torch/nn/modules/conv.py", line 12
    bias = self.bias
    weight = self.weight
    input0 = torch._convolution(input, weight, bias, [1, 1], [1, 1], [1, 1], False, [0, 0], 1, False, False, True, True)
             ~~~~~~~~~~~~~~~~~~ <--- HERE
    return input0
** (tester:29): CRITICAL **: 05:11:55.610: Failed to load model

It looks like this is what happens when the yolov5 PyTorch version and the nnstreamer runtime PyTorch version are different. But I heard that PyTorch is backward compatible. The yolov5 PyTorch version is 1.10.1, and from the description of nnstreamer-pytorch on GitHub, it looks like it is built against PyTorch 1.3.1. At this point I am stuck.

If you have any advice, please let me know.

Additionally, for the tensor_filter element, how can I get the values to use for the "outputname" and "inputname" properties from a PyTorch model? When I convert a PyTorch model to an ONNX model, there are arguments where I can specify these names, but I cannot find such arguments when exporting the model as a normal .pt or torchscript file.

Regards.
The actual tensor you've fed into tensor_filter is uint8. You need to transform the uint8 stream into a float32 stream with tensor_transform before you feed it to tensor_filter. nnstreamer-pytorch 2.1.0 is tested with PyTorch 1.1, 1.3, and 1.6; but this is not related to version compatibility.
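For example, that part of the pipeline could look like the sketch below. Only the typecast is required per the above; the div:255.0 normalization is my own assumption that the exported yolov5 model expects 0-1 input, so please verify it against your model:

... video/x-raw,format=RGB,width=640,height=640 ! tensor_converter ! \
    tensor_transform mode=arithmetic option=typecast:float32,div:255.0 ! \
    tensor_filter framework=pytorch model=../../tf_model/yolov5s.torchscript.pt \
    input=3:640:640:1 inputname=x inputtype=float32 \
    output=1:25200:85 outputname=416 outputtype=float32 ! \
    tensor_sink name=tensor_sink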
Jihoon Lee
Hello. Apart from the pipeline description, backward compatibility should mean that PyTorch 1.10.1 can load a model built with PyTorch 1.3.1, not vice versa (e.g., Excel 2010 can open a file saved with Excel 2003, but not the other way around).
If the problem persists, you might want to check whether libtorch 1.3.1 is capable of loading yolov5s.torchscript.pt, or use a suitable libtorch version.
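A minimal load check could look like the sketch below; build and link it against the libtorch version you want to verify (e.g., 1.3.1). The file name here is just taken from your pipeline:

// check_load.cpp - minimal TorchScript load test (sketch)
#include <torch/script.h>
#include <iostream>

int main (int argc, char *argv[])
{
  const char *path = (argc > 1) ? argv[1] : "yolov5s.torchscript.pt";
  try {
    // torch::jit::load throws if the serialized model cannot be parsed
    torch::jit::script::Module module = torch::jit::load (path);
    std::cout << "Loaded " << path << " successfully." << std::endl;
  } catch (const c10::Error &e) {
    std::cerr << "Failed to load " << path << ": " << e.what () << std::endl;
    return 1;
  }
  return 0;
}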
Bests, Ji
Ah... as Jihoon said, if you have a model for PyTorch 1.10, you need to install and run PyTorch 1.10. Rebuild nnstreamer-pytorch with PyTorch 1.10 if your nnstreamer-pytorch installation doesn't work with PyTorch 1.10.

In other words:

Try 1: keep nnstreamer and nnstreamer-pytorch as they are, install PyTorch 1.10, and let nnstreamer run with PyTorch 1.10 (you need to make sure which version is actually used at run time): easiest, but not guaranteed to work (if PyTorch has changed its header files, this will fail).

Try 2: keep nnstreamer, rebuild nnstreamer-pytorch against PyTorch 1.10 (this should work; requires a bit of C building tricks).

Try 3: rebuild the whole nnstreamer with PyTorch 1.10 (this should work; doesn't require C building tricks).
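(As a quick sanity check for Try 1, you can see which libtorch the pytorch filter sub-plugin actually links to at run time, e.g. with ldd. The path below is a typical install location and may differ on your image:)

ldd /usr/lib/nnstreamer/filters/libnnstreamer_filter_pytorch.so | grep -i torch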
이병헌
Thanks. I don't know the nnstreamer rebuilding procedure well, but I will try with the GitHub documentation. Regards.
It is a general C library, and nnstreamer uses the standard building procedure. You may refer to the nnstreamer documentation; keep in mind that the general Linux C library building procedure (meson + ninja) applies as well. You may also discuss building issues and questions in GitHub issues.
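For reference, the typical flow is something like the following; the meson option that enables the pytorch filter is defined in meson_options.txt, so please check the exact option name there:

git clone https://github.com/nnstreamer/nnstreamer.git
cd nnstreamer
meson build    # add the pytorch option from meson_options.txt here
ninja -C build
sudo ninja -C build install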