nnstreamer & openVINO
> Hi, I wonder if it is possible to run openVINO support (an NCS2 USB stick) on nnstreamer?
If you have installed the prebuilt nnstreamer-openvino binary from the nnstreamer PPA and installed openvino from somewhere else, our openvino subplugin (libnnstreamer_filter_openvino.so) may contain openvino symbols that are not compatible with the openvino binary you have installed.
Suggestion:
Approach 1: rebuild nnstreamer-openvino with the openvino you have installed (you will need to add the "devel" package).
Approach 2: uninstall your openvino binary and install openvino from our PPA.
Am I correct that I should uninstall the package installed from the repository and use the source from GitHub? Here I got stuck: cmake didn't find an openVINO installation for me. I thought it was caused by a different version of openVINO.
Yes, nnstreamer-openvino in the Ubuntu PPA is built against openvino 2019 R3, and to use this instance of nnstreamer-openvino, you are recommended to uninstall your openvino and install openvino from the nnstreamer PPA.
If cmake cannot find an openvino installation (probably you are trying "Approach 1"?), you may install openvino.pc (pkgconfig) to help the build tools (cmake, meson/ninja) find it. For example, we have added openvino.pc for debian and rpm packaging at https://git.tizen.org/cgit/platform/upstream/dldt/ (the /packaging/openvino.pc.in file is processed and packaged). You may edit the .pc.in file and install it directly to /usr/lib/pkgconfig/.
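For illustration, a minimal openvino.pc sketch; every path, the version, and the library name below are assumptions that must be adjusted to your actual openvino installation (the authoritative template is the packaging/openvino.pc.in file in the dldt repository linked above):

    # hypothetical openvino.pc -- adjust prefix, version, and library names
    # to match your installation before copying it to /usr/lib/pkgconfig/
    prefix=/usr
    libdir=${prefix}/lib
    includedir=${prefix}/include

    Name: openvino
    Description: OpenVINO (Deep Learning Deployment Toolkit) inference engine
    Version: 2019.3
    Libs: -L${libdir} -linference_engine
    Cflags: -I${includedir}

With such a file in place, pkg-config --cflags --libs openvino (and therefore cmake or meson) should be able to locate the library.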
> so in other words shouldn't I use installation packages other than those from the nnstreamer ppa (sudo add-apt-repository ppa:nnstreamer/ppa)?
You may still use packages from other PPAs or locally built libraries; you may need to rebuild the nnstreamer subplugins (/ext/nnstreamer/*) accordingly. This is often due to changes in the header interfaces or the behavior of the libraries' APIs (e.g., openvino), which implies that they do not keep binary backward compatibility.
You need to keep using all related packages from nnstreamer/ppa ONLY IF you do not want to (potentially) rebuild anything.
> I would like to ask more generally - I'm self-learning programming and I work in C. Why is there just "example_image_classification_tflite" in C among the examples and not "object detection"? Is it because C is not primarily object-oriented and C++ is? When I still had an Amiga (around 1996 - 2000), I studied BASIC and I was convinced that even though it is not object-oriented, it is possible to cheat and use objects :)
1. Not providing an "object detection" example in C has NOTHING to do with C being non-object-oriented. You can still write an object detection example in C; you just need to slap in an object detection neural network model instead of an image classification model, along with proper post-processors of your choice. The two are totally unrelated.
2. There are C examples of object detection. Look at https://github.com/nnstreamer/nnstreamer-example (Tizen.native/ObjectDetection).
Besides, even in the C++ examples of the /native directory, if you look closely, you can see that being object-oriented or not has no effect on object detection itself.
3. Although the characteristics of a programming language have totally NOTHING to do with neural network models (object detection), if you want object-oriented-like behavior, you may look at GLib and GObject (not glibc); a minimal sketch follows below.
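For illustration only (not nnstreamer code), a minimal sketch of GObject-style object orientation in plain C; the MyDetector type and its threshold field are made-up names:

    /* build (assumption): gcc gobject_demo.c $(pkg-config --cflags --libs gobject-2.0) */
    #include <glib-object.h>

    #define MY_TYPE_DETECTOR (my_detector_get_type ())
    G_DECLARE_FINAL_TYPE (MyDetector, my_detector, MY, DETECTOR, GObject)

    struct _MyDetector
    {
      GObject parent_instance;   /* "inherits" from GObject */
      gdouble threshold;         /* per-instance data */
    };

    G_DEFINE_TYPE (MyDetector, my_detector, G_TYPE_OBJECT)

    static void
    my_detector_class_init (MyDetectorClass *klass)
    {
      /* virtual methods, properties, and signals would be registered here */
    }

    static void
    my_detector_init (MyDetector *self)
    {
      self->threshold = 0.5;     /* constructor-like per-instance initialization */
    }

    int
    main (void)
    {
      MyDetector *d = g_object_new (MY_TYPE_DETECTOR, NULL);  /* "new" */
      g_object_unref (d);                                     /* "delete" (reference counted) */
      return 0;
    }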
Cheers,
MyungJoo
DetectedObject is a simple struct; thus you can keep using it in C.
std::vector<DetectedObject> is a "dynamic array" of DetectedObject.
You may simply use
DetectedObject *objects
instead of
std::vector<DetectedObject> detected_objects
and do memory management yourself.
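For illustration, a minimal growable-array sketch in plain C that plays the role of std::vector<DetectedObject>; the struct fields below are assumptions and should be matched to the DetectedObject definition in the example you are porting:

    /* Growable array of DetectedObject in plain C (replacing std::vector).
       The fields of DetectedObject here are assumptions -- use the real ones. */
    #include <stdlib.h>

    typedef struct
    {
      int x, y, width, height;   /* bounding box (assumed fields) */
      int class_id;
      float prob;
    } DetectedObject;

    typedef struct
    {
      DetectedObject *data;
      size_t len;                /* number of valid entries */
      size_t cap;                /* number of allocated entries */
    } DetectedObjectArray;       /* {NULL, 0, 0} is a valid empty array */

    static int
    detected_object_array_append (DetectedObjectArray *arr, DetectedObject obj)
    {
      if (arr->len == arr->cap) {
        size_t new_cap = arr->cap ? arr->cap * 2 : 16;
        DetectedObject *p = realloc (arr->data, new_cap * sizeof (*p));
        if (p == NULL)
          return -1;             /* out of memory; old buffer is still valid */
        arr->data = p;
        arr->cap = new_cap;
      }
      arr->data[arr->len++] = obj;  /* the push_back() equivalent */
      return 0;
    }

    static void
    detected_object_array_clear (DetectedObjectArray *arr)
    {
      free (arr->data);
      arr->data = NULL;
      arr->len = arr->cap = 0;
    }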
--------- Original Message ---------
Sender : Olda Šmíd <olda476@...>
Date : 2021-03-04 02:18 (GMT+9)
Title : Re: Re: [NNStreamer Technical Discuss] nnstreamer & openVINO
I would also like to ask if it is possible to get an example of detection in C?
If I understand correctly, the "error" between C and C++ is:
std::vector<DetectedObject> detected_objects;
The rest I would generally understand how to convert :) Thank you very much for your help
Olda
On Tue, 2 Mar 2021 at 17:29, Olda Šmíd <olda476@...> wrote:
Thanks again for the advice. Now nnstreamer reports another error - no myriad plugin is initiated. So I had installed the openvino plugin badly, and that's no longer a problem :)
I would like to ask for an example in C. Thank you for the link; I don't really understand it, but hopefully it will be OK. I'm thinking of trying to rework the image classification example (https://github.com/nnstreamer/nnstreamer-example/tree/main/native/example_image_classification_tflite). After all, C++ goes beyond me and, as you said, there is no need to use C++ just for object-oriented behavior.
Olda
You're welcome.
It'd be great if you could directly send nnstreamer commits or share your examples via GitHub. You are also welcome to discuss matters in GitHub Issues.
If you are going to use another library written in C++ or want to keep object-oriented principles for potentially bigger projects, C++ is usually a better alternative. That's why a few "sub-plugins" of nnstreamer are written in C++ instead of C; e.g., /ext/nnstreamer/*/*.cc
I used to be a Linux kernel maintainer (which usually means a C advocate); however, in many cases the difference in run-time efficiency between C and C++ is either negligible or outweighed by other software architectural decisions.
The example programs are just examples; they cannot cover all possible use cases. Thus, in theory, as long as there is at least one example application for a given programming language, that's enough. :)
Anyway, don't worry about your English. It's not my native language, either.
Cheers,
MyungJoo
--------- Original Message ---------
Sender : Olda Šmíd <Olda476@...>
Date : 2021-03-04 16:23 (GMT+9)
Title : Re: [NNStreamer Technical Discuss] nnstreamer & openVINO
Thank you very much. That is a great help.
I just can't understand why it's better to use C++ instead of C. As I read your email, I just need to rewrite a few lines and everything will be OK. I always thought that, in terms of efficiency, it is best to write a program in the language the kernel is written in. When a program is in Python, I run it through another program (python3), which parses my program into C, and only then is the action performed. I still don't quite understand C++ :)
At the same time, there are simply not many C programs among the examples. Even GStreamer has its basic examples in C, but it is already widespread in C++ or other languages.
I apologize for my English - it's not my native language :)
> I would like to join GitHub, I just don't know if I should publish on my account or join yours :)
In GitHub, you are usually supposed to "fork" the upstream (github.com/nnstreamer/nnstreamer) into your personal repo (github.com/YOURID/nnstreamer) and do the development in your own personal repo. (I also follow this development model.)
You are welcome to discuss in the upstream GitHub issues (github.com/nnstreamer/nnstreamer/issues) and "send" your code commits from your personal repo to the upstream when they are ready (a so-called "pull request").
So... you are supposed to publish in your account (personal repo) and participate at the upstream (github.com/nnstreamer). You are not required to join any group for 1) forking nnstreamer, 2) doing development in your fork, 3) sending "pull requests" to the upstream, 4) discussing in nnstreamer GitHub issues, 5) reviewing others' code commits, and so on.
> It's very interesting - so it's relatively useless to convert an example to C - I just need to convert parse_launch to a dynamic pipeline -> use the syntax I know from C and connect elements via gst_bin_add_many and gst_element_link_many. As I wrote, I would like to use nnstreamer on a Raspberry, where the performance is insufficient, so I need every possible code acceleration. My friend and I have a video player on my Raspberry which displays any number of windows with an RTSP stream. The RPi handles a maximum of 4 with a dynamic pipeline, but a maximum of two with parse_launch.
I don't think the runtime performance of pipelines would differ between a GST-API implementation (what you are doing) and a parse-launched implementation. If you look at the gst-parse code, it actually calls the GST API to generate pipelines. Once the pipeline is parsed and started, there should be no differences. The only overhead of parse-launch is parsing the string, which affects the start-up time, and the parser implementation should be efficient enough for an RPi3 to execute within a few milliseconds. If there are performance problems with parse-launch, I'd investigate something else (the pipeline topology itself).
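For illustration only (arbitrary elements, not the nnstreamer example pipeline), a minimal C sketch showing that gst_parse_launch() and explicit GST-API construction build equivalent pipelines:

    /* Two ways to build the same (arbitrary) three-element pipeline.
       build (assumption): gcc demo.c $(pkg-config --cflags --libs gstreamer-1.0) */
    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      gst_init (&argc, &argv);

      /* 1) parse-launch: the string is parsed once into regular GStreamer objects */
      GError *err = NULL;
      GstElement *p1 =
          gst_parse_launch ("videotestsrc ! videoconvert ! autovideosink", &err);
      if (p1 == NULL) {
        g_printerr ("parse error: %s\n", err->message);
        g_clear_error (&err);
        return 1;
      }

      /* 2) explicit GST-API construction of the equivalent pipeline */
      GstElement *p2 = gst_pipeline_new ("pipeline");
      GstElement *src = gst_element_factory_make ("videotestsrc", NULL);
      GstElement *conv = gst_element_factory_make ("videoconvert", NULL);
      GstElement *sink = gst_element_factory_make ("autovideosink", NULL);
      gst_bin_add_many (GST_BIN (p2), src, conv, sink, NULL);
      gst_element_link_many (src, conv, sink, NULL);

      /* after gst_element_set_state (..., GST_STATE_PLAYING), both behave the same */
      gst_object_unref (p1);
      gst_object_unref (p2);
      return 0;
    }

Once either pipeline is set to PLAYING, the running elements are identical; only the construction path differs.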
Cheers, MyungJoo
- The reason why you need two videoconverts in your pipeline is that the format required by cairooverlay + ximagesink (BGRx, BGRA, RGB16) is different from the format required by the neural network (RGB); see the sketch after this list.
- Each imagesink element has different characteristics. You should choose an image sink element according to your graphics/UI framework (or try autovideosink). For more information on these GStreamer base/good/bad plugins, you will need to talk with the GStreamer community; I don't think NNStreamer committers have a good understanding of GStreamer's original plugins.
- If you are replacing X with Wayland: yes, I suppose so, if your main overheads are coming from the UI stack, which is why Tizen switched from X to Wayland. But you should be careful: you need to first analyze whether that is the main overhead (the bottleneck) and whether your application's (functional) requirements can be met by Wayland.
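For illustration, a sketch of the display branch with its format pinned explicitly; this is untested, and the BGRx choice is an assumption based on the formats listed above:

    "t_raw. ! queue ! videoconvert ! video/x-raw,format=BGRx ! cairooverlay name=tensor_res ! ximagesink name=img_tensor"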
Cheers,
MyungJoo
--------- Original Message ---------
Sender : Olda Šmíd <Olda476@...>
Date : 2021-03-05 03:18 (GMT+9)
Title : Re: [NNStreamer Technical Discuss] nnstreamer & openVINO
Now I wonder where I made a mistake in the code of the RTSP player, given that there was an increase in performance :) What you are writing really makes sense. I just didn't understand why everyone uses parse_launch and doesn't bother with a dynamic pipeline.
I use this pipeline:
"rtspsrc location = rtsp: //192.168.1.111: 8554 / test latency = 0! decodebin! videoconvert! videoscale!"
"video / x-raw, width =% d, height =% d, format = RGB! tee name = t_raw"
"t_raw.! queue! videoconvert! cairooverlay name = tensor_res! ximagesink name = img_tensor"
"t_raw.! queue leaky = 2 max-size-buffers = 2! videoscale! video / x-raw, width =% d, height =% d! tensor_converter!"
"tensor_transform mode = arithmetic option = typecast: float32, add: -127.5, div: 127.5!"
"tensor_filter framework = tensorflow-lite model =% s!"
"tensor_sink name = tensor_sink"
- I think I'm stupidly using videoconvert twice. I get H.264 video from the camera; I understand that I have to convert it to a raw format, but I don't understand why I convert the format twice via videoconvert/videoscale.
- I find that glimagesink works very well on the Raspberry, but ximagesink is used in the example, so I left it.
- Do you think Wayland would help with speed on the RPi? It's very imperfect software, but it seemed very fast to me.
On my laptop the program runs fast, but on the Raspberry (Ubuntu Server) everything is slow. I get a speed of about one frame per second. It occurs to me that it is probably not reasonable to use TensorFlow Lite models. On the other hand, the question is whether other models would help me :)