Re: NNstreamer How to register custom converter and decoder
Simply returning ENOENT from getModelInfo() will make the GStreamer caps negotiation fail, and the pipeline won't start at all.
Instead, enforcing flexible tensors (other/tensors,format=flexible) for both the input and output of tensor_filter, and supporting "setInputDimension" (v0) or "getModelInfo" (v1) with "(ops & SET_INPUT_INFO)" enabled in the tensor_filter subplugin, should work (a rough sketch of the callback follows the example below). Is this correct, Jaeyun?
E.g., some_src ! other/tensors,format=flexible ! tensor_filter something ! other/tensors,format=flexible ! some_sink
In general, such a pipeline will work even when the dimensions coming from some_src change, without any modification to the current nnstreamer.
However, it appears that we do not have appropriate test cases for this, e.g.:

videotestsrc (640x480) ! tensor_converter ! other/tensors,format=flexible ! join name=j ! tensor_filter (some filter) ! other/tensors,format=flexible ! filesink location=testfile
videotestsrc (200x200) ! tensor_converter ! other/tensors,format=flexible ! j.
# see if the output is consistent with what we've intended
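Back on the subplugin side, a minimal sketch of the getModelInfo callback mentioned above; the signature follows my reading of nnstreamer_plugin_api_filter.h (v1), and derive_out_info() is a hypothetical helper, so please verify against the header in your tree:

#include <errno.h>
#include <nnstreamer_plugin_api_filter.h>

/* Hypothetical helper: computes output dimensions from the given input. */
extern int derive_out_info (void *pdata, const GstTensorsInfo * in,
    GstTensorsInfo * out);

/* Sketch only: verify the signature against your nnstreamer version. */
static int
my_getModelInfo (const GstTensorFilterFramework * self,
    const GstTensorFilterProperties * prop, void *private_data,
    model_info_ops ops, GstTensorsInfo * in_info, GstTensorsInfo * out_info)
{
  if (ops == SET_INPUT_INFO) {
    /* The pipeline hands us the input dimensions at runtime;
     * fill in the matching output dimensions here. */
    return derive_out_info (private_data, in_info, out_info);
  }
  /* GET_IN_OUT_INFO: static dimensions are not known up front. */
  return -ENOENT;
}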
Cheers, MyungJoo

PS. I'm adding Jaeyun as he did the refactoring for supporting flexible tensors in tensor_filter.

--------- Original Message ---------
Sender : Sascha Brueck <sascha.brueck@...> Date : 2022-10-08 00:52 (GMT+9) Title : RE: RE: NNstreamer How to register custom converter and decoder
Dear MyungJoo,
Another thought about tensor_filter:
How can we force the output to be a flexible tensor in this framework? Whether it is flexible or not is decided by the next downstream plugin, not inside the tensor_filter, because its capabilities are set to always allow both static and flexible tensors. This is related to my last message, where I asked whether always returning ENOENT from getModelInfo would make the negotiation of a static tensor impossible and force a flexible tensor as output.
A different point: is the following the correct approach for producing a flexible output?
The way to minimize reallocations would then be to keep a buffer as a member of the C++ object, allocated at initialization with enough capacity to hold a typical number of faces per frame. In the flexible header and in GstTensorMemory.size, I set the size required for the actual number of faces; only if that size exceeds the initial capacity do I have to reallocate.
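In code, that grow-only buffer idea would look roughly like this (names are illustrative, not from nnstreamer):

#include <cstdlib>

struct GrowBuffer {
  void *data = nullptr;
  size_t capacity = 0;  /* bytes actually allocated */
  size_t size = 0;      /* bytes valid for the current frame */
};

/* Ensure the buffer can hold `needed` bytes; reallocate only when it cannot. */
static bool
ensure_capacity (GrowBuffer &b, size_t needed)
{
  if (needed > b.capacity) {
    void *p = std::realloc (b.data, needed);
    if (p == nullptr)
      return false;
    b.data = p;
    b.capacity = needed;
  }
  b.size = needed;  /* this is what goes into GstTensorMemory.size */
  return true;
}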
My plan is to implement a filter that does the bounding boxes, including NMS, for a face detector. I know similar things are implemented as decoders, but in my opinion that is not an optimal choice because it does not allow chaining. It would be better to have a filter do that part and a decoder take the filter's output, so that the two operations that now happen in the decoder are split between a filter and a simpler decoder, and the filter can also be used between two networks. Is something like this already implemented, considering it would serve as an input to the tensor_crop function that already exists?
Cheers, Sascha
Dear Sascha,
It'd be really great for you to upstream the OpenCV tensor_transform extension! I'd recommend making that feature optional for OpenCV-less environments; i.e., isolate the OpenCV parts with #if/#endif and turn them on/off via build configuration, preferably with a "feature" (auto/enabled/disabled) meson option.
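For example, roughly like this (the macro name is illustrative; the actual define would come from the meson option):

#if defined(ENABLE_OPENCV)  /* illustrative macro, set by the meson feature option */
#include <opencv2/imgproc.hpp>
/* OpenCV-accelerated transform path */
#else
/* plain C/C++ fallback, or report that OpenCV support is disabled */
#endif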
As for the serial multi-model inference case you've mentioned, it's fine to have a custom filter that prepares the 2nd model's input from the 1st model's output. Actually, that was the initial purpose of custom filters when we designed nnstreamer for autonomous driving vehicles and robots.
If the output of that custom filter is a "batch tensor" of #BATCH:a:b:c and the second model is supposed to run with a:b:c, you actually don't need to alter anything: you can simply slap in tensor_aggregator to generate #BATCH tensors of a:b:c from one tensor of #BATCH:a:b:c. If you choose this approach, the limit on the number of tensors becomes irrelevant as well.
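E.g., something along these lines, if I read tensor_aggregator's properties correctly (this assumes the batch occupies the last, 4th dimension so that frames-dim=3, and BATCH stands for the actual batch count):

... ! tensor_aggregator frames-in=BATCH frames-out=1 frames-dim=3 ! tensor_filter (2nd model, a:b:c input) ! ...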
Cheers, MyungJoo.
--------- Original Message ---------
Sender : Sascha Brueck <sascha.brueck@...> Date : 2022-10-06 00:34 (GMT+9) Title : RE: NNstreamer How to register custom converter and decoder
Dear MyungJoo,
Thanks for the answer. We will be happy to upstream extensions we make to nnstreamer, also to gain more visibility for our company.
It is clear to me now how to extend the converter, decoder, and filter elements. I would like to have a faster preprocessing element that uses OpenCV's blobFromImage (the current transpose is prohibitively expensive). In my opinion, tensor_filter is not a good place for it, because its data structures are geared towards running models; the converter and decoder are not suited either, because they convert from another source to tensors or from tensors to another source. So I extended tensor_transform with a new option and added C++ functions with a C interface that are included in tensor_transform. BTW, it is not necessary to make a copy when using an OpenCV Mat, because OpenCV can attach its header to an already existing memory buffer.
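For the record, that zero-copy wrapping looks like this in OpenCV (types and sizes are illustrative):

#include <opencv2/core.hpp>

/* Wrap an existing RGB buffer without copying: the Mat header does not
 * own the memory, so `data` must stay alive as long as the Mat is used. */
static cv::Mat
wrap_rgb (unsigned char *data, int width, int height)
{
  return cv::Mat (height, width, CV_8UC3, data);
}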
I am thinking about how to implement a face detection and classification network. I do not think it is possible with the approach in the nnstreamer crop plugin posted in this issue (Not much information on tensor_crop · Issue #3750 · nnstreamer/nnstreamer (github.com)).

In my opinion, the way to go is to create another tensor_filter that sits between the face detection and the classification. This filter takes the output of the detector and creates a batch tensor for each frame; the batch tensor has a dynamic size because the batch size depends on the number of faces that have been detected in each frame. The batch tensor then serves as input to a tensor_filter that runs a classification network. Here another adaptation would be necessary because, at least in the TVM framework, the input tensor size has to be static: the TVM filter would have to be changed to run the inference in a loop, disaggregating the single images from the batch and feeding them to the inference routine one by one. I am considering implementing this in the coming days, but if you see an easier way to do it with the available plugins, it would be great if you could let me know.
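In pseudo-C++, the loop I have in mind would be roughly as follows (run_single_inference is a stand-in for whatever the TVM filter calls internally, not an existing nnstreamer function):

#include <cstddef>
#include <cstdint>

/* Hypothetical stand-in for the TVM filter's per-image inference call. */
extern int run_single_inference (const uint8_t * in, uint8_t * out);

/* Run a dynamic batch one image at a time, slicing the #BATCH:a:b:c input. */
static int
run_batched (const uint8_t * batch, size_t n_faces, size_t in_bytes_per_face,
    uint8_t * out, size_t out_bytes_per_face)
{
  for (size_t i = 0; i < n_faces; i++) {
    int ret = run_single_inference (batch + i * in_bytes_per_face,
        out + i * out_bytes_per_face);
    if (ret != 0)
      return ret;
  }
  return 0;
}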
Cheers, Sascha
Dear Sascha,
For "subplugins" of tensor_converter and tensor_decoder, I recommend to look at other subplugin codes: - https://github.com/nnstreamer/nnstreamer/tree/main/ext/nnstreamer/tensor_converter - https://github.com/nnstreamer/nnstreamer/tree/main/ext/nnstreamer/tensor_decoder
For subplugins, you may write your own subplugins in a new source repository and make it depend on the nnstreamer devel packages, or you may write yours in ext/nnstreamer/tensor_{converter,decoder}/ and build them along with nnstreamer. The latter is recommended if you want to share your code in nnstreamer (and keep it maintained upstream); the former is recommended if you want to keep your code within your organization without opening it.
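For instance, a decoder subplugin skeleton looks roughly like the existing ones under ext/nnstreamer/tensor_decoder; the struct fields and function names below follow my reading of nnstreamer_plugin_api_decoder.h, so check the header in your tree:

#include <nnstreamer_plugin_api_decoder.h>

/* Skeleton modeled on the existing decoder subplugins; verify the fields
 * against nnstreamer_plugin_api_decoder.h in your nnstreamer version. */
static GstTensorDecoderDef myDecoder = {
  .modename = (char *) "my_decoder",
  /* .init, .exit, .decode, .getOutCaps, ... callbacks go here */
};

/* Registered automatically when the shared library is loaded. */
void init_my_decoder (void) __attribute__ ((constructor));
void
init_my_decoder (void)
{
  nnstreamer_decoder_probe (&myDecoder);
}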
Also, I'd recommend writing your questions or suggestions at https://github.com/nnstreamer/nnstreamer/issues so that more developers can reply.
It'd also be great if you could share your nnstreamer pipelines or applications (staying abstract is ok) on github.com or somewhere public. To keep this project going in Samsung, I need more users and usage cases. :)
Cheers, MyungJoo
--------- Original Message ---------
Sender : Sascha Brueck <sascha.brueck@...> Date : 2022-10-05 01:31 (GMT+9) Title : NNstreamer How to register custom converter and decoder
Dear MyungJoo,
Thanks for providing the nnstreamer library. I am trying to extend the pipeline by writing my own tensor_converter (nnstreamer/gsttensor_converter.md at main · nnstreamer/nnstreamer (github.com)) and tensor_decoder (nnstreamer/gsttensor_decoder.md at main · nnstreamer/nnstreamer (github.com)) and providing my code via callback as described in those links.
I understand that I can write a C++ executable like in the example (nnstreamer-example/nnstreamer_example_object_detection_tflite_2cam.cc at main · nnstreamer/nnstreamer-example (github.com)): include the header for the callback, write my code with the given signature, register the code, run a pipeline inside the main function of my executable, and then deregister. I have not tried this yet, but this is my understanding of the callback example.

However, what I want to do is to add my subplugin to nnstreamer and then call the pipeline via gst-launch, i.e., write on the command line something like:

gst-launch videotestsrc ! tensor_decoder mode=custom-code option1=tdec

I do not understand how to do that. Is it possible? If yes, where do I register and unregister my function, and into which folder do I have to put it so that it gets compiled?
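In code, my understanding of the callback route is roughly the following (untested; the function names are taken from tensor_decoder_custom.h as I read it, and the header path may differ):

#include <gst/gst.h>
#include <tensor_decoder_custom.h>

/* Custom decode callback: convert the incoming tensors in `input`
 * into the output GstBuffer. */
static int
tdec_cb (const GstTensorMemory * input, const GstTensorsConfig * config,
    void *data, GstBuffer * out_buf)
{
  /* ... fill out_buf from input ... */
  return 0;
}

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);
  nnstreamer_decoder_custom_register ("tdec", tdec_cb, NULL);
  /* ... build and run a pipeline using tensor_decoder mode=custom-code option1=tdec ... */
  nnstreamer_decoder_custom_unregister ("tdec");
  return 0;
}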
Best regards Sascha