[Customization] Mediapipe hand pose inference with other tflite models #558
Comments
I am not sure whether this Mediapipe API to get an image from a packet will work or not.
Hi @momo1986,
What is the source of your images? You have a model that can run inference on RGB bitmap images, and it seems you want to convert camera frame images into RGB bitmap images. Did I understand that correctly? Here is an idea you could try:
Hello, @eknight7. Thanks for your proposal; I will try it. You are correct, my TFLite model runs on the CPU. Maybe a new graph is needed. Thanks for your suggestion. I wish you all well. Regards!
Hello, @momo1986. Did you figure out this issue? I am facing the same problem.

Hello, @eknight7, I followed the steps you suggested, but it threw this exception:

native: E0727 09:14:42.380656 27956 graph.cc:407] ; Input Stream "input_video_cpu" for node with sorted index 1 does not have a corresponding output stream.

Is there anything else I need to do? Your help is much appreciated.
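For what it's worth, that error usually means some node in the graph consumes "input_video_cpu" but nothing produces it: the stream must either be declared as a graph-level input_stream or appear as the output_stream of another node. A minimal, hypothetical sketch of well-formed wiring in a MediaPipe graph config (the calculator shown is illustrative, not the actual graph from this thread):

```
# Declare the stream at the graph level so node inputs can bind to it.
input_stream: "input_video_cpu"

node {
  # Hypothetical node consuming the declared stream.
  calculator: "TfLiteConverterCalculator"
  input_stream: "IMAGE:input_video_cpu"
  output_stream: "TENSORS:image_tensor"
}
```

If "input_video_cpu" is produced by another node instead (e.g. a GPU-to-CPU transfer), that node's output_stream name must match exactly.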


Hello, dear Mediapipe team.
I want to run hand pose inference with the Mediapipe model and my own model.
I have my own TFLite model, which works on RGB bitmaps.
I tried to query the RGB bitmap from the input frame via the data packet.
My code is:
It crashes during "AndroidPacketGetter.getBitmapFromRgb(packet);"
Here is the log.
So, is it possible to get the frame image from the input data packet and run prediction with models that do not depend on Mediapipe's infrastructure?
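On converting packets outside Mediapipe's infrastructure: AndroidPacketGetter.getBitmapFromRgb expects the packet to hold a tightly packed RGB image (3 bytes per pixel), so a crash there can mean the packet actually carries RGBA data or a GPU buffer. As a rough illustration of the pixel conversion involved, here is a plain-Java sketch with no MediaPipe dependency (the class and method names are hypothetical, not MediaPipe API):

```java
import java.nio.ByteBuffer;

public class RgbToArgb {
    // Convert a tightly packed RGB byte buffer (3 bytes per pixel) into
    // ARGB_8888 int pixels, the layout Android's Bitmap.setPixels expects.
    public static int[] toArgb(ByteBuffer rgb, int width, int height) {
        int[] pixels = new int[width * height];
        for (int i = 0; i < pixels.length; i++) {
            int r = rgb.get(3 * i) & 0xFF;
            int g = rgb.get(3 * i + 1) & 0xFF;
            int b = rgb.get(3 * i + 2) & 0xFF;
            // Force opaque alpha, then pack channels into one ARGB int.
            pixels[i] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
        return pixels;
    }
}
```

If the incoming frames are RGBA (4 bytes per pixel), the stride in the loop would be 4 instead of 3, which is one common cause of out-of-bounds crashes in such conversions.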
Thanks & Regards!