How to Integrate Object Detection Model with Augmented Reality (AR) in Android? - Stack Overflow
I am working on an Android application where I want to integrate an object detection model with Augmented Reality (AR). My current setup is working fine with a normal CameraX implementation. However, I am struggling to adapt it to AR.
Here’s what I have:
• Object Detection Model: A TensorFlow Lite (TFLite) model that provides bounding box values for each input frame (in pixel coordinates).
• Output: The bounding box coordinates (top, bottom, left, right) and a confidence score for detected objects in the camera frame.
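To make the setup concrete, the per-frame detections can be represented like this (the field names are my own; the interpreter itself just fills raw float arrays):

```kotlin
// One detection from the TFLite model, in camera-frame pixel coordinates.
// Field names are mine; the model outputs raw float arrays.
data class Detection(
    val left: Float,   // x of left edge, in pixels
    val top: Float,    // y of top edge, in pixels
    val right: Float,  // x of right edge, in pixels
    val bottom: Float, // y of bottom edge, in pixels
    val score: Float   // confidence in [0, 1]
) {
    // Center of the box -- the point I would like to place in the AR scene.
    val centerX: Float get() = (left + right) / 2f
    val centerY: Float get() = (top + bottom) / 2f
}
```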
Problem:
Because the model output is 2D (pixel coordinates in a single camera frame), I cannot map the bounding boxes to real-world coordinates in AR. I need the bounding boxes to appear correctly positioned in the AR scene, aligned with the detected objects in the real world.
What I Have Tried:
• Displaying bounding boxes on a CameraX preview — works perfectly.
• Overlaying 3D objects with Sceneform and ARCore — but I can’t figure out how to map the model’s bounding boxes to AR coordinates.
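For context, the part that already works — drawing the boxes over the CameraX preview — is essentially this (simplified; class and property names are mine, and the boxes are assumed to be already scaled to view pixels):

```kotlin
import android.content.Context
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.RectF
import android.view.View

// Simplified version of my working CameraX overlay: boxes arrive in
// preview-view pixel coordinates, so drawing them is straightforward.
class BoxOverlay(context: Context) : View(context) {

    // Boxes in view-pixel coordinates, updated from the image-analysis thread.
    var boxes: List<RectF> = emptyList()
        set(value) {
            field = value
            postInvalidate() // schedule a redraw on the UI thread
        }

    private val paint = Paint().apply {
        style = Paint.Style.STROKE
        strokeWidth = 4f
        color = Color.GREEN
    }

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        for (box in boxes) canvas.drawRect(box, paint)
    }
}
```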
My Questions:
1. How can I map the 2D bounding box (frame coordinates) to 3D world coordinates in AR?
2. Are there any libraries or tools that simplify this process?
3. Should I use anchors or some other ARCore functionality to achieve this?
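To make question 3 concrete, this is roughly what I imagine — hit-testing the center of a detected box against tracked planes and pinning a node there with an anchor — but I am not sure it is the right approach, and in particular I don’t know how to get the box center from model-input pixels into view coordinates first:

```kotlin
import com.google.ar.core.Plane
import com.google.ar.sceneform.AnchorNode
import com.google.ar.sceneform.ArSceneView

// Sketch only, not working code. Assumes (centerX, centerY) are already in
// ArSceneView view pixels, while my model actually reports coordinates in
// its own input-image space -- that conversion is the part I'm missing.
fun anchorAtDetectionCenter(
    sceneView: ArSceneView,
    centerX: Float, // box center x, in view pixels (assumed converted)
    centerY: Float  // box center y, in view pixels (assumed converted)
): AnchorNode? {
    val frame = sceneView.arFrame ?: return null // no ARCore frame yet

    // Keep only hits that land inside a tracked plane's polygon.
    val hit = frame.hitTest(centerX, centerY).firstOrNull { result ->
        val plane = result.trackable as? Plane
        plane != null && plane.isPoseInPolygon(result.hitPose)
    } ?: return null

    // An anchor keeps the pose stable as ARCore refines its world model.
    val anchor = hit.createAnchor()
    return AnchorNode(anchor).also { it.setParent(sceneView.scene) }
}
```

If I read the ARCore docs correctly, `Frame.transformCoordinates2d` (e.g. from `Coordinates2d.IMAGE_PIXELS` to `Coordinates2d.VIEW`) might be the missing piece for converting the box center into view coordinates before the hit test, but I have not gotten this to work.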
Additional Information:
• Tools: ARCore, TensorFlow Lite, CameraX
• Goal: To accurately display the detected object’s bounding boxes in the AR world, so they appear at the correct position relative to the real-world object.
Any guidance, code snippets, or examples would be greatly appreciated!