
I am working on an Android application where I want to integrate an object detection model with Augmented Reality (AR). My current setup is working fine with a normal CameraX implementation. However, I am struggling to adapt it to AR.

Here’s what I have:

•   Object Detection Model: A TensorFlow Lite (TFLite) model that provides bounding box values for each input frame (in pixel coordinates).
•   Output: The bounding box coordinates (top, bottom, left, right) and a confidence score for detected objects in the camera frame.
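For context, here is a minimal sketch of how I turn the model output into pixel-space boxes. The array layout and names (`boxes` as normalized `[top, left, bottom, right]` per detection, plus a parallel `scores` array) are assumptions based on common TFLite detection models, not necessarily my exact code:

```kotlin
// A detection box in pixel coordinates of the camera frame.
data class BoundingBox(
    val left: Float, val top: Float,
    val right: Float, val bottom: Float,
    val score: Float
)

// Convert normalized [top, left, bottom, right] outputs (0..1) into
// pixel coordinates, dropping detections below a confidence threshold.
fun parseDetections(
    boxes: Array<FloatArray>,   // [numDetections][4], normalized
    scores: FloatArray,         // [numDetections]
    frameWidth: Int,
    frameHeight: Int,
    minScore: Float = 0.5f
): List<BoundingBox> =
    boxes.indices.filter { scores[it] >= minScore }.map { i ->
        val (top, left, bottom, right) = boxes[i]
        BoundingBox(
            left = left * frameWidth,
            top = top * frameHeight,
            right = right * frameWidth,
            bottom = bottom * frameHeight,
            score = scores[i]
        )
    }
```

These pixel-space boxes are what I draw on the CameraX preview overlay today.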

Problem:

Since the model output is frame-based (2D), I am unable to map the bounding boxes to real-world coordinates in AR. I need the bounding boxes to appear correctly positioned in the AR scene, aligned with the detected objects in the real world.

What I Have Tried:

•   Displaying bounding boxes on a CameraX preview — works perfectly.
•   Using Sceneform and ARCore to overlay 3D objects, but I can’t figure out how to map the bounding box from the model to AR coordinates.
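The anchor-placement idea I had in mind looks roughly like the pseudocode below (over ARCore/Sceneform types). It assumes the box center has already been converted from the TFLite image space into the AR view's screen space, which is exactly the part I'm unsure how to do correctly:

```
// Pseudocode: take the bounding-box center in screen coordinates, ask
// ARCore for a real-world hit, and anchor at the resulting pose.
fun placeAnchorAtBoxCenter(frame: Frame, box: RectF): Anchor? {
    val hits = frame.hitTest(box.centerX(), box.centerY())
    val hit = hits.firstOrNull { result ->
        val t = result.trackable
        (t is Plane && t.isPoseInPolygon(result.hitPose)) || t is Point
    }
    return hit?.createAnchor()
}
```

With this approach the anchors either fail to appear or land in the wrong place, which makes me suspect a coordinate-space mismatch between the model's input frame and the AR view.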

My Questions:

1.  How can I map the 2D bounding box (frame coordinates) to 3D world coordinates in AR?
2.  Are there any libraries or tools that simplify this process?
3.  Should I use anchors or some other ARCore functionality to achieve this?
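To make question 1 concrete: my current understanding is that the mapping involves unprojecting the box center through the camera intrinsics into a viewing ray in camera space, then intersecting that ray with scene geometry (e.g., a detected plane) to recover depth. A minimal sketch of the unprojection step, using a pinhole model with placeholder intrinsics (on ARCore I assume these would come from the camera's intrinsics, but I'm not certain that is the right source):

```kotlin
import kotlin.math.abs
import kotlin.math.sqrt

// A normalized direction in camera space (OpenGL convention:
// X right, Y up, camera looking down -Z).
data class Ray(val dx: Float, val dy: Float, val dz: Float)

// Unproject a pixel (u, v) to a ray direction using pinhole
// intrinsics (fx, fy = focal lengths, cx, cy = principal point).
// The image v axis grows downward, hence the sign flip on Y.
fun pixelToCameraRay(
    u: Float, v: Float,
    fx: Float, fy: Float,
    cx: Float, cy: Float
): Ray {
    val x = (u - cx) / fx
    val y = -(v - cy) / fy
    val z = -1.0f
    val len = sqrt(x * x + y * y + z * z)
    return Ray(x / len, y / len, z / len)
}
```

If this is the right idea, the remaining piece I'm missing is how to choose the depth along the ray — which is why I suspect hit tests or anchors are involved.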

Additional Information:

•   Tools: ARCore, TensorFlow Lite, CameraX
•   Goal: Accurately display the detected objects' bounding boxes in the AR world, so they appear at the correct positions relative to the real-world objects.

Any guidance, code snippets, or examples would be greatly appreciated!

Tags: How to Integrate Object Detection Model with Augmented Reality (AR) in Android · Stack Overflow