
I am working with a DART-MX8M-PLUS and I want to run a TF-lite model on the VX Delegate. The model runs fine, but I encountered an issue that is quite peculiar.

I am interfacing via Python with a proprietary program that has time constraints on its run cycle and initialization functions (i.e., they can't take too long). The issue is that the model I'm running on the VX Delegate requires about 5 s to load the first time, so the initialization of the system hits the timeout.

My question is: is there a way to preload the model on the delegate with one script, and run the inference with another?

I didn't find any related documentation, but I am curious if there is a workaround to the problem.
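For context, the pattern I have in mind looks roughly like the sketch below: kick off the slow delegate initialization in a background thread during setup so the init call returns immediately, then block only when the model is first needed. All names here are hypothetical, and `slow_load` is just a stand-in for building the TFLite interpreter with the VX delegate and running one dummy `invoke()` to trigger graph compilation:

```python
import threading
import time

class AsyncModelLoader:
    """Start loading a slow-to-initialize model in a background thread so the
    caller's init function can return right away."""

    def __init__(self, load_fn):
        self._model = None
        self._ready = threading.Event()
        thread = threading.Thread(target=self._load, args=(load_fn,), daemon=True)
        thread.start()

    def _load(self, load_fn):
        # In real use, load_fn would create the TFLite Interpreter with the
        # VX delegate and run one warm-up invoke() to compile the graph.
        self._model = load_fn()
        self._ready.set()

    def get(self, timeout=None):
        """Block until the model is ready (or raise on timeout)."""
        if not self._ready.wait(timeout):
            raise TimeoutError("model is still loading")
        return self._model

def slow_load():
    # Stand-in for the ~5 s delegate graph compilation.
    time.sleep(0.2)
    return "model"

loader = AsyncModelLoader(slow_load)  # returns immediately during init
model = loader.get(timeout=5)         # first inference call waits here
```

This keeps everything in one process, so the compiled graph stays resident; whether the delegate state could instead be shared across two separate scripts is exactly what I'm unsure about.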
