I've been using Gemini to summarize multi-page PDFs with a chunking technique, but because the parallel processing is managed in my own code, it's a bit slow. I believe an async API would solve my problem, but I couldn't find one in the documentation.
asked Mar 12 at 9:58 by Abhishek Panda
- Hi @Abhishek Panda, do link1 and link2 help you? – PUTHINEEDI RAMOJI RAO Commented Mar 12 at 12:23
- Hello @PUTHINEEDIRAMOJIRAO, those two links use Python's async features, which is what I'm doing for now. I wanted to know whether there's a native Gemini async API, like OpenAI or Anthropic has, where the workload runs on Gemini's servers instead of on-premise. Thanks for the help, though. – Abhishek Panda Commented Mar 17 at 8:58
1 Answer
Yes, Gemini's GenerativeModel has a native async API: generate_content_async. You can easily use it within an async framework such as Python's asyncio.
This blog post from Paul Balm goes into detail, with sample code, on how to prompt Gemini asynchronously using the native API.
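A minimal sketch of the fan-out pattern this enables: one coroutine per PDF chunk, gathered concurrently with asyncio. The `summarize_chunk` stub below is a hypothetical stand-in so the sketch runs on its own; in real use its body would await `model.generate_content_async(...)` on a `GenerativeModel` instance (prompt wording and model name here are assumptions, not from the thread).

```python
import asyncio

async def summarize_chunk(chunk: str) -> str:
    # Stand-in for the real call. With the Vertex AI SDK this would be roughly:
    #   from vertexai.generative_models import GenerativeModel
    #   model = GenerativeModel("gemini-1.5-flash")  # model name is an assumption
    #   response = await model.generate_content_async(f"Summarize: {chunk}")
    #   return response.text
    await asyncio.sleep(0)  # simulate the awaited network round trip
    return f"summary of {len(chunk)} chars"

async def summarize_all(chunks: list[str]) -> list[str]:
    # Fan out one request per chunk; asyncio.gather awaits them concurrently,
    # so total latency is roughly that of the slowest single call rather than
    # the sum of all calls.
    return await asyncio.gather(*(summarize_chunk(c) for c in chunks))

chunks = ["page one text", "page two text", "page three text"]
summaries = asyncio.run(summarize_all(chunks))
```

Because `generate_content_async` returns a coroutine, it slots directly into `gather` with no thread pool or manual parallelism in your own code.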