In my view, the fix is not a longer timeout period, but simply support for streaming responses.
Coda’s AI implementation is relatively closed at the moment, so there’s no way to tune the underlying parameters or swap in a different LLM, as I mentioned earlier in this article.
The only remedy (today) is to build your own implementation in a Pack or use the OpenAI Pack.
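To make that concrete, here is a minimal sketch of what a custom Pack formula might look like, using the Coda Packs SDK and OpenAI’s chat completions endpoint. The formula name (`AskLLM`), the model choice, and the single-prompt shape are all illustrative assumptions, not part of Coda’s or OpenAI’s Packs; the point is simply that a Pack gives you direct control over the request you send.

```typescript
import * as coda from "@codahq/packs-sdk";

export const pack = coda.newPack();

// Allow the Pack to call the OpenAI API.
pack.addNetworkDomain("api.openai.com");

// Each user supplies their own API key as a bearer token.
pack.setUserAuthentication({
  type: coda.AuthenticationType.HeaderBearerToken,
});

pack.addFormula({
  name: "AskLLM", // hypothetical formula name
  description: "Send a prompt to a chat model and return the reply.",
  parameters: [
    coda.makeParameter({
      type: coda.ParameterType.String,
      name: "prompt",
      description: "The prompt to send to the model.",
    }),
  ],
  resultType: coda.ValueType.String,
  execute: async function ([prompt], context) {
    const response = await context.fetcher.fetch({
      method: "POST",
      url: "https://api.openai.com/v1/chat/completions",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "gpt-4o-mini", // assumed model; pick whatever fits your use case
        messages: [{ role: "user", content: prompt }],
      }),
    });
    // Coda's fetcher parses JSON responses into response.body.
    return response.body.choices[0].message.content;
  },
});
```

Note that even a custom Pack is still bound by Coda’s execution limits, so this doesn’t buy you streaming; it just lets you choose the model and parameters yourself rather than taking whatever Coda AI decides.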