Sync table with the 60 s timeout


For a sync table, I fetch a dozen items, and then for each item I make an additional request to fetch its image and get a temporary URL with the temporaryBlobStorage property (see here).

Unfortunately, I run into an error because the sync formula times out after 60 seconds:
Unhandled: {"errorMessage":"2022-01-xxx Task timed out after 60.06 seconds"}

I read in the docs that:

Continuations are not designed for persisting large amounts of data between executions

So is there a better approach than the continuation?
I have another use case with several items, each with its own image. Could that be considered a large amount of data?

Thank you in advance for your thoughts.

In these cases we recommend that you re-fetch the full list of items at the start of each sync execution, and then keep track of where you are in that list using an index. You can see an example of that approach in the Spell sample. Because the fetcher caches responses, re-fetching the full list should be fast.
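A minimal sketch of that approach, kept SDK-free for illustration (the `fetchAllItems` helper, `Item` shape, and `BATCH_SIZE` value are assumptions, not Coda APIs): the full list is re-fetched on every execution, and the continuation stores only a numeric index into it.

```typescript
interface Item { id: number; name: string; }
interface Continuation { index: number; }
interface SyncResult { result: Item[]; continuation?: Continuation; }

const BATCH_SIZE = 12; // tune so one batch completes well under 60 s

// Stand-in for the full-list fetch; in a real pack this call would be
// served quickly from the fetcher's cache on repeat executions.
function fetchAllItems(): Item[] {
  return Array.from({ length: 30 }, (_, i) => ({ id: i, name: `item-${i}` }));
}

function executeSync(continuation?: Continuation): SyncResult {
  const all = fetchAllItems();            // re-fetched each run
  const start = continuation?.index ?? 0; // resume where the last run stopped
  const batch = all.slice(start, start + BATCH_SIZE);
  const next = start + BATCH_SIZE;
  return {
    result: batch,
    // Only a small index is persisted between executions, not the data.
    continuation: next < all.length ? { index: next } : undefined,
  };
}
```

Each execution then handles only one batch (including any per-item image requests), and the sync engine keeps calling until no continuation is returned.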

Thank you @Eric_Koleda for your answer.
In the Dungeons and Dragons example, a batch size variable is used. But that variable doesn’t guarantee that the 60 s timeout is respected. The server’s response time could change, couldn’t it?

Yes, that’s a good point. That batch size works with the speed of the API today, but may not if, for example, the API gets significantly slower. I’d recommend testing out various batch sizes and looking for a value that completes well under 60 seconds, to leave room for fluctuations.
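One way to make the batch size resilient to API slowdowns, rather than relying on a fixed number alone, is to stop processing once a conservative time budget is spent and hand the resume index back via the continuation. This is a generic sketch, not Coda SDK code; the 45 s budget and the `processWithBudget` helper are assumptions for illustration.

```typescript
const TIME_BUDGET_MS = 45_000; // well under the 60 s limit, leaving headroom

function processWithBudget<T, R>(
  items: T[],
  startIndex: number,
  processOne: (item: T) => R, // e.g. the per-item image fetch
  budgetMs: number = TIME_BUDGET_MS,
): { results: R[]; nextIndex?: number } {
  const deadline = Date.now() + budgetMs;
  const results: R[] = [];
  let i = startIndex;
  // Process items until the list is exhausted or the budget runs out;
  // a slower API automatically yields a smaller batch.
  while (i < items.length && Date.now() < deadline) {
    results.push(processOne(items[i]));
    i++;
  }
  // If items remain, report where the next execution should resume.
  return i < items.length ? { results, nextIndex: i } : { results };
}
```

With this cutoff, a fixed batch size becomes an upper bound rather than a promise, since the execution bails out early whenever the requests slow down.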

Thank you @Eric_Koleda.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.