The idea is to retry a sync table's execution function when it times out, passing it an additional parameter indicating that the previous attempt timed out, so that it can reduce its execution time and recover.
The motivation: to avoid timing out, long-running sync table executions must be authored to keep execution time well below the deadline so that even outlier cases finish in time. In some cases execution time depends not only on pack-controlled variables such as page size, but also on the nature of the user-driven data, which forces even more conservative bounds on the controlled variables. This can make the pack execution itself slower (more continuations and more network requests).
With a second chance after a timeout, it is usually trivial to reduce the size of the workload for the retried execution (most packs have the same levers, e.g. page size), and packs can tune the controlled variables more aggressively in the common case, reducing the number of network requests and speeding up the sync.
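As a rough illustration, here is a minimal TypeScript sketch of a sync formula body under the proposed change. The `timedOut` argument is the hypothetical new parameter (it does not exist in the current packs SDK), and the endpoint, response shape (`items`, `nextCursor`), and page sizes are made up for the example:

```typescript
import * as coda from "@codahq/packs-sdk";

const DEFAULT_PAGE_SIZE = 200;
// Hypothetical smaller page size used only when the previous attempt timed out.
const REDUCED_PAGE_SIZE = 50;

// Sketch of a sync formula body under the proposed change; `timedOut` is the
// hypothetical extra argument passed on the retry.
async function syncTasks(context: coda.SyncExecutionContext, timedOut: boolean) {
  // On a retry after a timeout, shrink the workload so this attempt stays
  // well under the deadline.
  const pageSize = timedOut ? REDUCED_PAGE_SIZE : DEFAULT_PAGE_SIZE;

  // Resume from the continuation of the last successful page, if any.
  const cursor = context.sync.continuation?.cursor;
  const params: Record<string, any> = {limit: pageSize};
  if (cursor) {
    params.cursor = cursor;
  }

  const url = coda.withQueryParams("https://api.example.com/tasks", params);
  const response = await context.fetcher.fetch({method: "GET", url});
  const {items, nextCursor} = response.body;

  return {
    result: items,
    continuation: nextCursor ? {cursor: nextCursor} : undefined,
  };
}
```

Page size is only one lever; any other pack-controlled variable (e.g. skipping an expensive per-row enrichment call) could be gated on the same flag.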
In practice this may not be feasible: for problematic packs that consistently run over time, it would add another full 60s of compute time to every execution.