If you’ve used Coda AI with any level of prompt complexity, you’ve probably wondered how the model would respond without a certain part of the prompt, or with a slight modification to it.
Prompt engineering is much like writing code - you often test a hypothesis by copying a line, changing it subtly, and commenting out the original. However clever that approach may seem, I can save everyone the trouble of telling the LLM to ignore any lines in the prompt that begin with a comment indicator such as ‘#’ or ‘// ’.
It doesn’t work.
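Until comment support exists, one workaround that does behave predictably is stripping the commented lines yourself before the prompt ever reaches the model. A minimal sketch (the function name and the choice of markers are mine, not anything Coda provides):

```python
def strip_prompt_comments(prompt: str, markers: tuple = ("#", "// ")) -> str:
    """Drop any line whose first non-whitespace characters match a comment marker,
    so the model only ever sees the active lines of the prompt."""
    kept = []
    for line in prompt.splitlines():
        stripped = line.lstrip()
        if any(stripped.startswith(m) for m in markers):
            continue  # commented-out prompt line: exclude it entirely
        kept.append(line)
    return "\n".join(kept)
```

For example, `strip_prompt_comments("Summarize the doc.\n# old: keep it to one line.\nUse bullet points.")` returns only the two active lines. The point is that the filtering happens deterministically in your own code, instead of hoping the model honors an “ignore these lines” instruction.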
@Codans - you know this one is coming and you need it yourselves, so get it done.