Problem
Cell outputs can be very long. For example, running a query (gcloud, SQL, etc.) can produce very verbose output. That output can eat up the entire context allocated for the input document, leaving too little meaningful context to prompt the model with.
There was also a bug in our doc tailer. We applied character limits to the rendered markdown by tailing lines, which could produce invalid markdown. For example, we might truncate the document in the middle of a code block, dropping the opening triple backticks, or include a code block's output without the code that produced it.
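To illustrate the failure mode, here is a minimal Go sketch of the old line-tailing approach (the function name and details are hypothetical, not the actual code): because it cuts on line boundaries with no awareness of markdown structure, the cut can land inside a fenced code block.

```go
package tailer

import "strings"

// tailLines is a hypothetical sketch of the buggy approach: keep the last
// lines of the rendered markdown that fit within maxChars. Because it knows
// nothing about markdown structure, the cut can land inside a code fence,
// leaving a closing fence with no opening one, or an output block with no
// code block in front of it.
func tailLines(doc string, maxChars int) string {
	lines := strings.Split(doc, "\n")
	start := len(lines)
	total := 0
	for i := len(lines) - 1; i >= 0; i-- {
		total += len(lines[i]) + 1 // +1 for the newline
		if total > maxChars {
			break
		}
		start = i
	}
	return strings.Join(lines[start:], "\n")
}
```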
Solution
First, we impose character limits in a way that is aware of cell boundaries. Truncation moves into the Block-to-Markdown conversion, which now takes a maximum length for the output string. The conversion routine then decides how much of that budget to allocate to the cell's contents and how much to its outputs, so truncation can respect cell boundaries.
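As a rough illustration, here is a minimal Go sketch of the idea. All names (Block, BlockToMarkdown, the marker string) and the 50/50 budget split are hypothetical stand-ins for the approach described above, not the actual implementation:

```go
package converter

import (
	"fmt"
	"strings"
)

// truncationMessage is appended whenever code or output is cut short, so the
// model knows it is not seeing the full text.
const truncationMessage = "<...truncated...>"

// Block is a hypothetical stand-in for a notebook cell: its code plus the
// outputs produced by running it.
type Block struct {
	Contents string
	Outputs  []string
}

// truncate cuts s to at most max characters, appending the truncation
// marker when anything was dropped.
func truncate(s string, max int) string {
	if len(s) <= max {
		return s
	}
	if max <= len(truncationMessage) {
		return truncationMessage
	}
	return s[:max-len(truncationMessage)] + truncationMessage
}

// BlockToMarkdown renders a block as markdown while keeping the result close
// to maxLen characters. The budget is split between the cell contents and
// its outputs, so fences are always opened and closed and outputs never
// appear without the code that produced them.
func BlockToMarkdown(b Block, maxLen int) string {
	// Reserve roughly half the budget for the cell contents and the rest
	// for outputs; the exact split is an illustrative choice.
	contentBudget := maxLen / 2
	outputBudget := maxLen - contentBudget

	sb := strings.Builder{}
	sb.WriteString(fmt.Sprintf("```\n%s\n```\n", truncate(b.Contents, contentBudget)))
	for _, out := range b.Outputs {
		if outputBudget <= 0 {
			break
		}
		rendered := truncate(out, outputBudget)
		outputBudget -= len(rendered)
		sb.WriteString(fmt.Sprintf("```output\n%s\n```\n", rendered))
	}
	return sb.String()
}
```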
Second, if we truncate the code block or its output, we insert a string indicating that truncation occurred, so the model knows the output is incomplete. We also update our prompt to tell the LLM to look for truncated output and, if necessary, to deal with it by running commands that produce less verbose output.
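A hedged sketch of what that prompt addition might look like (the exact wording and marker string are illustrative and must match whatever the conversion inserts):

```go
// truncationGuidance is a hypothetical snippet appended to the system prompt
// so the model knows how to recognize and work around truncated output.
const truncationGuidance = `Some cell outputs end with "<...truncated...>".
This marker means the output was cut to fit the context window. If you need
the missing information, prefer commands that produce less verbose output,
for example by filtering, limiting, or paginating results.`
```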
Fix #299