fix: Prevent O(n²) memory usage in command output #9693
+219 −66
The bash tool and the inline command execution were accumulating command output with `output += chunk.toString()`, which creates a new string for every chunk and copies all previously accumulated content. For commands that produce large output (like meson in weston), this caused catastrophic O(n²) memory usage, since the GC had no time to catch up.
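The quadratic cost can be sketched as follows. This is an illustrative model, not the actual opencode code: `bytesCopied` counts the bytes a naive append-by-copy would move, which is what makes the pattern O(n²) in the number of chunks.

```typescript
// Simplified model of the `output += chunk` pattern. Each append
// conceptually allocates a new string and copies everything accumulated
// so far, so total bytes copied grow quadratically with chunk count.
function accumulate(chunks: string[]): { result: string; bytesCopied: number } {
  let output = "";
  let bytesCopied = 0;
  for (const chunk of chunks) {
    output += chunk;              // copies roughly output.length bytes
    bytesCopied += output.length; // model the cumulative copy cost
  }
  return { result: output, bytesCopied };
}

// 1000 chunks of 100 bytes: 100 KB of real output,
// but the model copies ~50 MB along the way.
const chunks = Array.from({ length: 1000 }, () => "x".repeat(100));
const { result, bytesCopied } = accumulate(chunks);
console.log(result.length, bytesCopied);
```

In practice V8 mitigates some of this with rope-like string representations, but as the PR describes, the allocation churn still outpaces the GC on large outputs.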
Large output was eventually written to disk and truncated in memory, but only at the very end, so the process could run out of memory before getting there.
Fix by streaming output to disk once it exceeds a 50KB threshold. Also adds line count tracking to match the original Truncate.MAX_LINES behavior, and proper cleanup on error/abort.
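The threshold approach can be sketched roughly as below. The names (`OutputCollector`, `LIMIT`, `spillPath`) are hypothetical and chosen for illustration; they are not the identifiers used in the actual PR.

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

const LIMIT = 50 * 1024; // 50KB in-memory cap (matches the PR's threshold)

// Illustrative sketch: keep at most LIMIT bytes in memory; once output
// grows beyond that, flush the buffer and append all further chunks to
// a file on disk instead of concatenating strings.
class OutputCollector {
  private buffer = "";
  private bytes = 0;
  private file?: number; // fd of the spill file, opened lazily
  readonly spillPath: string;

  constructor(dir = os.tmpdir()) {
    this.spillPath = path.join(dir, `output-${process.pid}.log`);
  }

  write(chunk: Buffer): void {
    this.bytes += chunk.length;
    if (this.bytes <= LIMIT) {
      this.buffer += chunk.toString(); // small outputs stay in memory
      return;
    }
    if (this.file === undefined) {
      // First time over the limit: flush what we have, then stream.
      this.file = fs.openSync(this.spillPath, "w");
      fs.writeSync(this.file, this.buffer);
    }
    fs.writeSync(this.file, chunk);
  }

  close(): { preview: string; truncated: boolean } {
    if (this.file !== undefined) fs.closeSync(this.file);
    return { preview: this.buffer, truncated: this.file !== undefined };
  }
}
```

Memory stays bounded by the 50KB buffer regardless of how much the command prints, while the full output remains available on disk for later inspection.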
The session/prompt.ts InteractiveCommand uses the same pattern.
What does this PR do?
Avoids a pathological GC case caused by repeated string concatenation, which was engaging the OOM killer on my 16GB RAM laptop whenever opencode ran a large meson build with verbose output. I hit the issue with the bash tool, but the same problem exists on the interactive command codepath, so I changed that as well.
The fix is to stop accumulating the whole output as one big string through repeated concatenation, with Truncate.output() handling it after the tool has finished, and instead do our own truncation handling. We accumulate at most 50K in memory (this could be improved to a sliding window of 50K if we prefer), and anything beyond that is streamed to a file. The final outcome is the same as before, but without the explosive memory usage.
How did you verify your code works?
I ran the tests, typechecked, and ran a build of opencode that includes the fix. opencode itself used the tool successfully, and I was able to view the whole output file afterwards. I did not observe the explosive memory usage I was seeing before, so I believe it is fixed.
A good way to test this is to run `!find /` and click to expand the preview (be careful, as this may grind your machine to a halt; any command that produces long output works). With this PR applied you should see the same text at the end, but memory usage should not explode.