[WIP] Insert context using @ commands #174
base: main
Conversation
Finally got menus to at least show up.
Also applies minor fix to gp.logger to respect log levels when it comes to sending notifications.
So I merged this into my fork on this branch: there were minor changes needed due to the recent restructure of the main repo. Also, I removed the (```) code fences around the file context so that it would be more generally usable for other types of context needed in the conversation, because I'm using this feature to selectively include specific rules files for my personal ttrpg. But I do think that feature would be useful as something like an @codefile target, or something similar that does insert the code fences.
Just to back up a thought (I haven't looked over the implementation yet): I'll start neovim from dir A, make a chat with relative references, and everything works. But if I later open that same chat from dir B, those relative references no longer resolve. The same issue will arise for any future session started from a different directory.
Hey, thanks for the feedback. I'm still actively working on this feature. I'll rebase on main when it's ready to be looked at, probably in the next few days. I'm not sure if that makes it easier or harder to merge into your personal branch though. :(
The @file command actually inserts both a relative file path and the file content. I thought the triple backtick fence would make it clear to the LLM where the file content starts and ends. Does the triple backtick fence actually confuse the LLM for your use case? Do you mind sharing a concrete example of the problem you ran into? If it really is a problem, I suppose we can try something like @include that neither inserts the file path nor adds the backticks.
@Odie The situation I was thinking ahead to regarding the backtick fence is when the included file is itself a markdown file containing its own triple-backtick fences. If the template inserts fences around it, it will invert all of them. Instead, I think including the file path sets a clear demarcation for the LLM that what follows is its content, especially if newlines are used strategically: one newline between the file path and the content, and two (or probably better, three) newlines between file contexts. LLMs are pretty good at picking up patterns like that. I think two or three newlines before the actual message is all that's necessary as well, because it follows the same pattern.
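The newline convention described above can be sketched as follows (the helper names are illustrative, not from the plugin):

```python
def render_context(path, content):
    # One newline between the file path and its content; the bare
    # path itself marks where a file's content begins.
    return f"{path}\n{content}"

def build_prompt(contexts, user_msg):
    # Three newlines (two blank lines) separate file contexts from
    # each other and from the actual user message.
    parts = [render_context(p, c) for p, c in contexts]
    return "\n\n\n".join(parts + [user_msg])
```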
Checking for the longest sequence of ` in the file and using N+1 as the fence? For example, a file containing a 3-backtick fence would itself be wrapped in 4 backticks:

````
```python
def main():
    print("3 backticks")
```
````
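That idea can be sketched as a small helper (hypothetical, not the plugin's actual code):

```python
import re

def fence_for(text):
    # Find the longest run of backticks in the file and use a fence
    # one backtick longer (minimum 3, markdown's shortest fence).
    runs = re.findall(r"`+", text)
    longest = max((len(r) for r in runs), default=0)
    return "`" * max(longest + 1, 3)
```

A file with a 3-backtick fence then gets a 4-backtick wrapper, and a file with no fences at all still gets the standard 3.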
Hmm! Are both dir A and dir B inside the same git repo? I had meant for the relative paths to work from the project git root, though the current implementation doesn't quite do it yet. (It's maybe a one or two line modification to make it so.) So this will work at some point soon, as I finish up the first pass of this feature and try to clean up some loose ends.

If what you're describing is to actually carry the chat across different repos... then, for sure, it doesn't work that way as implemented. At the moment, the requested contexts are parsed out of the last chat message (presumably from the user). The message is augmented with the contexts right before it goes out on the wire. The current state of those requested contexts is never recorded anywhere. :(

If we want the chat log to be a definitive record of what exactly was exchanged between the user and the LLM, I guess we'll have to try to insert the text into the chat buffer instead? The chat log size might explode if the user is repeatedly iterating on a single file, though. It'll also eat up the available context window rather quickly.
I think the files only need to be included where they are used. So if they are added early in the conversation, they should always be inserted early; it's all the same to the LLM, I think, and it keeps the entire conversation consistent as it progresses. It also solves the issue of having the content included in the actual chat file. That said, I would appreciate the option to not insert the text directly into the chat file: one reason I like the system as it is, is that it keeps a visually clean chat history, where each insertion is, from the chat file's perspective, more like a reference to shared knowledge. Like a header file in C++, it helps keep things clean and organized. I plan on including tens of files in certain conversations to use with gpt-mini or claude haiku, basically selectively including different sets of what amounts to hundreds of pages of rules for my ttrpg, so that I can query for inconsistencies or new ideas.
@Odie If we wanted to complicate things further, we could remember which dir the artifact was made from and, when generating a new response, first try to recreate a fresh instance of the artifact; if that failed, fall back to the old instance backed up in the artifacts.
@Robitx I think an artifacts directory as a solution adds more complexity than is necessary, since it would have to be maintained: when file x is deleted, artifact file x must also be deleted. For the time being, I think it should take advantage of the markdown YAML header section's key/value pairs, and every conversation should include a 'cwd' key with the path inserted when it is created. Then all of the relative paths should use this value instead of the vim cwd to remain consistent. I think this approach accomplishes a few things:

Ultimately, a header with even 20 or 40 key/value pairs at the start of the conversation is preferable to tracking down an artifact file, and it scales relatively well for the time being, because conversations typically outweigh the header in length, so scrolling is already necessary anyway. It may even be that for a given chat file x, there is some data which is preferable to maintain in an artifact file, and other data which is beneficial to have quickly visible. But I would at least argue that 'cwd' deserves to be visible.
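Reading a 'cwd' key back out of the chat file's header could be as simple as the following sketch (assuming plain `key: value` lines between `---` markers; this is not the plugin's parser):

```python
def parse_header(chat_text):
    # Collect key: value pairs from a leading '---' fenced block.
    lines = chat_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    header = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break
        if ":" in line:
            key, value = line.split(":", 1)
            header[key.strip()] = value.strip()
    return header
```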
@qaptoR You're right, that's why I put that in there. I don't know yet which would be better. For online LLM chats this is a non-issue, since they have to store uploaded artifacts anyway, but we do have a choice. Just a thought on the solution using a cwd header: I can easily imagine situations where a single cwd won't be sufficient (for example, a user working across multiple repositories, like a project using microservices, or simply a project plus a referenced library). Instead of putting cwd in the header, use
Hi all,

Function name indexing

When the user opens the chat buffer via
- We're also now indexing symbols of different types: functions, classes, and class methods
- The symbols table is now defined through the sqlite.lua ORM syntax
- Old src_files entries that no longer exist on disk are now discarded
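The shape of such a symbol index can be illustrated with plain sqlite (the schema below is a guess at the idea, not the plugin's actual sqlite.lua ORM definition; file names and symbols are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One row per indexed symbol: its name, kind, and location on disk.
conn.execute("""
    CREATE TABLE symbols (
        name TEXT, type TEXT, file TEXT,
        start_line INTEGER, end_line INTEGER
    )
""")
conn.executemany(
    "INSERT INTO symbols VALUES (?, ?, ?, ?, ?)",
    [
        ("setup", "function", "lua/gp/init.lua", 10, 42),
        ("Completion", "class", "lua/gp/completion.lua", 1, 80),
        ("Completion.attach", "class_method", "lua/gp/completion.lua", 20, 35),
    ],
)
# A completion source would query by prefix as the user types.
rows = conn.execute(
    "SELECT name, type FROM symbols WHERE name LIKE ?", ("Comp%",)
).fetchall()
```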
Hi all, I think I've added all the features I set out to implement. The latest commit now depends on plenary to deal with gitignore. The required dependencies now look like:
I'm actually only using one utility function from plenary. So, if there are any objections to this, I can always just keep a local copy of that function instead.
Hi all! I merged main.

Python support

Symbol indexing now grabs plain functions, class methods, and classes using treesitter. They should show up when using the @code command, correctly marked with their corresponding types.

Synchronous indexing

Async support is left undone at the moment. I looked ever so briefly into indexing asynchronously, but didn't pursue it further, as indexing seems "fast enough" for the small projects I'm trying it with. I'd really like to start using the plugin for a bit and discover other, perhaps more urgent, problems before tackling async support.

@qaptoR I've added an entry for the function under the cursor. This simplifies sending the function under the cursor as the chat context.
@Odie There are two things I want to make clear: 1) I appreciate all the work you've done on this, and 2) all the work involved with the sqlite db is incredible, and I plan on diving into how it works, because I want to write a plugin that mimics 'dataview' for obsidian, where it searches through a project and indexes data for searching that other plugins can then tap into. However, I do not foresee myself using the @code feature, because gp.nvim already had the ability to select code and insert it into the conversation with a file path annotation, which I think is faster and easier for me to target. I also don't want to incur the cost of indexing (however small) at this time, and I just think your first initial implementation was so elegant and simple that I'm adapting it for myself. I'm also implementing an @import command, which targets a file that can itself contain @file or @include (or even more @import commands). So it's a recursive feature that allows for writing a single file with all the commonly used includes. Though I'm still trying to solve the situation where there is infinite recursion, I think it would be hard to get into that situation if the user writes their command files carefully, avoiding reference loops where a imports b and b imports a.
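The reference-loop problem has a standard fix: track which files have already been expanded and skip repeats. A sketch under assumed syntax (the real @import parsing and file access belong to the plugin):

```python
def expand_imports(path, read_file, seen=None):
    # Recursively expand '@import <path>' lines; the visited set
    # breaks cycles like a imports b and b imports a.
    seen = set() if seen is None else seen
    if path in seen:
        return ""  # already expanded somewhere up the chain
    seen.add(path)
    out = []
    for line in read_file(path).splitlines():
        if line.startswith("@import "):
            target = line[len("@import "):].strip()
            out.append(expand_imports(target, read_file, seen))
        else:
            out.append(line)
    return "\n".join(out)
```

With the visited set, a mutually recursive pair of files terminates and each file's content appears once instead of looping forever.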
Just wanted to share this: it's the first time I've used the import feature on a large set of large files. On the left of the image are the 'import files'; top right is the 'import binder'; bottom right is the final import command, which references the binder. Then I query the entire context. Claude haiku says it is about 36K tokens of context, chatgpt mini says 31K, and I can't figure out how to see that info on gemini.
Hi there!
What this PR does
I’ve started implementing a feature in
continue.dev
that I think is quite useful. It allows users to use @ commands to automatically include additional context from their project. For example, users can use@file
to insert the entire contents of a file or@code
to insert a specific function by name.Here’s an example user message:
In this case, the contents of
lua/gp/completion.lua
would be prepended to the user message before being sent out.Command completion
The @ commands are assembled with assistance from a custom completion source for ease of use. To try this PR out, please add “hrsh7th/nvim-cmp” as a plugin dependency:
The completion source is automatically attached to the chat buffer, so no additional configuration is required.
TODOs
This feature is still a work in progress, but the @file command is now functional. Your feedback is welcome!