Merge branch 'main' into nate/sys-prompt-file
sestinj authored Nov 14, 2024
2 parents 1e98e36 + bb4f915 commit 689f3f0
Showing 138 changed files with 5,754 additions and 392 deletions.
11 changes: 4 additions & 7 deletions core/autocomplete/CompletionProvider.ts
@@ -261,13 +261,10 @@ export class CompletionProvider {

//////////

-      // Do some stuff later so as not to block return. Latency matters
-      const completionToCache = outcome.completion;
-      setTimeout(async () => {
-        if (!outcome.cacheHit) {
-          (await this.autocompleteCache).put(outcome.prefix, completionToCache);
-        }
-      }, 100);
+      // Save to cache
+      if (!outcome.cacheHit) {
+        (await this.autocompleteCache).put(outcome.prefix, outcome.completion);

[GitHub Actions / tsc-check, warning on line 266 in core/autocomplete/CompletionProvider.ts]: Promises must be awaited, end with a call to .catch, end with a call to .then with a rejection handler or be explicitly marked as ignored with the `void` operator
+      }

// When using the JetBrains extension, Mark as displayed
const ideType = (await this.ide.getIdeInfo()).ideType;
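The tsc-check warning above flags a floating promise: the new code starts the cache write but never awaits it. A minimal sketch of two standard ways to satisfy the no-floating-promises rule, assuming a promise-returning `put` (the `Cache` shape below is hypothetical, not the commit's actual cache API):

```typescript
// Hypothetical cache interface, for illustration only.
interface Cache {
  put(key: string, value: string): Promise<void>;
}

async function saveToCache(
  cachePromise: Promise<Cache>,
  prefix: string,
  completion: string,
): Promise<void> {
  const cache = await cachePromise;

  // Option 1: await the write so failures propagate to the caller.
  await cache.put(prefix, completion);

  // Option 2: fire-and-forget, but attach a rejection handler and mark the
  // promise as intentionally ignored with `void` so the lint rule passes.
  void cache
    .put(prefix, completion)
    .catch((err) => console.error("cache write failed:", err));
}
```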
22 changes: 10 additions & 12 deletions core/autocomplete/README.md
@@ -7,7 +7,7 @@ Continue now provides support for tab autocomplete in [VS Code](https://marketpl
We recommend setting up tab-autocomplete with a local Ollama instance. To do this, first download the latest version of Ollama from [here](https://ollama.ai). Then, run the following command to download our recommended model:

```bash
-ollama run starcoder:3b
+ollama run qwen2.5-coder:1.5b
```

Once it has been downloaded, you should begin to see completions in VS Code.
@@ -17,9 +17,9 @@ Once it has been downloaded, you should begin to see completions in VS Code.
You can also set up tab-autocomplete with a local LM Studio instance by following these steps:

1. Download the latest version of LM Studio from [here](https://lmstudio.ai/)
-2. Download a model (e.g. search for `second-state/StarCoder2-3B-GGUF` and choose one of the options there)
+2. Download a model (e.g. search for `Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF` and choose one of the options there)
3. Go to the server section (button is on the left), select your model from the dropdown at the top, and click "Start Server"
4. Go to the "My Models" section (button is on the left), find your selected model, and copy the name the path (example: `second-state/StarCoder2-3B-GGUF/starcoder2-3b-Q8_0.gguf`); this will be used as the "model" attribute in Continue
4. Go to the "My Models" section (button is on the left), find your selected model, and copy the name the path (example: `Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF/qwen2.5-coder-1.5b-instruct-q4_k_m.gguf`); this will be used as the "model" attribute in Continue
5. Go to Continue and modify the configurations for a [custom model](#setting-up-a-custom-model)
6. Set the "provider" to `lmstudio` and the "model" to the path copied earlier

@@ -28,8 +28,8 @@ Example:
```json title="config.json"
{
  "tabAutocompleteModel": {
-    "title": "Starcoder2 3b",
-    "model": "second-state/StarCoder2-3B-GGUF/starcoder2-3b-Q8_0.gguf",
+    "title": "Qwen2.5-Coder 1.5b",
+    "model": "Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF",
    "provider": "lmstudio",
  },
  ...
```
@@ -69,11 +69,9 @@ If you aren't yet familiar with the available options, you can learn more in our

### What model should I use?

-If you are running the model locally, we recommend `starcoder:3b`.
+If you are running the model locally, we recommend `qwen2.5-coder:1.5b`.

-If you find it to be too slow, you should try `deepseek-coder:1.3b-base`.

-If you have a bit more compute, or are running a model in the cloud, you can upgrade to `deepseek-coder:6.7b-base`.
+If you have a bit more compute, or are running a model in the cloud, you can upgrade to `qwen2.5-coder:7b`.
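For example, the larger model can be fetched ahead of time with the same Ollama CLI already used above:

```bash
ollama pull qwen2.5-coder:7b
```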

Regardless of what you are willing to spend, we do not recommend using GPT or Claude for autocomplete. Learn why [below](#i-want-better-completions-should-i-use-gpt-4).

@@ -83,7 +81,7 @@ The following can be configured in `config.json`:

### `tabAutocompleteModel`

-This is just another object like the ones in the `"models"` array of `config.json`. You can choose and configure any model you would like, but we strongly suggest using a small model made for tab-autocomplete, such as `deepseek-1b`, `starcoder-1b`, or `starcoder-3b`.
+This is just another object like the ones in the `"models"` array of `config.json`. You can choose and configure any model you would like, but we strongly suggest using a small model made for tab-autocomplete, such as `deepseek-1b`, `qwen2.5-coder:1.5b`, or `starcoder-3b`.

### `tabAutocompleteOptions`

@@ -105,7 +103,7 @@ This object allows you to customize the behavior of tab-autocomplete. The availa
"tabAutocompleteModel": {
"title": "Tab Autocomplete Model",
"provider": "ollama",
"model": "starcoder:3b",
"model": "qwen2.5-coder:1.5b",
"apiBase": "https://<my endpoint>"
},
"tabAutocompleteOptions": {
@@ -128,7 +126,7 @@ Follow these steps to ensure that everything is set up correctly:

1. Make sure you have the "Enable Tab Autocomplete" setting checked (in VS Code, you can toggle by clicking the "Continue" button in the status bar).
2. Make sure you have downloaded Ollama.
-3. Run `ollama run starcoder:3b` to verify that the model is downloaded.
+3. Run `ollama run qwen2.5-coder:1.5b` to verify that the model is downloaded.
4. Make sure that any other completion providers are disabled (e.g. Copilot), as they may interfere.
5. Make sure that you aren't also using another Ollama model for chat. This will cause Ollama to constantly load and unload the models from memory, resulting in slow responses (or none at all) for both.
6. Check the output of the logs to find any potential errors (cmd/ctrl+shift+p -> "Toggle Developer Tools" -> "Console" tab in VS Code, ~/.continue/logs/core.log in JetBrains).
@@ -9,6 +9,21 @@ import { AstPath } from "../../util/ast";
import { ImportDefinitionsService } from "../ImportDefinitionsService";
import { AutocompleteSnippet } from "../ranking";

function getSyntaxTreeString(
  node: Parser.SyntaxNode,
  indent: string = "",
): string {
  let result = "";
  const nodeInfo = `${node.type} [${node.startPosition.row}:${node.startPosition.column} - ${node.endPosition.row}:${node.endPosition.column}]`;
  result += `${indent}${nodeInfo}\n`;

  for (const child of node.children) {
    result += getSyntaxTreeString(child, indent + " ");
  }

  return result;
}

export class RootPathContextService {
  private cache = new LRUCache<string, AutocompleteSnippet[]>({
    max: 100,
@@ -26,8 +41,11 @@ export class RootPathContextService {
  private static TYPES_TO_USE = new Set([
    "program",
    "function_declaration",
    "function_definition",
    "method_definition",
    "method_declaration",
    "class_declaration",
    "class_definition",
  ]);

/**
@@ -56,12 +74,17 @@
        this.importDefinitionsService.get(filepath);
        break;
      default:
        // const type = node.type;
        // debugger;
        // console.log(getSyntaxTreeString(node));

        query = await getQueryForFile(
          filepath,
          `root-path-context-queries/${node.type}`,
        );
        break;
    }
    const type = node.type;

    if (!query) {
      return snippets;
@@ -97,6 +120,7 @@
        contents: await this.ide.readRangeInFile(def.filepath, def.range),
      })),
    );

    return newSnippets;
}

@@ -112,6 +136,8 @@
      RootPathContextService.TYPES_TO_USE.has(node.type),
    )) {
      const key = RootPathContextService.keyFromNode(parentKey, astNode);
      // const type = astNode.type;
      // debugger;

      const foundInCache = this.cache.get(key);
      const newSnippets =
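The new `getSyntaxTreeString` helper added above can also be exercised on its own. A sketch with web-tree-sitter follows; the grammar path and sample snippet are assumptions, and the exact node spans depend on the grammar version:

```typescript
import Parser from "web-tree-sitter";

async function printTree(): Promise<void> {
  await Parser.init();
  const parser = new Parser();
  // Hypothetical path to a compiled TypeScript grammar.
  const lang = await Parser.Language.load("tree-sitter-typescript.wasm");
  parser.setLanguage(lang);

  const tree = parser.parse("function greet(name: string) {}");
  console.log(getSyntaxTreeString(tree.rootNode));
  // Prints an indented outline roughly like:
  // program [0:0 - 0:32]
  //   function_declaration [0:0 - 0:32]
  //     identifier [0:9 - 0:14]
  //     ...
}
```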
@@ -2,41 +2,85 @@ import { testRootPathContext } from "./testUtils";

const TEST_CASES = [
  {
-    description: "function",
+    nodeType: "function_declaration",
     fileName: "file1.ts",
-    range: {
-      start: { line: 10, character: 2 },
-      end: { line: 10, character: 24 },
-    },
-    positions: [
+    language: "TypeScript",
+    cursorPosition: { line: 10, character: 24 },
+    definitionPositions: [
       { row: 9, column: 34 }, // Person
       { row: 9, column: 44 }, // Address
     ],
   },
  {
-    description: "class method",
+    nodeType: "method_declaration",
     fileName: "file1.ts",
-    range: {
-      start: { line: 22, character: 4 },
-      end: { line: 22, character: 30 },
-    },
-    positions: [
+    language: "TypeScript",
+    cursorPosition: { line: 22, character: 30 },
+    definitionPositions: [
       { row: 13, column: 29 }, // BaseClass
       { row: 13, column: 55 }, // FirstInterface
       { row: 13, column: 72 }, // SecondInterface
       { row: 21, column: 33 }, // Person
       { row: 21, column: 43 }, // Address
     ],
   },
+  {
+    nodeType: "function_definition",
+    fileName: "file1.py",
+    language: "Python",
+    cursorPosition: { line: 4, character: 25 },
+    definitionPositions: [
+      { row: 3, column: 30 }, // Person
+      { row: 3, column: 42 }, // Address
+    ],
+  },
+  {
+    nodeType: "function_definition (inside a class)",
+    fileName: "file1.py",
+    language: "Python",
+    cursorPosition: { line: 12, character: 33 },
+    definitionPositions: [
+      { row: 6, column: 21 }, // BaseClass
+      { row: 6, column: 33 }, // Collection
+      { row: 11, column: 47 }, // Person
+      { row: 11, column: 59 }, // Address
+    ],
+  },
+  {
+    nodeType: "function_definition",
+    fileName: "file1.php",
+    language: "PHP",
+    cursorPosition: { line: 12, character: 32 },
+    definitionPositions: [
+      { row: 10, column: 26 }, // Person
+      { row: 10, column: 44 }, // Address
+    ],
+  },
+  {
+    nodeType: "function_definition (inside a class)",
+    fileName: "file1.php",
+    language: "PHP",
+    cursorPosition: { line: 26, character: 35 },
+    definitionPositions: [
+      { row: 15, column: 29 }, // BaseClass
+      { row: 15, column: 55 }, // FirstInterface
+      { row: 15, column: 72 }, // SecondInterface
+      { row: 25, column: 43 }, // Person
+      { row: 25, column: 61 }, // Address
+    ],
+  },
];
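For reference, each entry in the refactored `TEST_CASES` array follows this shape — an inferred summary, not an interface declared in the test file:

```typescript
interface RootPathContextTestCase {
  nodeType: string; // tree-sitter node type enclosing the cursor
  fileName: string; // fixture under test/files/
  language: string; // display name used in the test title
  cursorPosition: { line: number; character: number };
  definitionPositions: Array<{ row: number; column: number }>;
}
```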

describe("RootPathContextService", () => {
describe("TypeScript should return expected snippets when editing inside a:", () => {
test.each(TEST_CASES)(
"should look for correct type definitions when editing inside a $description",
async ({ fileName, range, positions }) => {
await testRootPathContext("typescript", fileName, range, positions);
},
);
});
test.each(TEST_CASES)(
"Should look for correct type definitions when editing inside a $nodeType in $language",
async ({ fileName, cursorPosition, definitionPositions }) => {
await testRootPathContext(
"files",
fileName,
cursorPosition,
definitionPositions,
);
},
);
});
Empty file.
@@ -0,0 +1,26 @@
# File: base_module.py

class BaseClass:
    def __init__(self):
        print("BaseClass initialized")

class Collection:
    def __init__(self):
        print("Collection initialized")

class Address:
    def __init__(self, street: str, city: str, zip_code: str):
        self.street = street
        self.city = city
        self.zip_code = zip_code

    def __str__(self):
        return f"{self.street}, {self.city}, {self.zip_code}"

class Person:
    def __init__(self, name: str, address: Address):
        self.name = name
        self.address = address

    def __str__(self):
        return f"{self.name} lives at {self.address}"
32 changes: 32 additions & 0 deletions core/autocomplete/context/root-path-context/test/files/file1.php
@@ -0,0 +1,32 @@
<?php

namespace BaseNamespace;

use BaseNamespace\BaseClass;
use BaseNamespace\Interfaces\FirstInterface;
use BaseNamespace\Interfaces\SecondInterface;
use BaseNamespace\Person;
use BaseNamespace\Address;

function getAddress(Person $person): Address
{
    return $person->getAddress();
}

class Group extends BaseClass implements FirstInterface, SecondInterface
{
    private array $people;

    public function __construct(array $people)
    {
        parent::__construct();
        $this->people = $people;
    }

    public function getPersonAddress(Person $person): Address
    {
        return getAddress($person);
    }
}

?>
13 changes: 13 additions & 0 deletions core/autocomplete/context/root-path-context/test/files/file1.py
@@ -0,0 +1,13 @@
from .base_module import BaseClass, Collection, Person, Address
from typing import List

def get_address(person: Person) -> Address:
    return person.address

class Group(BaseClass, Collection):
    def __init__(self, people: List[Person]) -> None:
        super().__init__()
        self.people = people

    def get_person_address(self, person: Person) -> Address:
        return get_address(person)
