Token Sequence Length Exceeds Limit Despite model_tokens Parameter in Ollama Model #768
Comments
please update to the new examples
they are legacy
Can you point me towards the latest examples? I followed this one.
look at this link https://github.com/ScrapeGraphAI/Scrapegraph-ai/tree/main/examples
I've looked into the examples, and I noticed that in this example and other examples related to Ollama, the context window is set using `model_tokens`. I really like your project, but without being able to increase the context window to make full use of the model, I won't be able to use this framework effectively. Could you please provide a short code snippet or guidance on changing the context length in the latest version?
Ok, can you specify the context_window inside the config? Like this: `graph_config = { … }`
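(The snippet in this comment is cut off in the capture. Purely as an illustration of the kind of config being suggested — modeled on the one posted later in the thread; the base URL and token count here are placeholders, not taken from the original comment:)

```python
# Illustrative graph_config sketch; the exact values in the original comment are unknown.
# `model_tokens` is the ScrapeGraphAI-side setting discussed in this issue.
graph_config = {
    "llm": {
        "model": "ollama/llama3.1:8b",
        "temperature": 0,
        "format": "json",
        "model_tokens": 128000,                 # intended context window
        "base_url": "http://localhost:11434",   # assumed local Ollama endpoint
    },
}
```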
Btw, which model of Mistral are you using?
As you can see from my example, I followed this procedure. I attempted to execute it without embedding for debugging purposes; however, the identical error persists. I am using Ollama version 0.3.14.
I just use the latest Mistral model, but I also tried llama3.1:8b and 70b, which have a context length of 128k, as well as gemma2:9b.
@JSchmie I fixed this in #773. The pull request will be merged into the development branch.
I have installed your branch using

`pip install --force-reinstall git+https://github.com/ScrapeGraphAI/Scrapegraph-ai.git@768-fix-model-tokens`

But unfortunately, I cannot confirm that it works. I still get the error:

`Token indices sequence length is longer than the specified maximum sequence length for this model (11148 > 1024). Running this sequence through the model will result in indexing errors`

I can confirm that …
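(Not part of the original comment — one quick, hedged way to sanity-check which scrapegraphai build is actually being imported after a forced reinstall from a branch:)

```python
# Illustrative check, assuming the package is installed under the distribution name "scrapegraphai"
from importlib.metadata import version

print(version("scrapegraphai"))  # should reflect the branch's version string
```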
Hi, please update to the new beta.
## [1.27.0-beta.13](v1.27.0-beta.12...v1.27.0-beta.13) (2024-10-29)

### Bug Fixes

* **AbstractGraph:** manually select model tokens ([f79f399](f79f399)), closes [#768](#768)
🎉 This issue has been resolved in version 1.27.0-beta.13 🎉 The release is available on:
Your semantic-release bot 📦🚀
@VinciGit00 I tried that adjustment, and while the error persists, the results are looking significantly better now! Could it be that this error is being thrown unintentionally?
@JSchmie the error is coming from LangChain and not from ScrapeGraphAI. Using …
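(Added for context, not part of the thread: the warning text quoted above is the standard message Hugging Face tokenizers print when an input exceeds the tokenizer's `model_max_length`, which is consistent with it being a token-counting warning rather than an error from the Ollama model. A minimal, self-contained reproduction, assuming the `transformers` package is installed:)

```python
# Reproduces the "Token indices sequence length is longer than..." warning in isolation.
# Assumption: a GPT-2-style tokenizer (max length 1024) is used purely for token counting;
# the warning does not come from the Ollama model itself.
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")  # model_max_length is 1024
long_text = "word " * 5000
token_ids = tokenizer.encode(long_text)  # emits the warning, but still returns all ids
print(len(token_ids))
```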
Yes, but the error still occurs when I am using:

```python
graph_config = {
    "llm": {
        "model": "ollama/llama3.1:8b",
        "temperature": 1,
        "format": "json",
        "model_tokens": 128000,
        "base_url": ollama_base_url,
    },
    "embeddings": {
        "model": "ollama/nomic-embed-text",
        "base_url": ollama_base_url,
    },
}
```
Update: I believe I found a crucial issue, which may stem from Ollama itself. Their API documentation notes that when requesting JSON output, the model also has to be instructed to respond in JSON in the prompt itself; otherwise it may generate large amounts of whitespace.
Until now, I wasn't aware of this limitation. If the model doesn't respond in JSON, it outputs a series of newline characters. Given that inputs can sometimes be quite large, the model might ignore the instruction to respond in JSON, potentially leading to significant quality discrepancies. Interestingly, when using LangChain directly, this issue doesn't occur, and the context length is applied correctly. I've included the code below, which may be helpful for debugging.

```python
import requests
from bs4 import BeautifulSoup
from langchain_ollama import ChatOllama

# Define the URL to fetch content from
url = "https://github.com/ScrapeGraphAI/Scrapegraph-ai"

# Send a GET request to fetch the raw HTML content from the URL
response = requests.get(url)
response.raise_for_status()  # Raise an exception if an HTTP error occurs

# Parse the HTML content with BeautifulSoup
soup = BeautifulSoup(response.content, "html.parser")

# Extract and clean up text content from HTML, removing tags and adding line breaks
text_content = soup.get_text(separator='\n', strip=True)

# Create a prompt to ask the language model (LLM) what the website is about
# JSON format is explicitly requested in the prompt
prompt = f"""
USE JSON!!!
What is this website about?
{text_content}
"""

# Initialize the language model with specific configurations
llm = ChatOllama(
    base_url='http://localhost:11434',  # Specify the base URL for the LLM server
    model='llama3.1:8b',                # Define the model to use
    num_ctx=128000,                     # Set the maximum context length for the LLM
    format='json'                       # Request JSON output format from the LLM
)

# Invoke the LLM with the prompt and print its response
print(llm.invoke(prompt))
```

The output looks like this:

```
AIMessage(content='{ "type": "json", "result": { "website": "scrapegraphai.com", "library_name": "ScrapeGraphAI", "description": "A Python library for scraping leveraging large language models.", "license": "MIT license" } }\n\n \n\n\n\n\n\n \n\n\n\n ', additional_kwargs={}, response_metadata={'model': 'llama3.1:8b', 'created_at': '2024-10-29T13:35:28.025049704Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 5262412590, 'load_duration': 4070193548, 'prompt_eval_count': 2328, 'prompt_eval_duration': 385116000, 'eval_count': 61, 'eval_duration': 761894000}, id='run-34027abd-c2ea-433e-8eb5-3bb57b5e97a2-0', usage_metadata={'input_tokens': 2328, 'output_tokens': 61, 'total_tokens': 2389})
```
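(A follow-up note, not from the original comment: the JSON answer above remains machine-readable despite the trailing whitespace Ollama appends in JSON mode. A small illustrative continuation of the script above — the keys accessed here simply mirror this particular run and will vary between runs:)

```python
import json

# `llm` and `prompt` are the objects defined in the script above
answer = llm.invoke(prompt)
data = json.loads(answer.content)       # json.loads tolerates the trailing whitespace
print(data["result"]["library_name"])   # e.g. "ScrapeGraphAI" in the run shown above
```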
## [1.28.0-beta.1](v1.27.0...v1.28.0-beta.1) (2024-10-30)

### Features

* add new mistral models ([6914170](6914170))
* refactoring of the base_graph ([12a6c18](12a6c18))

### Bug Fixes

* **AbstractGraph:** manually select model tokens ([f79f399](f79f399)), closes [#768](#768)

### CI

* **release:** 1.27.0-beta.11 [skip ci] ([3b2cadc](3b2cadc))
* **release:** 1.27.0-beta.12 [skip ci] ([62369e3](62369e3))
* **release:** 1.27.0-beta.13 [skip ci] ([deed355](deed355)), closes [#768](#768)
🎉 This issue has been resolved in version 1.28.0-beta.1 🎉 The release is available on:
Your semantic-release bot 📦🚀
## [1.28.0](v1.27.0...v1.28.0) (2024-11-01)

### Features

* add new mistral models ([6914170](6914170))
* refactoring of the base_graph ([12a6c18](12a6c18))
* update generate answer ([7172b32](7172b32))

### Bug Fixes

* **AbstractGraph:** manually select model tokens ([f79f399](f79f399)), closes [#768](#768)

### CI

* **release:** 1.27.0-beta.11 [skip ci] ([3b2cadc](3b2cadc))
* **release:** 1.27.0-beta.12 [skip ci] ([62369e3](62369e3))
* **release:** 1.27.0-beta.13 [skip ci] ([deed355](deed355)), closes [#768](#768)
* **release:** 1.28.0-beta.1 [skip ci] ([8cbe582](8cbe582)), closes [#768](#768) [#768](#768)
* **release:** 1.28.0-beta.2 [skip ci] ([7e3598d](7e3598d))
🎉 This issue has been resolved in version 1.28.0 🎉 The release is available on:
Your semantic-release bot 📦🚀
Describe the bug

The `model_tokens` parameter in the `graph_config` dictionary is not being applied to the Ollama model within the `SmartScraperGraph` setup. Despite setting `model_tokens` to 128000, the output still shows an error indicating that the token sequence length exceeds the model's limit (2231 > 1024), causing indexing errors.

To Reproduce

Steps to reproduce the behavior:

1. Set up `SmartScraperGraph` using the code below.
2. Configure the `graph_config` dictionary, specifying `model_tokens: 128000` under the `"llm"` section.
3. Run `smart_scraper_graph.run()`.

Expected behavior

The `model_tokens` parameter should be applied to Ollama's model, ensuring that the model respects the 128000-token length specified without raising indexing errors.

Code
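(The original code snippet is not preserved in this capture. Purely as an illustration of the setup the steps above describe, a minimal sketch using the `graph_config` posted earlier in the thread might look like this — prompt, source URL, and base URL are placeholders, not the reporter's actual values:)

```python
# Illustrative sketch only; not the reporter's original snippet.
from scrapegraphai.graphs import SmartScraperGraph

graph_config = {
    "llm": {
        "model": "ollama/llama3.1:8b",
        "temperature": 1,
        "format": "json",
        "model_tokens": 128000,                 # the setting this issue is about
        "base_url": "http://localhost:11434",
    },
    "embeddings": {
        "model": "ollama/nomic-embed-text",
        "base_url": "http://localhost:11434",
    },
}

smart_scraper_graph = SmartScraperGraph(
    prompt="What is this website about?",       # placeholder prompt
    source="https://github.com/ScrapeGraphAI/Scrapegraph-ai",
    config=graph_config,
)
print(smart_scraper_graph.run())
```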
Error Message
Desktop:
Additional context: Ollama typically uses the `num_ctx` parameter to set context length. It seems that `model_tokens` does not directly influence the model's context length, suggesting a possible oversight or misconfiguration in how `SmartScraperGraph` handles token length parameters with Ollama models.
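(Added illustration, not part of the original report: this is how Ollama's own HTTP API expects the context length to be set, via `num_ctx` in the request `options` — the knob that `model_tokens` would ultimately need to map onto. Model name and prompt here are placeholders.)

```python
# Minimal sketch of setting num_ctx directly against a local Ollama server.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",
        "prompt": "What is this website about?",
        "options": {"num_ctx": 128000},  # Ollama's context-length option
        "stream": False,
    },
)
resp.raise_for_status()
print(resp.json()["response"])
```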
Thank you for taking the time to look into this issue! I appreciate any guidance or suggestions you can provide to help resolve this problem. Your assistance means a lot, and I'm looking forward to any insights you might have on how to apply the `model_tokens` parameter correctly with Ollama. Thanks again for your help!