From 50520c7c1cb4e3f9353a96cb33cc8b9d18fc0ef8 Mon Sep 17 00:00:00 2001
From: Kian-Meng Ang
Date: Sun, 8 Sep 2024 23:44:43 +0800
Subject: [PATCH] Fix typos (#567)

Found via `codespell -H -L wit,thre`

!stable-docs
---
 docs/changelog.md                     | 2 +-
 docs/plugins/tutorial-model-plugin.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/changelog.md b/docs/changelog.md
index c3ed2b2e..f8e38b4c 100644
--- a/docs/changelog.md
+++ b/docs/changelog.md
@@ -177,7 +177,7 @@ To create embeddings for every JPEG in a directory stored in a `photos` collecti
 llm install llm-clip
 llm embed-multi photos --files photos/ '*.jpg' --binary -m clip
 ```
-Now you can search for photos of racoons using:
+Now you can search for photos of raccoons using:
 ```
 llm similar photos -c 'raccoon'
 ```
diff --git a/docs/plugins/tutorial-model-plugin.md b/docs/plugins/tutorial-model-plugin.md
index a2f78df7..ff9c17fb 100644
--- a/docs/plugins/tutorial-model-plugin.md
+++ b/docs/plugins/tutorial-model-plugin.md
@@ -135,7 +135,7 @@ We can try that out by pasting it into the interactive Python interpreter and ru

 To execute the model, we start with a word. We look at the options for words that might come next and pick one of those at random. Then we repeat that process until we have produced the desired number of output words.

-Some words might not have any following words from our training sentence. For our implementation we wil fall back on picking a random word from our collection.
+Some words might not have any following words from our training sentence. For our implementation we will fall back on picking a random word from our collection.

 We will implement this as a [Python generator](https://realpython.com/introduction-to-python-generators/), using the yield keyword to produce each token:
 ```python