Merge branch 'master' into feature/add-weaviate-certainty
* master: (28 commits)
  * bump version to 0094 (langchain-ai#1280)
  * feat: document loader for MS Word documents (langchain-ai#1282)
  * cleanup (langchain-ai#1274)
  * Harrison/cohere params (langchain-ai#1278)
  * Harrison/logprobs (langchain-ai#1279)
  * Harrison/fb loader (langchain-ai#1277)
  * Harrison/errors (langchain-ai#1276)
  * adding .ipynb loader and documentation Fixes langchain-ai#1248 (langchain-ai#1252)
  * Harrison/source docs (langchain-ai#1275)
  * Add Writer, Banana, Modal, StochasticAI (langchain-ai#1270)
  * searx: add `query_suffix` parameter (langchain-ai#1259)
  * fix bug with length function (langchain-ai#1257)
  * docs: remove nltk download steps (langchain-ai#1253)
  * added caching and properties docs (langchain-ai#1255)
  * bump version to 0093 (langchain-ai#1251)
  * Add DeepInfra LLM support (langchain-ai#1232)
  * docs: add Graphsignal ecosystem page (langchain-ai#1228)
  * fix to specific language transcript (langchain-ai#1231)
  * add ifttt tool (langchain-ai#1244)
  * Don't instruct LLM to use the LIMIT clause, which is incompatible with SQL Server (langchain-ai#1242)
  * ...
Showing 69 changed files with 2,846 additions and 46 deletions.
# Banana

This page covers how to use the Banana ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Banana wrappers.

## Installation and Setup
- Install with `pip3 install banana-dev`
- Get a Banana API key and set it as an environment variable (`BANANA_API_KEY`)
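The wrapper reads the key from the environment, so for a quick local session you can also set it from Python (the value below is a hypothetical placeholder — substitute your real key):

```python
import os

# Hypothetical placeholder value; replace with your real Banana API key
os.environ["BANANA_API_KEY"] = "your-api-key-here"
```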
## Define your Banana Template

If you want to use an available language model template you can find one [here](https://app.banana.dev/templates/conceptofmind/serverless-template-palmyra-base).
This template uses the Palmyra-Base model by [Writer](https://writer.com/product/api/).
You can check out an example Banana repository [here](https://github.com/conceptofmind/serverless-template-palmyra-base).

## Build the Banana app

You must include an `output` key in the result. The response structure is rigid:
```python
# Return the results as a dictionary
result = {'output': result}
```
An example inference function would be:
```python
def inference(model_inputs: dict) -> dict:
    global model
    global tokenizer

    # Parse out your arguments
    prompt = model_inputs.get('prompt', None)
    if prompt is None:
        return {'message': "No prompt provided"}

    # Run the model
    input_ids = tokenizer.encode(prompt, return_tensors='pt').cuda()
    output = model.generate(
        input_ids,
        max_length=100,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        num_return_sequences=1,
        temperature=0.9,
        early_stopping=True,
        no_repeat_ngram_size=3,
        num_beams=5,
        length_penalty=1.5,
        repetition_penalty=1.5,
        bad_words_ids=[[tokenizer.encode(' ', add_prefix_space=True)[0]]]
    )

    result = tokenizer.decode(output[0], skip_special_tokens=True)
    # Return the results as a dictionary
    result = {'output': result}
    return result
```
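To sanity-check the response contract without a GPU, you can exercise the same shape with a stubbed model call — `fake_generate` below is a hypothetical stand-in for the tokenizer/model pipeline, not part of Banana:

```python
def fake_generate(prompt: str) -> str:
    # Hypothetical stand-in for the real tokenizer/model pipeline
    return f"echo: {prompt}"

def inference_stub(model_inputs: dict) -> dict:
    prompt = model_inputs.get('prompt', None)
    if prompt is None:
        # Same error shape as the real handler
        return {'message': "No prompt provided"}
    # Banana requires the result under the 'output' key
    return {'output': fake_generate(prompt)}

ok = inference_stub({'prompt': 'hello'})
err = inference_stub({})
```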
You can find a full example of a Banana app [here](https://github.com/conceptofmind/serverless-template-palmyra-base/blob/main/app.py).

## Wrappers

### LLM

There exists a Banana LLM wrapper, which you can access with
```python
from langchain.llms import Banana
```

You need to provide a model key located in the dashboard:
```python
llm = Banana(model_key="YOUR_MODEL_KEY")
```
# DeepInfra

This page covers how to use the DeepInfra ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific DeepInfra wrappers.

## Installation and Setup
- Get a DeepInfra API key from [deepinfra.com](https://deepinfra.com/) and set it as an environment variable (`DEEPINFRA_API_TOKEN`)

## Wrappers

### LLM

There exists a DeepInfra LLM wrapper, which you can access with
```python
from langchain.llms import DeepInfra
```
# Graphsignal

This page covers how to use Graphsignal to trace and monitor LangChain.

## Installation and Setup

- Install the Python library with `pip install graphsignal`
- Create a free Graphsignal account [here](https://graphsignal.com)
- Get an API key and set it as an environment variable (`GRAPHSIGNAL_API_KEY`)

## Tracing and Monitoring

Graphsignal automatically instruments and starts tracing and monitoring chains. Traces, metrics, and errors are then available in your [Graphsignal dashboard](https://app.graphsignal.com/). No prompts or other sensitive data are sent to the Graphsignal cloud, only statistics and metadata.

Initialize the tracer by providing a deployment name:

```python
import graphsignal

graphsignal.configure(deployment='my-langchain-app-prod')
```

In order to trace full runs and see a breakdown by chains and tools, you can wrap the calling routine or use a decorator:

```python
with graphsignal.start_trace('my-chain'):
    chain.run("some initial text")
```

Optionally, enable profiling to record function-level statistics for each trace:

```python
with graphsignal.start_trace(
        'my-chain', options=graphsignal.TraceOptions(enable_profiling=True)):
    chain.run("some initial text")
```

See the [Quick Start](https://graphsignal.com/docs/guides/quick-start/) guide for complete setup instructions.
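Conceptually, `start_trace` behaves like a context manager that records the span's name, duration, and error status for the wrapped call, then reports the record. The sketch below is purely illustrative — it is not Graphsignal's implementation, and the `traces` list is a stand-in for the Graphsignal backend:

```python
import time
from contextlib import contextmanager

traces = []  # stand-in for the Graphsignal backend

@contextmanager
def toy_start_trace(name: str):
    # Record name, wall-clock duration, and error status for the wrapped block
    record = {"name": name, "error": None}
    start = time.perf_counter()
    try:
        yield record
    except Exception as exc:
        record["error"] = repr(exc)
        raise
    finally:
        record["duration_s"] = time.perf_counter() - start
        traces.append(record)

with toy_start_trace("my-chain"):
    total = sum(range(1000))  # stand-in for chain.run("some initial text")
```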
# Modal

This page covers how to use the Modal ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Modal wrappers.

## Installation and Setup
- Install with `pip install modal-client`
- Run `modal token new`

## Define your Modal Functions and Webhooks

You must include a prompt. There is a rigid response structure:

```python
class Item(BaseModel):
    prompt: str

@stub.webhook(method="POST")
def my_webhook(item: Item):
    return {"prompt": my_function.call(item.prompt)}
```
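In other words, the webhook accepts a JSON body with a single required `prompt` field and responds in the same rigid shape. A plain-Python sketch of that contract (no Modal needed; `fake_call` is a hypothetical stand-in for `my_function.call`):

```python
import json

def fake_call(prompt: str) -> str:
    # Hypothetical stand-in for my_function.call(...)
    return prompt + ", world"

def handle_post(body: str) -> dict:
    # Modal/pydantic performs this validation for you via the Item model
    item = json.loads(body)
    if "prompt" not in item:
        raise ValueError("missing required field: prompt")
    return {"prompt": fake_call(item["prompt"])}

response = handle_post('{"prompt": "hello"}')
```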
An example with GPT2:

```python
from pydantic import BaseModel

import modal

stub = modal.Stub("example-get-started")

volume = modal.SharedVolume().persist("gpt2_model_vol")
CACHE_PATH = "/root/model_cache"

@stub.function(
    gpu="any",
    image=modal.Image.debian_slim().pip_install(
        "tokenizers", "transformers", "torch", "accelerate"
    ),
    shared_volumes={CACHE_PATH: volume},
    retries=3,
)
def run_gpt2(text: str):
    from transformers import GPT2Tokenizer, GPT2LMHeadModel
    tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
    model = GPT2LMHeadModel.from_pretrained('gpt2')
    encoded_input = tokenizer(text, return_tensors='pt').input_ids
    output = model.generate(encoded_input, max_length=50, do_sample=True)
    return tokenizer.decode(output[0], skip_special_tokens=True)

class Item(BaseModel):
    prompt: str

@stub.webhook(method="POST")
def get_text(item: Item):
    return {"prompt": run_gpt2.call(item.prompt)}
```

## Wrappers

### LLM

There exists a Modal LLM wrapper, which you can access with
```python
from langchain.llms import Modal
```
# StochasticAI

This page covers how to use the StochasticAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific StochasticAI wrappers.

## Installation and Setup
- Install with `pip install stochasticx`
- Get a StochasticAI API key and set it as an environment variable (`STOCHASTICAI_API_KEY`)

## Wrappers

### LLM

There exists a StochasticAI LLM wrapper, which you can access with
```python
from langchain.llms import StochasticAI
```
# Writer

This page covers how to use the Writer ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Writer wrappers.

## Installation and Setup
- Get a Writer API key and set it as an environment variable (`WRITER_API_KEY`)

## Wrappers

### LLM

There exists a Writer LLM wrapper, which you can access with
```python
from langchain.llms import Writer
```
docs/modules/document_loaders/examples/example_data/facebook_chat.json (64 additions, 0 deletions)
```json
{
    "participants": [{"name": "User 1"}, {"name": "User 2"}],
    "messages": [
        {"sender_name": "User 2", "timestamp_ms": 1675597571851, "content": "Bye!"},
        {
            "sender_name": "User 1",
            "timestamp_ms": 1675597435669,
            "content": "Oh no worries! Bye"
        },
        {
            "sender_name": "User 2",
            "timestamp_ms": 1675596277579,
            "content": "No Im sorry it was my mistake, the blue one is not for sale"
        },
        {
            "sender_name": "User 1",
            "timestamp_ms": 1675595140251,
            "content": "I thought you were selling the blue one!"
        },
        {
            "sender_name": "User 1",
            "timestamp_ms": 1675595109305,
            "content": "Im not interested in this bag. Im interested in the blue one!"
        },
        {
            "sender_name": "User 2",
            "timestamp_ms": 1675595068468,
            "content": "Here is $129"
        },
        {
            "sender_name": "User 2",
            "timestamp_ms": 1675595060730,
            "photos": [
                {"uri": "url_of_some_picture.jpg", "creation_timestamp": 1675595059}
            ]
        },
        {
            "sender_name": "User 2",
            "timestamp_ms": 1675595045152,
            "content": "Online is at least $100"
        },
        {
            "sender_name": "User 1",
            "timestamp_ms": 1675594799696,
            "content": "How much do you want?"
        },
        {
            "sender_name": "User 2",
            "timestamp_ms": 1675577876645,
            "content": "Goodmorning! $50 is too low."
        },
        {
            "sender_name": "User 1",
            "timestamp_ms": 1675549022673,
            "content": "Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!"
        }
    ],
    "title": "User 1 and User 2 chat",
    "is_still_participant": true,
    "thread_path": "inbox/User 1 and User 2 chat",
    "magic_words": [],
    "image": {"uri": "image_of_the_chat.jpg", "creation_timestamp": 1675549016},
    "joinable_mode": {"mode": 1, "link": ""}
}
```
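As an illustration of how a loader might consume this file — a hypothetical sketch, not the actual `FacebookChatLoader` implementation — the snippet below sorts messages chronologically (the export lists newest first) and flattens them to `sender: content` lines:

```python
import json

def format_chat(raw: str) -> str:
    chat = json.loads(raw)
    # The export stores newest messages first; sort ascending by timestamp_ms
    messages = sorted(chat["messages"], key=lambda m: m["timestamp_ms"])
    return "\n".join(
        f"{m['sender_name']}: {m['content']}"
        for m in messages
        if "content" in m  # photo-only messages carry no 'content' key
    )

sample = json.dumps({"messages": [
    {"sender_name": "User 2", "timestamp_ms": 2, "content": "Bye!"},
    {"sender_name": "User 1", "timestamp_ms": 1, "content": "Hi!"},
]})
transcript = format_chat(sample)
```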