
Commit

Merge branch 'master' into feature/add-weaviate-certainty
* master: (28 commits)
  bump version to 0094 (langchain-ai#1280)
  feat: document loader for MS Word documents (langchain-ai#1282)
  cleanup (langchain-ai#1274)
  Harrison/cohere params (langchain-ai#1278)
  Harrison/logprobs (langchain-ai#1279)
  Harrison/fb loader (langchain-ai#1277)
  Harrison/errors (langchain-ai#1276)
  adding .ipynb loader and documentation Fixes langchain-ai#1248 (langchain-ai#1252)
  Harrison/source docs (langchain-ai#1275)
  Add Writer, Banana, Modal, StochasticAI (langchain-ai#1270)
  searx: add `query_suffix` parameter (langchain-ai#1259)
  fix bug with length function (langchain-ai#1257)
  docs: remove nltk download steps (langchain-ai#1253)
  added caching and properties docs (langchain-ai#1255)
  bump version to 0093 (langchain-ai#1251)
  Add DeepInfra LLM support (langchain-ai#1232)
  docs: add Graphsignal ecosystem page (langchain-ai#1228)
  fix to specific language transcript (langchain-ai#1231)
  add ifttt tool (langchain-ai#1244)
  Don't instruct LLM to use the LIMIT clause, which is incompatible with SQL Server (langchain-ai#1242)
  ...
mpuig committed Feb 25, 2023
2 parents 9be07ac + c5dd491 commit 95d2636
Showing 69 changed files with 2,846 additions and 46 deletions.
74 changes: 74 additions & 0 deletions docs/ecosystem/bananadev.md
@@ -0,0 +1,74 @@
# Banana

This page covers how to use the Banana ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Banana wrappers.

## Installation and Setup
- Install with `pip3 install banana-dev`
- Get a Banana API key and set it as an environment variable (`BANANA_API_KEY`)

## Define your Banana Template

If you want to use an available language model template, you can find one [here](https://app.banana.dev/templates/conceptofmind/serverless-template-palmyra-base).
This template uses the Palmyra-Base model by [Writer](https://writer.com/product/api/).
You can check out an example Banana repository [here](https://github.com/conceptofmind/serverless-template-palmyra-base).

## Build the Banana app

You must include an `output` key in the result. The response structure is rigid.
```python
# Return the results as a dictionary
result = {'output': result}
```

An example inference function would be:
```python
def inference(model_inputs: dict) -> dict:
    global model
    global tokenizer

    # Parse out your arguments
    prompt = model_inputs.get('prompt', None)
    if prompt is None:
        return {'message': "No prompt provided"}

    # Run the model
    input_ids = tokenizer.encode(prompt, return_tensors='pt').cuda()
    output = model.generate(
        input_ids,
        max_length=100,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        num_return_sequences=1,
        temperature=0.9,
        early_stopping=True,
        no_repeat_ngram_size=3,
        num_beams=5,
        length_penalty=1.5,
        repetition_penalty=1.5,
        bad_words_ids=[[tokenizer.encode(' ', add_prefix_space=True)[0]]]
    )

    result = tokenizer.decode(output[0], skip_special_tokens=True)
    # Return the results as a dictionary
    result = {'output': result}
    return result
```

You can find a full example of a Banana app [here](https://github.com/conceptofmind/serverless-template-palmyra-base/blob/main/app.py).


## Wrappers

### LLM

There exists a Banana LLM wrapper, which you can access with
```python
from langchain.llms import Banana
```

You need to provide a model key, which you can find in the dashboard:
```python
llm = Banana(model_key="YOUR_MODEL_KEY")
```
17 changes: 17 additions & 0 deletions docs/ecosystem/deepinfra.md
@@ -0,0 +1,17 @@
# DeepInfra

This page covers how to use the DeepInfra ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific DeepInfra wrappers.

## Installation and Setup
- Get your DeepInfra API key [here](https://deepinfra.com/) and set it as an environment variable (`DEEPINFRA_API_TOKEN`)

## Wrappers

### LLM

There exists a DeepInfra LLM wrapper, which you can access with
```python
from langchain.llms import DeepInfra
```
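For reference, a minimal usage sketch, assuming the wrapper accepts a `model_id` parameter (the model shown is illustrative):
```python
from langchain.llms import DeepInfra

# Assumes DEEPINFRA_API_TOKEN is set; the model_id value is illustrative
llm = DeepInfra(model_id="google/flan-t5-xl")
print(llm("What is the capital of France?"))
```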
38 changes: 38 additions & 0 deletions docs/ecosystem/graphsignal.md
@@ -0,0 +1,38 @@
# Graphsignal

This page covers how to use Graphsignal to trace and monitor LangChain.

## Installation and Setup

- Install the Python library with `pip install graphsignal`
- Create a free Graphsignal account [here](https://graphsignal.com)
- Get an API key and set it as an environment variable (`GRAPHSIGNAL_API_KEY`)

## Tracing and Monitoring

Graphsignal automatically instruments and starts tracing and monitoring chains. Traces, metrics and errors are then available in your [Graphsignal dashboard](https://app.graphsignal.com/). No prompts or other sensitive data are sent to Graphsignal cloud, only statistics and metadata.

Initialize the tracer by providing a deployment name:

```python
import graphsignal

graphsignal.configure(deployment='my-langchain-app-prod')
```

In order to trace full runs and see a breakdown by chains and tools, you can wrap the calling routine or use a decorator:

```python
with graphsignal.start_trace('my-chain'):
    chain.run("some initial text")
```

Optionally, enable profiling to record function-level statistics for each trace.

```python
with graphsignal.start_trace(
        'my-chain', options=graphsignal.TraceOptions(enable_profiling=True)):
    chain.run("some initial text")
```

See the [Quick Start](https://graphsignal.com/docs/guides/quick-start/) guide for complete setup instructions.
32 changes: 32 additions & 0 deletions docs/ecosystem/helicone.md
@@ -19,3 +19,35 @@ export OPENAI_API_BASE="https://oai.hconeai.com/v1"
Now head over to [helicone.ai](https://helicone.ai/onboarding?step=2) to create your account, and add your OpenAI API key within the dashboard to view your logs.

![Helicone](../_static/HeliconeKeys.png)

## How to enable Helicone caching

```python
from langchain.llms import OpenAI
import openai
openai.api_base = "https://oai.hconeai.com/v1"

llm = OpenAI(temperature=0.9, headers={"Helicone-Cache-Enabled": "true"})
text = "What is a helicone?"
print(llm(text))
```

[Helicone caching docs](https://docs.helicone.ai/advanced-usage/caching)

## How to use Helicone custom properties

```python
from langchain.llms import OpenAI
import openai
openai.api_base = "https://oai.hconeai.com/v1"

llm = OpenAI(temperature=0.9, headers={
    "Helicone-Property-Session": "24",
    "Helicone-Property-Conversation": "support_issue_2",
    "Helicone-Property-App": "mobile",
})
text = "What is a helicone?"
print(llm(text))
```

[Helicone property docs](https://docs.helicone.ai/advanced-usage/custom-properties)
66 changes: 66 additions & 0 deletions docs/ecosystem/modal.md
@@ -0,0 +1,66 @@
# Modal

This page covers how to use the Modal ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Modal wrappers.

## Installation and Setup
- Install with `pip install modal-client`
- Run `modal token new`

## Define your Modal Functions and Webhooks

You must include a `prompt` in the request body. The response structure is rigid.

```python
class Item(BaseModel):
    prompt: str

@stub.webhook(method="POST")
def my_webhook(item: Item):
    return {"prompt": my_function.call(item.prompt)}
```

An example with GPT2:

```python
from pydantic import BaseModel

import modal

stub = modal.Stub("example-get-started")

volume = modal.SharedVolume().persist("gpt2_model_vol")
CACHE_PATH = "/root/model_cache"

@stub.function(
    gpu="any",
    image=modal.Image.debian_slim().pip_install(
        "tokenizers", "transformers", "torch", "accelerate"
    ),
    shared_volumes={CACHE_PATH: volume},
    retries=3,
)
def run_gpt2(text: str):
    from transformers import GPT2Tokenizer, GPT2LMHeadModel
    tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
    model = GPT2LMHeadModel.from_pretrained('gpt2')
    encoded_input = tokenizer(text, return_tensors='pt').input_ids
    output = model.generate(encoded_input, max_length=50, do_sample=True)
    return tokenizer.decode(output[0], skip_special_tokens=True)

class Item(BaseModel):
    prompt: str

@stub.webhook(method="POST")
def get_text(item: Item):
    return {"prompt": run_gpt2.call(item.prompt)}
```

## Wrappers

### LLM

There exists a Modal LLM wrapper, which you can access with
```python
from langchain.llms import Modal
```
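A minimal usage sketch, assuming the wrapper takes the deployed webhook's URL via an `endpoint_url` parameter (the URL is a placeholder):
```python
from langchain.llms import Modal

# The endpoint URL is a placeholder for your deployed Modal webhook
llm = Modal(endpoint_url="https://YOUR-WORKSPACE--example-get-started-get-text.modal.run")
print(llm("What is the capital of France?"))
```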
4 changes: 2 additions & 2 deletions docs/ecosystem/petals.md
@@ -5,7 +5,7 @@ It is broken into two parts: installation and setup, and then references to spec

## Installation and Setup
- Install with `pip install petals`
- Get an Huggingface api key and set it as an environment variable (`HUGGINGFACE_API_KEY`)
- Get a Hugging Face api key and set it as an environment variable (`HUGGINGFACE_API_KEY`)

## Wrappers

@@ -14,4 +14,4 @@ It is broken into two parts: installation and setup, and then references to spec
There exists a Petals LLM wrapper, which you can access with
```python
from langchain.llms import Petals
```
```
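For context, a minimal usage sketch, assuming the wrapper accepts a `model_name` parameter (the model shown is illustrative):
```python
from langchain.llms import Petals

# Assumes HUGGINGFACE_API_KEY is set; the model_name value is illustrative
llm = Petals(model_name="bigscience/bloom-petals")
print(llm("What is the capital of France?"))
```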
17 changes: 17 additions & 0 deletions docs/ecosystem/stochasticai.md
@@ -0,0 +1,17 @@
# StochasticAI

This page covers how to use the StochasticAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific StochasticAI wrappers.

## Installation and Setup
- Install with `pip install stochasticx`
- Get a StochasticAI API key and set it as an environment variable (`STOCHASTICAI_API_KEY`)

## Wrappers

### LLM

There exists a StochasticAI LLM wrapper, which you can access with
```python
from langchain.llms import StochasticAI
```
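A minimal usage sketch, assuming the wrapper takes the deployed model's inference URL via an `api_url` parameter (the URL is a placeholder):
```python
from langchain.llms import StochasticAI

# Assumes STOCHASTICAI_API_KEY is set; the api_url is a placeholder
llm = StochasticAI(api_url="https://your-deployed-model-url")
print(llm("What is the capital of France?"))
```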
4 changes: 0 additions & 4 deletions docs/ecosystem/unstructured.md
@@ -17,10 +17,6 @@ This page is broken into two parts: installation and setup, and then references
- `poppler-utils`
- `tesseract-ocr`
- `libreoffice`
- Run the following to install NLTK dependencies. `unstructured` will handle this automatically
soon.
- `python -c "import nltk; nltk.download('punkt')"`
- `python -c "import nltk; nltk.download('averaged_perceptron_tagger')"`
- If you are parsing PDFs, run the following to install the `detectron2` model, which
`unstructured` uses for layout detection:
- `pip install "detectron2@git+https://github.com/facebookresearch/detectron2.git@v0.6#egg=detectron2"`
16 changes: 16 additions & 0 deletions docs/ecosystem/writer.md
@@ -0,0 +1,16 @@
# Writer

This page covers how to use the Writer ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Writer wrappers.

## Installation and Setup
- Get a Writer API key and set it as an environment variable (`WRITER_API_KEY`)

## Wrappers

### LLM

There exists a Writer LLM wrapper, which you can access with
```python
from langchain.llms import Writer
```
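A minimal usage sketch, assuming the environment variable above is sufficient to authenticate (any model settings are left at their defaults):
```python
from langchain.llms import Writer

# Assumes WRITER_API_KEY is set; default model settings are used
llm = Writer()
print(llm("What is the capital of France?"))
```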
2 changes: 1 addition & 1 deletion docs/modules/chains/key_concepts.md
@@ -6,6 +6,6 @@ They vary greatly in complexity and are combination of generic, highly configura

## Sequential Chain
This is a specific type of chain where multiple other chains are run in sequence, with the outputs being added as inputs
to the next. A subtype of this type of chain is the `SimpleSequentialChain`, where all subchains have only one input and one output,
to the next. A subtype of this type of chain is the [`SimpleSequentialChain`](./generic/sequential_chains.html#simplesequentialchain), where all subchains have only one input and one output,
and the output of one is therefore used as sole input to the next chain.
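To make this concrete, a minimal sketch of a `SimpleSequentialChain` (the prompts and model choice are illustrative):
```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0.7)

# First chain: one input ("product"), one output (a company name)
name_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    ),
)

# Second chain: takes the company name and produces a slogan
slogan_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["company_name"],
        template="Write a catchy slogan for a company called {company_name}.",
    ),
)

# The sole output of each subchain feeds the sole input of the next
overall_chain = SimpleSequentialChain(chains=[name_chain, slogan_chain])
print(overall_chain.run("colorful socks"))
```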

@@ -0,0 +1,64 @@
{
"participants": [{"name": "User 1"}, {"name": "User 2"}],
"messages": [
{"sender_name": "User 2", "timestamp_ms": 1675597571851, "content": "Bye!"},
{
"sender_name": "User 1",
"timestamp_ms": 1675597435669,
"content": "Oh no worries! Bye",
},
{
"sender_name": "User 2",
"timestamp_ms": 1675596277579,
"content": "No Im sorry it was my mistake, the blue one is not for sale",
},
{
"sender_name": "User 1",
"timestamp_ms": 1675595140251,
"content": "I thought you were selling the blue one!",
},
{
"sender_name": "User 1",
"timestamp_ms": 1675595109305,
"content": "Im not interested in this bag. Im interested in the blue one!",
},
{
"sender_name": "User 2",
"timestamp_ms": 1675595068468,
"content": "Here is $129",
},
{
"sender_name": "User 2",
"timestamp_ms": 1675595060730,
"photos": [
{"uri": "url_of_some_picture.jpg", "creation_timestamp": 1675595059}
],
},
{
"sender_name": "User 2",
"timestamp_ms": 1675595045152,
"content": "Online is at least $100",
},
{
"sender_name": "User 1",
"timestamp_ms": 1675594799696,
"content": "How much do you want?",
},
{
"sender_name": "User 2",
"timestamp_ms": 1675577876645,
"content": "Goodmorning! $50 is too low.",
},
{
"sender_name": "User 1",
"timestamp_ms": 1675549022673,
"content": "Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!",
},
],
"title": "User 1 and User 2 chat",
"is_still_participant": true,
"thread_path": "inbox/User 1 and User 2 chat",
"magic_words": [],
"image": {"uri": "image_of_the_chat.jpg", "creation_timestamp": 1675549016},
"joinable_mode": {"mode": 1, "link": ""},
}
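This fixture mirrors a Facebook Messenger JSON export, presumably test data for the Facebook chat loader added in langchain-ai#1277. A minimal loading sketch, assuming the loader is exposed as `FacebookChatLoader` with a `path` argument (the file path is illustrative):
```python
from langchain.document_loaders import FacebookChatLoader

# The path is illustrative; point it at a Messenger JSON export
loader = FacebookChatLoader(path="facebook_chat.json")
docs = loader.load()
print(docs[0].page_content[:200])
```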