
Issue: how to configure Amazon Bedrock #10182

Closed

shenshaoyong opened this issue Sep 4, 2023 · 3 comments
Labels
🤖:docs Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder Ɑ: models Related to LLMs or chat model modules 🤖:question A specific question about the codebase, product, project, or how to use a feature

Comments

@shenshaoyong

Issue you'd like to raise.

How to configure Amazon Bedrock, step by step:

#1. Open a terminal and install the Bedrock-specific boto3 SDK build and LangChain:
curl -sS https://d2eo22ngex1n9g.cloudfront.net/Documentation/SDK/bedrock-python-sdk.zip > sdk.zip
sudo yum install unzip -y
unzip sdk.zip -d sdk
pip install --no-build-isolation --force-reinstall ./sdk/awscli-*-py3-none-any.whl ./sdk/boto3-*-py3-none-any.whl ./sdk/botocore-*-py3-none-any.whl
pip install --quiet langchain==0.0.249
# pip install 'jupyter-ai>=1.0,<2.0'  # if you use JupyterLab 3
# pip install jupyter-ai              # if you use JupyterLab 4
#2. Change the default max token count to 2048:
vi ~/anaconda3/lib/python3.11/site-packages/langchain/llms/bedrock.py
# change this line:
input_body["max_tokens_to_sample"] = 2048
#3. Configure the AWS default profile; make sure the access key / secret key has sufficient permissions (BedrockFullAccess):
aws configure
#4. Run %%ai in a *.ipynb file on an EC2 instance or local machine (it also works in VS Code), instead of a SageMaker notebook instance / SageMaker Studio:
%load_ext jupyter_ai
%%ai bedrock:anthropic.claude-v2
Write something about Amazon
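As an alternative to patching bedrock.py inside site-packages (step 2), the Anthropic Claude models on Bedrock accept max_tokens_to_sample directly in the request body, so the limit can be set per request. Below is a minimal standard-library sketch of that request body; the field names follow the line patched above, and the build_claude_body helper is a hypothetical name for illustration, not part of any SDK:

```python
import json

def build_claude_body(prompt, max_tokens=2048, temperature=0.1):
    """Build the JSON request body for an Anthropic Claude model on Bedrock.

    max_tokens_to_sample is the same field that step 2 patches inside
    langchain's bedrock.py; setting it here avoids editing site-packages.
    """
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
        "temperature": temperature,
    })

body = build_claude_body("Write something about Amazon")
print(json.loads(body)["max_tokens_to_sample"])  # 2048
```

A body like this is what ultimately goes to the model behind the %%ai magic; building it explicitly makes the token limit visible instead of hidden in a patched library file.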

Suggestion:

No response

@dosubot dosubot bot added Ɑ: models Related to LLMs or chat model modules 🤖:docs Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder 🤖:question A specific question about the codebase, product, project, or how to use a feature labels Sep 4, 2023
@dosubot

dosubot bot commented Sep 4, 2023

🤖

Hello,

Thank you for your question. Here's a step-by-step guide on how to configure Amazon Bedrock with the LangChain Python framework:

  1. Install the boto3 library, the Amazon Web Services (AWS) SDK for Python. It lets Python programs use services such as Amazon S3, Amazon EC2, and Amazon Bedrock. You can install it with pip:
pip install boto3
  2. Import the Bedrock class from the langchain.llms.bedrock module:
from langchain.llms.bedrock import Bedrock
  3. Create an instance of the Bedrock class. You need to provide the model_id parameter and, optionally, model_kwargs for model-specific parameters. The model_id is the ID of the model you want to use from Amazon Bedrock:
llm = Bedrock(model_id="anthropic.claude-v2", model_kwargs={"temperature": 0.1})
  4. You can now call the instance with a prompt string (Bedrock is an LLM class, so it takes plain text rather than a list of chat messages):
llm("Translate this sentence from English to French. I love programming.")

In the context shared, the boto3 library in the LangChain framework is used to interact with Amazon Bedrock, a fully managed service that makes language models available via an API. The boto3 library is used in the BedrockEmbeddings class to authenticate and interact with the Bedrock service.

The boto3 library is installed and then used to create a BedrockEmbeddings object with the necessary credentials, region, and model ID:

from langchain.embeddings.bedrock import BedrockEmbeddings

be = BedrockEmbeddings(
    credentials_profile_name="default",
    region_name="us-east-1",
    model_id="amazon.titan-e1t-medium"
)

This BedrockEmbeddings object is then used to compute document and query embeddings using a Bedrock model:

embeddings = be.embed_documents(["This is a test document."])
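The embeddings returned are plain lists of floats, one vector per document, and a query embedding from embed_query can be compared against them with ordinary cosine similarity. Here is a standard-library sketch of that comparison, assuming the vectors have already been computed; the sample vectors below are made up for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up vectors standing in for be.embed_documents(...) / be.embed_query(...).
doc_vec = [0.1, 0.3, 0.5]
query_vec = [0.1, 0.3, 0.5]
print(round(cosine_similarity(doc_vec, query_vec), 3))  # 1.0
```

With real Bedrock embeddings you would rank documents by this score against the query vector.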

So, in addition to interacting with AWS Lambda and Amazon S3, the boto3 library in the LangChain framework also provides the functionality to interact with Amazon Bedrock, allowing the framework to use language models provided by the Bedrock service for computing embeddings.

Please let me know if you have any other questions or need further clarification.

Best,
Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

@ctodd

ctodd commented Sep 6, 2023

LangChain 0.0.281 (released on 9/4/23) just added additional Bedrock support, namely BedrockChat. Not sure why you are using 0.0.249, but I would recommend trying the latest version. Please note that streaming is not yet supported, and API responses may be truncated as a result; this is just an observation, though, and I could be wrong about it.
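For reference, a minimal sketch of that newer BedrockChat interface, assuming langchain>=0.0.281 with BedrockChat available and AWS credentials/region already configured (untested here; the model_kwargs mirror the Claude settings discussed earlier in this thread):

```python
from langchain.chat_models import BedrockChat
from langchain.schema import HumanMessage

# Assumes AWS credentials are set up, e.g. via `aws configure`.
chat = BedrockChat(
    model_id="anthropic.claude-v2",
    model_kwargs={"temperature": 0.1, "max_tokens_to_sample": 2048},
)
reply = chat([HumanMessage(content="Write something about Amazon")])
print(reply.content)
```

Unlike the Bedrock LLM class, BedrockChat takes a list of chat messages and returns a message object.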


dosubot bot commented Dec 6, 2023

Hi, @shenshaoyong,

I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, I provided a detailed guide on configuring Amazon Bedrock with the LangChain Python framework, including the installation of the boto3 library and its usage in the LangChain framework. Additionally, ctodd mentioned that Langchain 0.0.281 added additional Bedrock support, specifically BedrockChat, and recommended using the latest version.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your understanding and cooperation.

@dosubot dosubot bot added the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Dec 6, 2023
@dosubot dosubot bot closed this as not planned Won't fix, can't repro, duplicate, stale Dec 13, 2023
@dosubot dosubot bot removed the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Dec 13, 2023