Amazon Bedrock: The simplest way to build Generative AI-powered apps with foundation models
Published On: October 5th, 2023
In this blog post, we’ll delve into the integration of Amazon Bedrock with Lambda for a wide range of business or end-user applications. However, before we dive into the details, let’s first take a closer look at what Bedrock entails.
Amazon Bedrock is a fully managed service that makes it easy to build and scale generative AI applications. It offers a choice of high-performing foundation models from leading AI companies, as well as a broad set of capabilities for customising and deploying generative AI models. Amazon Bedrock is serverless, so you don’t have to manage any infrastructure.
What are some of the key features of Amazon Bedrock?
Bedrock offers a number of key features, including:
Access to a variety of foundation models: Bedrock provides access to high-performing foundation models from leading AI companies, including Amazon, AI21 Labs, Anthropic, Cohere, Meta, and Stability AI.
Easy model customisation: Bedrock makes it easy to customise foundation models with your own data without writing any code.
Fully managed agents: Bedrock provides fully managed agents that can execute complex business tasks—such as booking travel, processing insurance claims, creating ad campaigns, and managing inventory—without writing any code.
Data security and compliance certifications: Bedrock is a secure and compliant service and has achieved HIPAA eligibility and GDPR compliance. With Amazon Bedrock, your content is not used to improve the base models and is not shared with third-party model providers. Your data in Amazon Bedrock is always encrypted in transit and at rest, and you can encrypt the data using your own keys.
Native support for RAG: Bedrock provides native support for retrieval-augmented generation (RAG), which allows you to extend the power of FMs with your own proprietary data.
Here are some examples of how Amazon Bedrock can be used without any code:
Generate text: You can use the Bedrock console to generate text based on a prompt. For example, you could generate a product description, marketing copy, or a creative story.
Answer questions: You can use the Bedrock console to answer questions in an informative way. For example, you could build a chatbot that can answer customer questions about your products or services.
Image generation: You can use the Bedrock console to generate images from a text description. For example, you could provide a description to generate imaginative art.
At the time of writing this blog, Bedrock is only available in US West (Oregon), Asia Pacific (Tokyo, Singapore), and US East (N. Virginia, Ohio).
Initially, Bedrock does not grant access to any foundation models; access must be requested manually in the Bedrock console.
Depending on the third-party model provider, granting access to a foundation model can take anywhere from a few minutes to a few hours.
Amazon Bedrock provides examples for its various foundation models, along with a playground in the AWS console for quick experimentation. The playground lets you run example inferences; each foundation model has its own inference configuration, and Amazon supplies an example prompt with default settings for experimentation. The playground shows the result for a given prompt and can also display the API request that produced it.
Integrating AWS Bedrock with Lambda
Python API support for Bedrock is available from boto3 >= 1.28.57. At the time of writing this blog, the latest Lambda Python runtime ships with boto3 == 1.27.1, so until AWS updates the bundled version, we need to make boto3 >= 1.28.57 available in the runtime before we can invoke Bedrock. AWS has extensive documentation on installing Python packages and building layers for Lambda. We've also written a CloudFormation script that installs the latest boto3 package in Lambda and deploys a sample application for inference; you can find it on our GitHub.
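Before invoking Bedrock, it is worth guarding against a runtime that still carries the older bundled boto3. Below is a minimal, illustrative version check you could drop into the handler module; the threshold of 1.28.57 comes from the requirement above, and the helper name is our own:

```python
def supports_bedrock(version: str) -> bool:
    """True when a boto3 version string is new enough for the
    'bedrock-runtime' client (first available in 1.28.57)."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= (1, 28, 57)

# At the top of your handler module you might guard with:
# import boto3
# assert supports_bedrock(boto3.__version__), \
#     "attach a Lambda layer providing boto3 >= 1.28.57"
```

If the assertion fires, the function fails fast at cold start with a clear message instead of a confusing "unknown service" error from boto3.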
Bedrock offers various foundation models, and each model can have a different inference configuration. Our sample Lambda code (located at the end of this blog) accepts inference configuration suitable for the jurassic2-ultra-v1 model, such as temperature, topP, max_completion_length, and stop_sequences. The Lambda code applies default values if no inference configuration is provided.
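A minimal sketch of such a handler is shown below. The model id, request-body parameter names, and default values here are assumptions based on AI21's Jurassic-2 interface on Bedrock, not the exact code from our repository:

```python
import json

# Defaults applied when the caller supplies no inference configuration;
# the values are illustrative, not the blog's exact choices.
DEFAULTS = {"maxTokens": 512, "temperature": 0.7, "topP": 1.0, "stopSequences": []}

def build_request_body(event: dict) -> str:
    """Merge any caller-supplied inference settings over the defaults."""
    params = {**DEFAULTS, **{k: event[k] for k in DEFAULTS if k in event}}
    return json.dumps({"prompt": event["prompt"], **params})

def lambda_handler(event, context):
    # boto3 >= 1.28.57 must be available in the runtime (e.g. via a layer).
    import boto3
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="ai21.j2-ultra-v1",
        contentType="application/json",
        accept="application/json",
        body=build_request_body(event),
    )
    payload = json.loads(response["body"].read())
    # Jurassic-2 returns its generations under "completions".
    return {"completion": payload["completions"][0]["data"]["text"]}
```

Keeping the body construction in a separate pure function makes the default-merging logic easy to unit test without calling AWS.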
In our previous blog, we fine-tuned a GPT-J 6B text generation model from AWS JumpStart on Herman Melville's "Moby Dick". In this blog, we extracted the text of Chapter 1 from the same novel and passed it to the jurassic2-ultra-v1 model offered by AI21, asking it to summarise the whole chapter in 5 bullet points. Below is the response we got from the model:
Ishmael, the narrator, begins by explaining his reasoning for going to sea. He feels that it is a way to regulate his mood, especially when he finds himself growing grim about the mouth or feeling melancholy.
He describes the city of Manhattan, surrounded by water, and the way that people are drawn to the water, even to the edges of the city.
He acknowledges that most people have a connection to the ocean, and that many are drawn to it.
He describes the connection between water and meditation, and the way that water can be a source of magic and reverie.
He explains why he chooses to sail as a lowly sailor rather than as a passenger, officer, or cook, and the way that he feels connected to the sea and his fellow sailors.
The above summary is an accurate representation of the provided input. If you are interested in building generative AI applications, check out Amazon Bedrock. It is an easy-to-use service that provides access to a variety of powerful models.
We have incredibly skilled ML-specialised consultants who can help you implement ML solutions from start to finish. Talk to one of our experts to see how we can assist your organisation on its ML journey.