To specify a foundation model in Amazon Bedrock, you include the `modelId` parameter in your API request. This parameter identifies the exact model version you want to use, such as `anthropic.claude-v2` for Claude 2 or `amazon.titan-text-express-v1` for Amazon Titan. The model ID ensures Bedrock routes your request to the correct model and processes inputs and outputs in the expected format. Each model provider (e.g., Anthropic, AI21 Labs, Amazon) publishes unique model IDs, which you can find in Bedrock's documentation or the AWS console.
When using the Bedrock API or SDK, you pass the `modelId` directly in the request structure. For example, in Python with the AWS SDK (Boto3), you would structure a request like this:
```python
import json
import boto3

# Create a Bedrock runtime client
client = boto3.client('bedrock-runtime')

# Claude 2's text-completion API expects prompts wrapped in
# "\n\nHuman: ...\n\nAssistant:" turns
response = client.invoke_model(
    modelId='anthropic.claude-v2',
    body=json.dumps({
        'prompt': '\n\nHuman: Hello, world!\n\nAssistant:',
        'max_tokens_to_sample': 200
    }),
    contentType='application/json'
)
```
Here, `modelId` explicitly selects Claude 2. If you wanted to use Amazon Titan instead, you'd replace it with `amazon.titan-text-express-v1` and adjust the request body format to match Titan's requirements (e.g., `inputText` instead of `prompt`). The `modelId` acts as a switch determining which model processes your input, so using the correct ID and the corresponding input schema is critical.
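For comparison, here is a minimal sketch of the same call targeting Titan Text Express. The `inputText` and `textGenerationConfig` field names follow Titan's text-generation request schema; verify them against the current Bedrock model documentation before relying on this exact shape:

```python
import json
import boto3

client = boto3.client('bedrock-runtime')

# Titan Text models take 'inputText' plus an optional 'textGenerationConfig'
response = client.invoke_model(
    modelId='amazon.titan-text-express-v1',
    body=json.dumps({
        'inputText': 'Hello, world!',
        'textGenerationConfig': {'maxTokenCount': 200}
    }),
    contentType='application/json'
)

# Titan returns a 'results' list; each entry carries 'outputText'
result = json.loads(response['body'].read())
print(result['results'][0]['outputText'])
```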
To avoid errors, always verify the following:
- Model availability: Check AWS documentation for supported model IDs in your region (you can also list them programmatically, as shown after this list).
- Input format: Each model expects specific parameters (e.g., Claude uses `prompt`, Titan uses `inputText`).
- Permissions: Ensure your AWS role has `bedrock:InvokeModel` access for the target `modelId`.
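For the availability check, the Bedrock control-plane API can enumerate the model IDs visible to your account in the current region. A small sketch using Boto3's `bedrock` client (note this is a separate client from `bedrock-runtime`):

```python
import boto3

# The 'bedrock' control-plane client lists models; 'bedrock-runtime' invokes them
bedrock = boto3.client('bedrock')

for summary in bedrock.list_foundation_models()['modelSummaries']:
    print(summary['modelId'])
```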
Using the wrong `modelId` or a mismatched input format will result in API errors. Tools like the Bedrock Playground in the AWS console let you test model IDs and input structures interactively before coding.
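If you want to handle those failures gracefully in code, here is a sketch of catching them with botocore's standard error model; the exception names in the comment are the ones Bedrock commonly returns for these cases, but treat them as assumptions to confirm against the API reference:

```python
import json
import boto3
from botocore.exceptions import ClientError

client = boto3.client('bedrock-runtime')

try:
    client.invoke_model(
        modelId='anthropic.claude-v2',
        body=json.dumps({'inputText': 'Hello'}),  # wrong schema for Claude
        contentType='application/json'
    )
except ClientError as err:
    # A mismatched body usually surfaces as ValidationException, an unknown
    # model ID as ResourceNotFoundException, and missing IAM permissions as
    # AccessDeniedException
    print(err.response['Error']['Code'], err.response['Error']['Message'])
```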