
Commit 747524b

Browse files
dev-jonghoonpark authored and ilayaperumalg committed

update anthropic model version in example code

Signed-off-by: jonghoonpark <[email protected]>

1 parent: 2394ac8

File tree: 1 file changed (+2, −2 lines)

Diff for: spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/anthropic-chat.adoc (+2, −2)
@@ -102,7 +102,7 @@ The prefix `spring.ai.anthropic.chat` is the property prefix that lets you confi
 | Property | Description | Default

 | spring.ai.anthropic.chat.enabled | Enable Anthropic chat model. | true
-| spring.ai.anthropic.chat.options.model | This is the Anthropic Chat model to use. Supports: `claude-3-7-sonnet-latest`, `claude-3-5-sonnet-latest`, `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `claude-3-haiku-20240307` and the legacy `claude-2.1`, `claude-2.0` and `claude-instant-1.2` models. | `claude-3-7-sonnet-latest`
+| spring.ai.anthropic.chat.options.model | This is the Anthropic Chat model to use. Supports: `claude-3-7-sonnet-latest`, `claude-3-5-sonnet-latest`, `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `claude-3-haiku-20240307` | `claude-3-7-sonnet-latest`
 | spring.ai.anthropic.chat.options.temperature | The sampling temperature to use that controls the apparent creativity of generated completions. Higher values will make output more random while lower values will make results more focused and deterministic. It is not recommended to modify temperature and top_p for the same completions request as the interaction of these two settings is difficult to predict. | 0.8
 | spring.ai.anthropic.chat.options.max-tokens | The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. | 500
 | spring.ai.anthropic.chat.options.stop-sequence | Custom text sequences that will cause the model to stop generating. Our models will normally stop when they have naturally completed their turn, which will result in a response stop_reason of "end_turn". If you want the model to stop generating when it encounters custom strings of text, you can use the stop_sequences parameter. If the model encounters one of the custom sequences, the response stop_reason value will be "stop_sequence" and the response stop_sequence value will contain the matched stop sequence. | -
@@ -134,7 +134,7 @@ ChatResponse response = chatModel.call(
     new Prompt(
         "Generate the names of 5 famous pirates.",
         AnthropicChatOptions.builder()
-            .model("claude-2.1")
+            .model("claude-3-7-sonnet-latest")
             .temperature(0.4)
             .build()
     ));
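The properties documented in the first hunk can also be set declaratively instead of via the `AnthropicChatOptions` builder. A minimal `application.properties` sketch, assuming a standard Spring Boot setup with the Spring AI Anthropic starter (the API-key value is a placeholder environment variable, and the option values are illustrative, mirroring the diff):

```
# Hypothetical application.properties sketch; values are illustrative.
spring.ai.anthropic.api-key=${ANTHROPIC_API_KEY}
spring.ai.anthropic.chat.options.model=claude-3-7-sonnet-latest
spring.ai.anthropic.chat.options.temperature=0.4
spring.ai.anthropic.chat.options.max-tokens=500
```

Options passed to the builder at call time override these property defaults on a per-request basis.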
