feat(anthropic): update to Claude 3.7 Sonnet and refactor API

- Add support for the Claude 3.7 Sonnet model and make it the default
- Rename function-related APIs to tool-related APIs for consistency:
  - Change functionCallbacks to toolCallbacks
  - Change functions to toolNames
  - Replace FunctionCallingOptions with ToolCallingChatOptions
- Refactor AnthropicChatModel instantiation to use the builder pattern
- Update tests to use latest model versions instead of dated versions

Signed-off-by: Christian Tzolov <[email protected]>
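As a rough illustration of the rename, here is a minimal sketch of the new tool-calling surface, assuming Spring AI's `ToolCallingChatOptions` builder and the new `AnthropicChatModel` builder introduced by this change; the exact builder method names and the `weatherFunction` tool bean name are assumptions for illustration, not taken from this diff:

```java
import org.springframework.ai.anthropic.AnthropicChatModel;
import org.springframework.ai.anthropic.api.AnthropicApi;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.model.tool.ToolCallingChatOptions;

public class ToolCallingMigrationSketch {

    public static void main(String[] args) {
        // New: build the chat model via its builder instead of a constructor.
        AnthropicChatModel chatModel = AnthropicChatModel.builder()
            .anthropicApi(new AnthropicApi(System.getenv("ANTHROPIC_API_KEY")))
            .build();

        // New: tool-centric options replace the function-centric ones
        // (functionCallbacks -> toolCallbacks, functions -> toolNames,
        //  FunctionCallingOptions -> ToolCallingChatOptions).
        ToolCallingChatOptions options = ToolCallingChatOptions.builder()
            .toolNames("weatherFunction") // hypothetical tool bean name
            .build();

        System.out.println(chatModel.call(
            new Prompt("What's the weather like in San Francisco?", options)));
    }
}
```

The documentation table for the model property changes accordingly: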
- | spring.ai.anthropic.chat.options.model | This is the Anthropic Chat model to use. Supports: `claude-3-5-sonnet-20241022`, `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `claude-3-haiku-20240307` and the legacy `claude-2.1`, `claude-2.0` and `claude-instant-1.2` models. | `claude-3-opus-20240229`
+ | spring.ai.anthropic.chat.options.model | This is the Anthropic Chat model to use. Supports: `claude-3-7-sonnet-latest`, `claude-3-5-sonnet-latest`, `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `claude-3-haiku-20240307` and the legacy `claude-2.1`, `claude-2.0` and `claude-instant-1.2` models. | `claude-3-7-sonnet-latest`
| spring.ai.anthropic.chat.options.temperature | The sampling temperature to use that controls the apparent creativity of generated completions. Higher values will make output more random while lower values will make results more focused and deterministic. It is not recommended to modify temperature and top_p for the same completions request as the interaction of these two settings is difficult to predict. | 0.8
| spring.ai.anthropic.chat.options.max-tokens | The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. | 500
| spring.ai.anthropic.chat.options.stop-sequence | Custom text sequences that will cause the model to stop generating. Our models will normally stop when they have naturally completed their turn, which will result in a response stop_reason of "end_turn". If you want the model to stop generating when it encounters custom strings of text, you can use the stop_sequences parameter. If the model encounters one of the custom sequences, the response stop_reason value will be "stop_sequence" and the response stop_sequence value will contain the matched stop sequence. | -
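These options map onto standard Spring Boot configuration. A minimal `application.properties` sketch mirroring the documented keys and defaults; the `api-key` entry and the stop-sequence value are assumptions added for completeness, not part of this diff:

```properties
# Anthropic connection and chat defaults (values mirror the documented defaults).
spring.ai.anthropic.api-key=${ANTHROPIC_API_KEY}
spring.ai.anthropic.chat.options.model=claude-3-7-sonnet-latest
spring.ai.anthropic.chat.options.temperature=0.8
spring.ai.anthropic.chat.options.max-tokens=500
# Illustrative custom stop sequence; omit it to let the model stop at end_turn.
spring.ai.anthropic.chat.options.stop-sequence=END_OF_ANSWER
```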
spring-ai-spring-boot-autoconfigure/src/test/java/org/springframework/ai/autoconfigure/anthropic/tool/FunctionCallWithFunctionBeanIT.java

spring-ai-spring-boot-autoconfigure/src/test/java/org/springframework/ai/autoconfigure/anthropic/tool/FunctionCallWithPromptFunctionIT.java (+1 -1)
@@ -58,7 +58,7 @@ void functionCallTest() {
"What's the weather like in San Francisco, in Paris and in Tokyo? Return the temperature in Celsius.");