* Reflect the cost of cache reads/writes in the price calculation.
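
  A minimal sketch of how cache read and write tokens could feed into the price calculation; the function name and the per-1K-token rates below are placeholders, not the repository's actual pricing table.

  ```python
  # Placeholder per-1K-token prices (USD); not actual Bedrock pricing.
  PRICES_PER_1K = {
      "input": 0.003,          # regular input tokens
      "output": 0.015,         # output tokens
      "cache_read": 0.0003,    # tokens read from the prompt cache (discounted)
      "cache_write": 0.00375,  # tokens written to the prompt cache (surcharged)
  }

  def calculate_price(
      input_tokens: int,
      output_tokens: int,
      cache_read_tokens: int = 0,
      cache_write_tokens: int = 0,
  ) -> float:
      """Estimate the cost of one invocation, including cache read/write tokens."""
      return (
          input_tokens / 1000 * PRICES_PER_1K["input"]
          + output_tokens / 1000 * PRICES_PER_1K["output"]
          + cache_read_tokens / 1000 * PRICES_PER_1K["cache_read"]
          + cache_write_tokens / 1000 * PRICES_PER_1K["cache_write"]
      )
  ```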
* Implement prompt caching.
- Backend
- Add a `use_prompt_caching` field to the custom bot models and schemas.
- `BotModel`
- `BotInput`
- `BotOutput`
- `BotModifyInput`
- `BotModifyOutput`
- Add a column `UsePromptCaching` to DynamoDB table `BotTableV3`.
- Use prompt caching if enabled and the model supports it (see the Converse API sketch after this list).
- Frontend
- Add a `usePromptCaching` field to the custom bot schemas.
- `BotDetails`
- `RegisterBotRequest`
- `RegisterBotResponse`
- `UpdateBotRequest`
- `UpdateBotResponse`
- Add a 'Prompt Caching' section to `BotKbEditPage`.
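
  A rough sketch of the backend behavior, assuming the Bedrock Converse API's `cachePoint` content block; the function name and the `PROMPT_CACHING_SUPPORTED_MODELS` set are hypothetical, so check the Bedrock documentation for the authoritative model list.

  ```python
  import boto3

  # Hypothetical allow-list of models that support prompt caching.
  PROMPT_CACHING_SUPPORTED_MODELS = {
      "anthropic.claude-3-5-sonnet-20241022-v2:0",
      "anthropic.claude-3-5-haiku-20241022-v1:0",
  }

  client = boto3.client("bedrock-runtime")

  def converse_with_optional_caching(
      model_id: str,
      system_prompt: str,
      messages: list[dict],
      prompt_caching_enabled: bool,
  ) -> dict:
      """Call the Converse API, inserting a cache point only when the bot has
      prompt caching enabled and the model supports it."""
      system: list[dict] = [{"text": system_prompt}]
      if prompt_caching_enabled and model_id in PROMPT_CACHING_SUPPORTED_MODELS:
          # Content before the cache point (here, the system prompt) becomes
          # reusable across subsequent requests.
          system.append({"cachePoint": {"type": "default"}})
      return client.converse(modelId=model_id, system=system, messages=messages)
  ```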
* [Debug] Print token count and price when `STREAMING_END` is received.
* Reformat the modified Python code.
- `backend/app/config.py`
- `backend/app/bedrock.py`
- `backend/app/repositories/custom_bot.py`
- `backend/app/usecases/chat.py`
* Use `ExpandableDrawerGroup` for prompt caching settings.
* Revert "[Debug] Print token count and price when `STREAMING_END` is received."
This reverts commit ba5e584.
* Refactor data structure of agent settings.
* Rename `usePromptCaching` to `promptCachingEnabled`.
- `use_prompt_caching` -> `prompt_caching_enabled`
- Change `BotModel.prompt_caching_enabled` to a non-nullable type.
* Change `BotModifyInput.prompt_caching_enabled` to a non-nullable type.
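
  A sketch of what the renamed, non-nullable field could look like on the Pydantic schemas; only the new field is shown, and the `False` default is an assumption about how existing bots without the stored attribute are handled.

  ```python
  from pydantic import BaseModel

  class BotModel(BaseModel):
      # ...other fields omitted...
      # Non-nullable: plain `bool` instead of `Optional[bool]`, with an assumed
      # default so bots created before this change still deserialize.
      prompt_caching_enabled: bool = False

  class BotModifyInput(BaseModel):
      # ...other fields omitted...
      prompt_caching_enabled: bool = False
  ```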
* Add `token_count` and `price` to the payload of `STREAMING_END` notification.
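
  A sketch of the extended `STREAMING_END` payload; apart from `token_count` and `price`, the field names and message shape are assumptions, not the repository's actual WebSocket protocol.

  ```python
  import json

  def build_streaming_end_payload(
      conversation_id: str,
      input_tokens: int,
      output_tokens: int,
      price: float,
  ) -> str:
      """Serialize a STREAMING_END message carrying token count and price."""
      return json.dumps(
          {
              "status": "STREAMING_END",
              "conversationId": conversation_id,  # assumed existing field
              "token_count": input_tokens + output_tokens,
              "price": price,
          }
      )
  ```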
* NOTE: Some models don't support tool use; see https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference-supported-models-features.html.
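
  That note suggests gating tool configuration on model support; a minimal sketch under the assumption of a hypothetical `TOOL_USE_SUPPORTED_MODELS` set (consult the linked documentation for the real list).

  ```python
  # Hypothetical allow-list; see the linked documentation for the
  # authoritative set of models that support tool use.
  TOOL_USE_SUPPORTED_MODELS = {
      "anthropic.claude-3-5-sonnet-20241022-v2:0",
      "anthropic.claude-3-haiku-20240307-v1:0",
  }

  def build_converse_kwargs(model_id: str, tools: list[dict]) -> dict:
      """Attach `toolConfig` only for models that support tool use, so the
      Converse call does not fail on unsupported models."""
      kwargs: dict = {"modelId": model_id}
      if tools and model_id in TOOL_USE_SUPPORTED_MODELS:
          kwargs["toolConfig"] = {"tools": tools}
      return kwargs
  ```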