Commit 9714c0f

committed
doc
1 parent a4fbb84 commit 9714c0f

File tree

1 file changed

+4
-4
lines changed

docs/user-guide/concepts/model-providers/llamaapi.md

Lines changed: 4 additions & 4 deletions
```diff
@@ -51,10 +51,10 @@ The `model_config` configures the underlying model selected for inference. The s
 |------------|-------------|---------|---------|
 | `model_id` | ID of a model to use | `Llama-4-Maverick-17B-128E-Instruct-FP8` | [reference](https://llama.developer.meta.com/docs/)
 | `repetition_penalty` | Controls the likelihood of generating repetitive responses. (minimum: 1, maximum: 2, default: 1) | `1` | [reference](https://llama.developer.meta.com/docs/api/chat)
-| `temperature` | Controls randomness of the response by setting a temperature. | 0.7 | [reference](https://llama.developer.meta.com/docs/api/chat)
-| `top_p` | Controls diversity of the response by setting a probability threshold when choosing the next token. | 0.9 | [reference](https://llama.developer.meta.com/docs/api/chat)
-| `max_completion_tokens` | The maximum number of tokens to generate. | 4096 | [reference](https://llama.developer.meta.com/docs/api/chat)
-| `top_k` | Only sample from the top K options for each subsequent token. | 10 | [reference](https://llama.developer.meta.com/docs/api/chat)
+| `temperature` | Controls randomness of the response by setting a temperature. | `0.7` | [reference](https://llama.developer.meta.com/docs/api/chat)
+| `top_p` | Controls diversity of the response by setting a probability threshold when choosing the next token. | `0.9` | [reference](https://llama.developer.meta.com/docs/api/chat)
+| `max_completion_tokens` | The maximum number of tokens to generate. | `4096` | [reference](https://llama.developer.meta.com/docs/api/chat)
+| `top_k` | Only sample from the top K options for each subsequent token. | `10` | [reference](https://llama.developer.meta.com/docs/api/chat)


 ## Troubleshooting
```
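
The defaults documented in the table can be sanity-checked with a small helper. This is a hypothetical sketch for illustration only (`merge_model_config` is not part of the Strands SDK), assuming `model_config` is a plain dict of the parameters shown and that only `repetition_penalty` has a documented range (1 to 2):

```python
# Hypothetical helper (not part of the Strands SDK): overlays user-supplied
# model_config values on the documented defaults and enforces the one range
# the table specifies (repetition_penalty between 1 and 2).

DEFAULTS = {
    "model_id": "Llama-4-Maverick-17B-128E-Instruct-FP8",
    "repetition_penalty": 1,
    "temperature": 0.7,
    "top_p": 0.9,
    "max_completion_tokens": 4096,
    "top_k": 10,
}


def merge_model_config(overrides: dict) -> dict:
    """Return DEFAULTS overlaid with overrides, validating repetition_penalty."""
    config = {**DEFAULTS, **overrides}
    rp = config["repetition_penalty"]
    if not 1 <= rp <= 2:
        raise ValueError(f"repetition_penalty must be in [1, 2], got {rp}")
    return config


# Override only what differs from the documented defaults.
config = merge_model_config({"temperature": 0.2})
```

Keeping the defaults in one dict makes it easy to see at a glance which values a given configuration actually changes.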
