`docs/user-guide/concepts/model-providers/llamaapi.md`
The `model_config` configures the underlying model selected for inference.
| Parameter | Description | Example | Options |
|------------|-------------|---------|---------|
| `model_id` | ID of a model to use | `Llama-4-Maverick-17B-128E-Instruct-FP8` | [reference](https://llama.developer.meta.com/docs/) |
| `repetition_penalty` | Controls the likelihood of generating repetitive responses (minimum: 1, maximum: 2, default: 1) | `1` | [reference](https://llama.developer.meta.com/docs/api/chat) |
| `temperature` | Controls randomness of the response by setting a temperature | `0.7` | [reference](https://llama.developer.meta.com/docs/api/chat) |
| `top_p` | Controls diversity of the response by setting a probability threshold when choosing the next token | `0.9` | [reference](https://llama.developer.meta.com/docs/api/chat) |
| `max_completion_tokens` | The maximum number of tokens to generate | `4096` | [reference](https://llama.developer.meta.com/docs/api/chat) |
| `top_k` | Only sample from the top K options for each subsequent token | `10` | [reference](https://llama.developer.meta.com/docs/api/chat) |
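The parameters above can be assembled into a `model_config` mapping before constructing the model. A minimal sketch follows; the `validate_model_config` helper is illustrative only (it is not part of the Llama API client) and simply checks the documented ranges:

```python
# Hypothetical sketch: assembling a model_config from the parameters
# documented above. The validation helper is illustrative, not part of
# any Llama API client library.

model_config = {
    "model_id": "Llama-4-Maverick-17B-128E-Instruct-FP8",
    "repetition_penalty": 1,        # minimum: 1, maximum: 2, default: 1
    "temperature": 0.7,             # randomness of the response
    "top_p": 0.9,                   # probability threshold for next-token choice
    "max_completion_tokens": 4096,  # cap on the number of generated tokens
    "top_k": 10,                    # sample only from the top K options per token
}

def validate_model_config(config: dict) -> dict:
    """Check the documented parameter ranges before sending the config."""
    if not 1 <= config.get("repetition_penalty", 1) <= 2:
        raise ValueError("repetition_penalty must be between 1 and 2")
    if config.get("max_completion_tokens", 1) < 1:
        raise ValueError("max_completion_tokens must be a positive integer")
    return config

validate_model_config(model_config)
```

Keeping the configuration as a plain mapping makes it easy to tweak individual sampling parameters (e.g. `temperature`, `top_p`) without touching the rest of the setup.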