
Commit e5e8c36: "rerun docgen"
Parent: 2a33a07

File tree: 4 files changed (+46, -12 lines)


common/api-review/ai.api.md

1 addition & 1 deletion

```diff
@@ -1032,7 +1032,7 @@ export interface TextPart {
     text: string;
 }
 
-// @public
+// @public (undocumented)
 export interface ThinkingConfig {
     thinkingBudget?: number;
 }
```
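The `ThinkingConfig` shape in the API report above is small enough to sketch in isolation. A minimal sketch of how a caller might populate it; note that the `GenerationConfig` wrapper field and the helper name are assumptions for illustration, not part of this commit:

```typescript
// Local mirror of the ThinkingConfig interface from the API report above.
interface ThinkingConfig {
  thinkingBudget?: number;
}

// Hypothetical wrapper: the `thinkingConfig` field name is an assumption.
interface GenerationConfig {
  thinkingConfig?: ThinkingConfig;
}

// Build a config with a non-negative thinking budget (helper is illustrative).
function withThinkingBudget(budgetTokens: number): GenerationConfig {
  return { thinkingConfig: { thinkingBudget: Math.max(0, budgetTokens) } };
}

const cfg = withThinkingBudget(1024);
```

The report marks the interface `(undocumented)`, so the semantics of particular budget values (e.g. whether `0` disables thinking) are not confirmed by this commit.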

docs-devsite/_toc.yaml

4 additions & 4 deletions

```diff
@@ -86,10 +86,10 @@ toc:
     path: /docs/reference/js/ai.groundingchunk.md
   - title: GroundingMetadata
     path: /docs/reference/js/ai.groundingmetadata.md
-  - title: HybridParams
-    path: /docs/reference/js/ai.hybridparams.md
   - title: GroundingSupport
     path: /docs/reference/js/ai.groundingsupport.md
+  - title: HybridParams
+    path: /docs/reference/js/ai.hybridparams.md
   - title: ImagenGCSImage
     path: /docs/reference/js/ai.imagengcsimage.md
   - title: ImagenGenerationConfig
@@ -128,10 +128,10 @@ toc:
     path: /docs/reference/js/ai.numberschema.md
   - title: ObjectSchema
     path: /docs/reference/js/ai.objectschema.md
-  - title: OnDeviceParams
-    path: /docs/reference/js/ai.ondeviceparams.md
   - title: ObjectSchemaRequest
     path: /docs/reference/js/ai.objectschemarequest.md
+  - title: OnDeviceParams
+    path: /docs/reference/js/ai.ondeviceparams.md
   - title: PromptFeedback
     path: /docs/reference/js/ai.promptfeedback.md
   - title: RequestOptions
```

docs-devsite/ai.md

41 additions & 3 deletions

````diff
@@ -76,12 +76,12 @@ The Firebase AI Web SDK.
 | [GenerateContentStreamResult](./ai.generatecontentstreamresult.md#generatecontentstreamresult_interface) | Result object returned from [GenerativeModel.generateContentStream()](./ai.generativemodel.md#generativemodelgeneratecontentstream) call. Iterate over <code>stream</code> to get chunks as they come in and/or use the <code>response</code> promise to get the aggregated response when the stream is done. |
 | [GenerationConfig](./ai.generationconfig.md#generationconfig_interface) | Config options for content-related requests |
 | [GenerativeContentBlob](./ai.generativecontentblob.md#generativecontentblob_interface) | Interface for sending an image. |
-| [HybridParams](./ai.hybridparams.md#hybridparams_interface) | Toggles hybrid inference. |
 | [GoogleSearch](./ai.googlesearch.md#googlesearch_interface) | Specifies the Google Search configuration. |
 | [GoogleSearchTool](./ai.googlesearchtool.md#googlesearchtool_interface) | A tool that allows a Gemini model to connect to Google Search to access and incorporate up-to-date information from the web into its responses.<!-- -->Important: If using Grounding with Google Search, you are required to comply with the "Grounding with Google Search" usage requirements for your chosen API provider: [Gemini Developer API](https://ai.google.dev/gemini-api/terms#grounding-with-google-search) or Vertex AI Gemini API (see [Service Terms](https://cloud.google.com/terms/service-terms) section within the Service Specific Terms). |
 | [GroundingChunk](./ai.groundingchunk.md#groundingchunk_interface) | Represents a chunk of retrieved data that supports a claim in the model's response. This is part of the grounding information provided when grounding is enabled. |
 | [GroundingMetadata](./ai.groundingmetadata.md#groundingmetadata_interface) | Metadata returned when grounding is enabled.<!-- -->Currently, only Grounding with Google Search is supported (see [GoogleSearchTool](./ai.googlesearchtool.md#googlesearchtool_interface)<!-- -->).<!-- -->Important: If using Grounding with Google Search, you are required to comply with the "Grounding with Google Search" usage requirements for your chosen API provider: [Gemini Developer API](https://ai.google.dev/gemini-api/terms#grounding-with-google-search) or Vertex AI Gemini API (see [Service Terms](https://cloud.google.com/terms/service-terms) section within the Service Specific Terms). |
 | [GroundingSupport](./ai.groundingsupport.md#groundingsupport_interface) | Provides information about how a specific segment of the model's response is supported by the retrieved grounding chunks. |
+| [HybridParams](./ai.hybridparams.md#hybridparams_interface) | Toggles hybrid inference. |
 | [ImagenGCSImage](./ai.imagengcsimage.md#imagengcsimage_interface) | An image generated by Imagen, stored in a Cloud Storage for Firebase bucket.<!-- -->This feature is not available yet. |
 | [ImagenGenerationConfig](./ai.imagengenerationconfig.md#imagengenerationconfig_interface) | <b><i>(Public Preview)</i></b> Configuration options for generating images with Imagen.<!-- -->See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images-imagen) for more details. |
 | [ImagenGenerationResponse](./ai.imagengenerationresponse.md#imagengenerationresponse_interface) | <b><i>(Public Preview)</i></b> The response from a request to generate images with Imagen. |
@@ -95,7 +95,7 @@ The Firebase AI Web SDK.
 | [LanguageModelMessage](./ai.languagemodelmessage.md#languagemodelmessage_interface) | |
 | [LanguageModelMessageContent](./ai.languagemodelmessagecontent.md#languagemodelmessagecontent_interface) | |
 | [ModalityTokenCount](./ai.modalitytokencount.md#modalitytokencount_interface) | Represents token counting info for a single modality. |
-| [ModelParams](./ai.modelparams.md#modelparams_interface) | Params passed to [getGenerativeModel()](./ai.md#getgenerativemodel_80bd839)<!-- -->. |
+| [ModelParams](./ai.modelparams.md#modelparams_interface) | Params passed to [getGenerativeModel()](./ai.md#getgenerativemodel_c63f46a)<!-- -->. |
 | [ObjectSchemaRequest](./ai.objectschemarequest.md#objectschemarequest_interface) | Interface for JSON parameters in a schema of "object" when not using the <code>Schema.object()</code> helper. |
 | [OnDeviceParams](./ai.ondeviceparams.md#ondeviceparams_interface) | Encapsulates configuration for on-device inference. |
 | [PromptFeedback](./ai.promptfeedback.md#promptfeedback_interface) | If the prompt was blocked, this will be populated with <code>blockReason</code> and the relevant <code>safetyRatings</code>. |
@@ -111,7 +111,7 @@ The Firebase AI Web SDK.
 | [Segment](./ai.segment.md#segment_interface) | Represents a specific segment within a [Content](./ai.content.md#content_interface) object, often used to pinpoint the exact location of text or data that grounding information refers to. |
 | [StartChatParams](./ai.startchatparams.md#startchatparams_interface) | Params for [GenerativeModel.startChat()](./ai.generativemodel.md#generativemodelstartchat)<!-- -->. |
 | [TextPart](./ai.textpart.md#textpart_interface) | Content part interface if the part represents a text string. |
-| [ThinkingConfig](./ai.thinkingconfig.md#thinkingconfig_interface) | Configuration for "thinking" behavior of compatible Gemini models.<!-- -->Certain models utilize a thinking process before generating a response. This allows them to reason through complex problems and plan a more coherent and accurate answer. |
+| [ThinkingConfig](./ai.thinkingconfig.md#thinkingconfig_interface) | |
 | [ToolConfig](./ai.toolconfig.md#toolconfig_interface) | Tool config. This config is shared for all tools provided in the request. |
 | [UsageMetadata](./ai.usagemetadata.md#usagemetadata_interface) | Usage metadata about a [GenerateContentResponse](./ai.generatecontentresponse.md#generatecontentresponse_interface)<!-- -->. |
 | [VideoMetadata](./ai.videometadata.md#videometadata_interface) | Describes the input video content. |
@@ -157,6 +157,10 @@ The Firebase AI Web SDK.
 | [ImagenAspectRatio](./ai.md#imagenaspectratio) | <b><i>(Public Preview)</i></b> Aspect ratios for Imagen images.<!-- -->To specify an aspect ratio for generated images, set the <code>aspectRatio</code> property in your [ImagenGenerationConfig](./ai.imagengenerationconfig.md#imagengenerationconfig_interface)<!-- -->.<!-- -->See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images) for more details and examples of the supported aspect ratios. |
 | [ImagenPersonFilterLevel](./ai.md#imagenpersonfilterlevel) | <b><i>(Public Preview)</i></b> A filter level controlling whether generation of images containing people or faces is allowed.<!-- -->See the <a href="http://firebase.google.com/docs/vertex-ai/generate-images">personGeneration</a> documentation for more details. |
 | [ImagenSafetyFilterLevel](./ai.md#imagensafetyfilterlevel) | <b><i>(Public Preview)</i></b> A filter level controlling how aggressively to filter sensitive content.<!-- -->Text prompts provided as inputs and images (generated or uploaded) through Imagen on Vertex AI are assessed against a list of safety filters, which include 'harmful categories' (for example, <code>violence</code>, <code>sexual</code>, <code>derogatory</code>, and <code>toxic</code>). This filter level controls how aggressively to filter out potentially harmful content from responses. See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images) and the [Responsible AI and usage guidelines](https://cloud.google.com/vertex-ai/generative-ai/docs/image/responsible-ai-imagen#safety-filters) for more details. |
+| [InferenceMode](./ai.md#inferencemode) | Determines whether inference happens on-device or in-cloud. |
+| [LanguageModelMessageContentValue](./ai.md#languagemodelmessagecontentvalue) | |
+| [LanguageModelMessageRole](./ai.md#languagemodelmessagerole) | |
+| [LanguageModelMessageType](./ai.md#languagemodelmessagetype) | |
 | [Modality](./ai.md#modality) | Content part modality. |
 | [Part](./ai.md#part) | Content part - includes text, image/video, or function call/response part types. |
 | [ResponseModality](./ai.md#responsemodality) | <b><i>(Public Preview)</i></b> Generation modalities to be returned in generation responses. |
@@ -700,6 +704,40 @@ Text prompts provided as inputs and images (generated or uploaded) through Image
 export type ImagenSafetyFilterLevel = (typeof ImagenSafetyFilterLevel)[keyof typeof ImagenSafetyFilterLevel];
 ```
 
+## InferenceMode
+
+Determines whether inference happens on-device or in-cloud.
+
+<b>Signature:</b>
+
+```typescript
+export type InferenceMode = 'prefer_on_device' | 'only_on_device' | 'only_in_cloud';
+```
+
+## LanguageModelMessageContentValue
+
+<b>Signature:</b>
+
+```typescript
+export type LanguageModelMessageContentValue = ImageBitmapSource | AudioBuffer | BufferSource | string;
+```
+
+## LanguageModelMessageRole
+
+<b>Signature:</b>
+
+```typescript
+export type LanguageModelMessageRole = 'system' | 'user' | 'assistant';
+```
+
+## LanguageModelMessageType
+
+<b>Signature:</b>
+
+```typescript
+export type LanguageModelMessageType = 'text' | 'image' | 'audio';
+```
+
 ## Modality
 
 Content part modality.
````
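The new `InferenceMode` union added above lends itself to a small dispatch sketch. A minimal sketch, assuming a hypothetical resolver (not part of the SDK) that picks an execution target from the requested mode and on-device availability:

```typescript
// Local copy of the InferenceMode union type added in this commit.
type InferenceMode = 'prefer_on_device' | 'only_on_device' | 'only_in_cloud';

// Hypothetical resolver, not from the SDK: choose where inference runs.
function resolveTarget(
  mode: InferenceMode,
  onDeviceAvailable: boolean
): 'device' | 'cloud' {
  switch (mode) {
    case 'only_on_device':
      // Hard requirement: fail rather than silently fall back to the cloud.
      if (!onDeviceAvailable) {
        throw new Error('on-device model required but unavailable');
      }
      return 'device';
    case 'only_in_cloud':
      return 'cloud';
    case 'prefer_on_device':
      // Soft preference: fall back to the cloud when the device cannot serve.
      return onDeviceAvailable ? 'device' : 'cloud';
  }
}
```

The exhaustive `switch` over the string union means the compiler flags any future mode the resolver forgets to handle; how the real SDK dispatches between `HybridParams` and `OnDeviceParams` is not shown in this commit.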

docs-devsite/ai.thinkingconfig.md

0 additions & 4 deletions

````diff
@@ -10,10 +10,6 @@ https://github.com/firebase/firebase-js-sdk
 {% endcomment %}
 
 # ThinkingConfig interface
-Configuration for "thinking" behavior of compatible Gemini models.
-
-Certain models utilize a thinking process before generating a response. This allows them to reason through complex problems and plan a more coherent and accurate answer.
-
 <b>Signature:</b>
 
 ```typescript
````

0 commit comments
