docs-devsite/ai.md: 41 additions & 3 deletions
@@ -76,12 +76,12 @@ The Firebase AI Web SDK.
|[GenerateContentStreamResult](./ai.generatecontentstreamresult.md#generatecontentstreamresult_interface)| Result object returned from [GenerativeModel.generateContentStream()](./ai.generativemodel.md#generativemodelgeneratecontentstream) call. Iterate over <code>stream</code> to get chunks as they come in and/or use the <code>response</code> promise to get the aggregated response when the stream is done. |
|[GenerationConfig](./ai.generationconfig.md#generationconfig_interface)| Config options for content-related requests. |
|[GenerativeContentBlob](./ai.generativecontentblob.md#generativecontentblob_interface)| Interface for sending an image. |
|[GoogleSearch](./ai.googlesearch.md#googlesearch_interface)| Specifies the Google Search configuration. |
|[GoogleSearchTool](./ai.googlesearchtool.md#googlesearchtool_interface)| A tool that allows a Gemini model to connect to Google Search to access and incorporate up-to-date information from the web into its responses.<!---->Important: If using Grounding with Google Search, you are required to comply with the "Grounding with Google Search" usage requirements for your chosen API provider: [Gemini Developer API](https://ai.google.dev/gemini-api/terms#grounding-with-google-search) or Vertex AI Gemini API (see [Service Terms](https://cloud.google.com/terms/service-terms) section within the Service Specific Terms). |
|[GroundingChunk](./ai.groundingchunk.md#groundingchunk_interface)| Represents a chunk of retrieved data that supports a claim in the model's response. This is part of the grounding information provided when grounding is enabled. |
|[GroundingMetadata](./ai.groundingmetadata.md#groundingmetadata_interface)| Metadata returned when grounding is enabled.<!---->Currently, only Grounding with Google Search is supported (see [GoogleSearchTool](./ai.googlesearchtool.md#googlesearchtool_interface)<!---->).<!---->Important: If using Grounding with Google Search, you are required to comply with the "Grounding with Google Search" usage requirements for your chosen API provider: [Gemini Developer API](https://ai.google.dev/gemini-api/terms#grounding-with-google-search) or Vertex AI Gemini API (see [Service Terms](https://cloud.google.com/terms/service-terms) section within the Service Specific Terms). |
|[GroundingSupport](./ai.groundingsupport.md#groundingsupport_interface)| Provides information about how a specific segment of the model's response is supported by the retrieved grounding chunks. |
|[ImagenGCSImage](./ai.imagengcsimage.md#imagengcsimage_interface)| An image generated by Imagen, stored in a Cloud Storage for Firebase bucket.<!---->This feature is not available yet. |
|[ImagenGenerationConfig](./ai.imagengenerationconfig.md#imagengenerationconfig_interface)| <b><i>(Public Preview)</i></b> Configuration options for generating images with Imagen.<!---->See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images-imagen) for more details. |
|[ImagenGenerationResponse](./ai.imagengenerationresponse.md#imagengenerationresponse_interface)| <b><i>(Public Preview)</i></b> The response from a request to generate images with Imagen. |
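
The GenerateContentStreamResult entry above describes the two ways to consume a streaming call: iterate the chunk stream, await the aggregated response, or both. A minimal sketch of that pattern, assuming a configured Firebase app and the Gemini Developer API backend (the config object and model name are placeholders):

```ts
import { initializeApp } from 'firebase/app';
import { getAI, getGenerativeModel, GoogleAIBackend } from 'firebase/ai';

const app = initializeApp({ /* your Firebase config */ });
const ai = getAI(app, { backend: new GoogleAIBackend() });
const model = getGenerativeModel(ai, { model: 'gemini-2.5-flash' });

async function streamStory(): Promise<void> {
  const result = await model.generateContentStream('Tell me a short story.');

  // Consume chunks as they arrive...
  for await (const chunk of result.stream) {
    console.log(chunk.text());
  }

  // ...and/or await the aggregated response once the stream is done.
  const response = await result.response;
  console.log('total tokens:', response.usageMetadata?.totalTokenCount);
}
```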
@@ … @@
|[ModalityTokenCount](./ai.modalitytokencount.md#modalitytokencount_interface)| Represents token counting info for a single modality. |
-|[ModelParams](./ai.modelparams.md#modelparams_interface)| Params passed to [getGenerativeModel()](./ai.md#getgenerativemodel_80bd839)<!---->. |
+|[ModelParams](./ai.modelparams.md#modelparams_interface)| Params passed to [getGenerativeModel()](./ai.md#getgenerativemodel_c63f46a)<!---->. |
|[ObjectSchemaRequest](./ai.objectschemarequest.md#objectschemarequest_interface)| Interface for JSON parameters in a schema of "object" when not using the <code>Schema.object()</code> helper. |
|[OnDeviceParams](./ai.ondeviceparams.md#ondeviceparams_interface)| Encapsulates configuration for on-device inference. |
|[PromptFeedback](./ai.promptfeedback.md#promptfeedback_interface)| If the prompt was blocked, this will be populated with <code>blockReason</code> and the relevant <code>safetyRatings</code>. |
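
The ModelParams change in this hunk only updates the generated anchor for getGenerativeModel(). For reference, a sketch of how ModelParams is typically populated; the specific generation and safety values here are illustrative only:

```ts
import { initializeApp } from 'firebase/app';
import {
  getAI, getGenerativeModel, GoogleAIBackend,
  HarmCategory, HarmBlockThreshold,
} from 'firebase/ai';
import type { ModelParams } from 'firebase/ai';

const ai = getAI(initializeApp({ /* your Firebase config */ }), {
  backend: new GoogleAIBackend(),
});

// ModelParams bundles the model name with per-model defaults.
const params: ModelParams = {
  model: 'gemini-2.5-flash', // placeholder model name
  generationConfig: { temperature: 0.4, maxOutputTokens: 1024 },
  safetySettings: [{
    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
  }],
  systemInstruction: 'You are a concise assistant.',
};

const model = getGenerativeModel(ai, params);
```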
@@ -111,7 +111,7 @@ The Firebase AI Web SDK.
|[Segment](./ai.segment.md#segment_interface)| Represents a specific segment within a [Content](./ai.content.md#content_interface) object, often used to pinpoint the exact location of text or data that grounding information refers to. |
|[StartChatParams](./ai.startchatparams.md#startchatparams_interface)| Params for [GenerativeModel.startChat()](./ai.generativemodel.md#generativemodelstartchat)<!---->. |
|[TextPart](./ai.textpart.md#textpart_interface)| Content part interface if the part represents a text string. |
|[ThinkingConfig](./ai.thinkingconfig.md#thinkingconfig_interface)| Configuration for "thinking" behavior of compatible Gemini models.<!---->Certain models utilize a thinking process before generating a response. This allows them to reason through complex problems and plan a more coherent and accurate answer. |
|[ToolConfig](./ai.toolconfig.md#toolconfig_interface)| Tool config. This config is shared for all tools provided in the request. |
|[UsageMetadata](./ai.usagemetadata.md#usagemetadata_interface)| Usage metadata about a [GenerateContentResponse](./ai.generatecontentresponse.md#generatecontentresponse_interface)<!---->. |
|[VideoMetadata](./ai.videometadata.md#videometadata_interface)| Describes the input video content. |
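
StartChatParams and ThinkingConfig from this hunk compose naturally: a chat session seeded with prior turns, on a model whose generation config requests a thinking budget. A sketch under stated assumptions; thinkingBudget is assumed to be the ThinkingConfig knob, so verify it against the interface page:

```ts
import { initializeApp } from 'firebase/app';
import { getAI, getGenerativeModel, GoogleAIBackend } from 'firebase/ai';

const ai = getAI(initializeApp({ /* your Firebase config */ }), {
  backend: new GoogleAIBackend(),
});

const model = getGenerativeModel(ai, {
  model: 'gemini-2.5-flash', // placeholder; use a thinking-capable model
  // Assumed field: a token budget for the model's internal reasoning.
  generationConfig: { thinkingConfig: { thinkingBudget: 1024 } },
});

async function chatOnce(): Promise<void> {
  // StartChatParams: seed the session with earlier turns via `history`.
  const chat = model.startChat({
    history: [
      { role: 'user', parts: [{ text: 'Hello.' }] },
      { role: 'model', parts: [{ text: 'Hi, how can I help?' }] },
    ],
  });
  const result = await chat.sendMessage('Walk me through a tricky scheduling puzzle.');
  console.log(result.response.text());
}
```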
@@ -157,6 +157,10 @@ The Firebase AI Web SDK.
|[ImagenAspectRatio](./ai.md#imagenaspectratio)| <b><i>(Public Preview)</i></b> Aspect ratios for Imagen images.<!---->To specify an aspect ratio for generated images, set the <code>aspectRatio</code> property in your [ImagenGenerationConfig](./ai.imagengenerationconfig.md#imagengenerationconfig_interface)<!---->.<!---->See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images) for more details and examples of the supported aspect ratios. |
|[ImagenPersonFilterLevel](./ai.md#imagenpersonfilterlevel)| <b><i>(Public Preview)</i></b> A filter level controlling whether generation of images containing people or faces is allowed.<!---->See the <a href="http://firebase.google.com/docs/vertex-ai/generate-images">personGeneration</a> documentation for more details. |
|[ImagenSafetyFilterLevel](./ai.md#imagensafetyfilterlevel)| <b><i>(Public Preview)</i></b> A filter level controlling how aggressively to filter sensitive content.<!---->Text prompts provided as inputs and images (generated or uploaded) through Imagen on Vertex AI are assessed against a list of safety filters, which include 'harmful categories' (for example, <code>violence</code>, <code>sexual</code>, <code>derogatory</code>, and <code>toxic</code>). This filter level controls how aggressively to filter out potentially harmful content from responses. See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images) and the [Responsible AI and usage guidelines](https://cloud.google.com/vertex-ai/generative-ai/docs/image/responsible-ai-imagen#safety-filters) for more details. |
+|[InferenceMode](./ai.md#inferencemode)| Determines whether inference happens on-device or in-cloud. |
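
The three Imagen enums above all feed image-generation configuration. A Public Preview sketch; the enum member names and the safety-settings field names are assumptions to check against the linked pages:

```ts
import { initializeApp } from 'firebase/app';
import {
  getAI, getImagenModel, GoogleAIBackend,
  ImagenAspectRatio, ImagenSafetyFilterLevel, ImagenPersonFilterLevel,
} from 'firebase/ai';

const ai = getAI(initializeApp({ /* your Firebase config */ }), {
  backend: new GoogleAIBackend(),
});

const imagenModel = getImagenModel(ai, {
  model: 'imagen-3.0-generate-002', // placeholder model name
  generationConfig: {
    numberOfImages: 1,
    aspectRatio: ImagenAspectRatio.LANDSCAPE_16x9, // assumed member name
  },
  safetySettings: {
    filterLevel: ImagenSafetyFilterLevel.BLOCK_MEDIUM_AND_ABOVE, // assumed
    personFilterLevel: ImagenPersonFilterLevel.BLOCK_ALL,        // assumed
  },
});

async function generate(): Promise<void> {
  const { images } = await imagenModel.generateImages('A watercolor lighthouse at dawn');
  console.log(images[0]?.mimeType);
}
```

The newly added InferenceMode pairs with OnDeviceParams from the earlier hunk, but this diff does not show how the mode is passed. The sketch below assumes a hybrid params object accepted by getGenerativeModel(), with both field names and member names as unverified assumptions:

```ts
import { getGenerativeModel, InferenceMode } from 'firebase/ai';
import type { AI } from 'firebase/ai';

declare const ai: AI; // an AI instance configured as in the sketches above

// Assumed shape: prefer the on-device model, falling back to the cloud.
const hybridModel = getGenerativeModel(ai, {
  mode: InferenceMode.PREFER_ON_DEVICE,         // assumed member name
  inCloudParams: { model: 'gemini-2.5-flash' }, // assumed field for cloud fallback params
});
```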