Good day!
I'm trying to make LocalAI and Dify work together in reranker mode using the lb-reranker-0.5b-v1.0 model.
LocalAI stubbornly says that `--reranking` and `--embedding` cannot work together, and I don't understand where to switch this flag.
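From what I can tell, per-model switches like this live in the model's YAML config under `/models` rather than on the LocalAI command line. Here is a minimal sketch of what I would expect `/models/lb-reranker-0.5b-v1.0.yaml` to contain (field names assumed from the debug dump below, where `Reranking:` and `Embeddings:` appear in the parsed model config):

```yaml
# Sketch only: assumed per-model config at /models/lb-reranker-0.5b-v1.0.yaml.
# `reranking` appears to be the per-model switch; since the backend refuses
# to combine it with embeddings, `embeddings` is left disabled here.
name: lb-reranker-0.5b-v1.0
backend: llama-cpp
reranking: true
embeddings: false
parameters:
  model: lb-reranker-0.5B-v1.0-Q4_K_M.gguf
```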
Yaml:

```yaml
services:
  api:
    image: localai/localai:latest-aio-gpu-nvidia-cuda-12
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - IMAGE_TYPE=python
        - BASE_IMAGE=ubuntu:22.04
    ports:
      - "8085:8085"
    env_file:
      - .env
    environment:
      - MODELS_PATH=/models
      - DEBUG=true
    volumes:
      - ./models:/models:cached
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    command:
      - lb-reranker-0.5b-v1.0
      - jina-reranker-v1-base-en
```
Logs:

```
api-1 | NVIDIA GPU detected. Attempting to find memory size...
api-1 | Total GPU Memory: 24564 MiB
api-1 | ===> Starting LocalAI[gpu-8g] with the following models: /aio/gpu-8g/embeddings.yaml,/aio/gpu-8g/rerank.yaml,/aio/gpu-8g/text-to-speech.yaml,/aio/gpu-8g/image-gen.yaml,/aio/gpu-8g/text-to-text.yaml,/aio/gpu-8g/speech-to-text.yaml,/aio/gpu-8g/vad.yaml,/aio/gpu-8g/vision.yaml
api-1 | CPU info:
api-1 | model name : 13th Gen Intel(R) Core(TM) i9-13900
api-1 | flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
api-1 | CPU: AVX found OK
api-1 | CPU: AVX2 found OK
api-1 | CPU: no AVX512 found
api-1 | 1:06PM DBG Setting logging to debug
api-1 | 1:06PM INF Starting LocalAI using 16 threads, with models path: /models
api-1 | 1:06PM INF LocalAI version: v3.1.1 (cd2b0c0)
api-1 | 1:06PM DBG CPU capabilities: [3dnowprefetch abm acpi adx aes aperfmperf apic arat arch_capabilities arch_lbr arch_perfmon art avx avx2 avx_vnni bmi1 bmi2 bts clflush clflushopt clwb cmov constant_tsc cpuid cpuid_fault cx16 cx8 de ds_cpl dtes64 dtherm dts ept ept_ad erms est f16c flexpriority flush_l1d fma fpu fsgsbase fsrm fxsr gfni hfi ht hwp hwp_act_window hwp_epp hwp_notify hwp_pkg_req ibpb ibrs ibrs_enhanced ibt ida intel_pt invpcid lahf_lm lm mca mce md_clear mmx monitor movbe movdir64b movdiri msr mtrr nonstop_tsc nopl nx ospke pae pat pbe pclmulqdq pconfig pdcm pdpe1gb pebs pge pku pln pni popcnt pse pse36 pts rdpid rdrand rdseed rdtscp rep_good sdbg sep serialize sha_ni smap smep smx split_lock_detect ss ssbd sse sse2 sse4_1 sse4_2 ssse3 stibp syscall tm tm2 tme tpr_shadow tsc tsc_adjust tsc_deadline_timer tsc_known_freq umip vaes vme vmx vnmi vpclmulqdq vpid waitpkg x2apic xgetbv1 xsave xsavec xsaveopt xsaves xtopology xtpr]
api-1 | 1:06PM DBG GPU count: 2
api-1 | 1:06PM DBG GPU: card #0 @0000:01:00.0 -> driver: 'nvidia' class: 'Display controller' vendor: 'NVIDIA Corporation' product: 'GA102GL [RTX A5000]'
api-1 | 1:06PM DBG GPU: card #1 @0000:00:02.0 -> driver: 'i915' class: 'Display controller' vendor: 'Intel Corporation' product: 'unknown'
api-1 | 1:06PM DBG [startup] resolved local model: /aio/gpu-8g/embeddings.yaml
api-1 | 1:06PM DBG [startup] resolved local model: /aio/gpu-8g/rerank.yaml
api-1 | 1:06PM DBG [startup] resolved local model: /aio/gpu-8g/text-to-speech.yaml
api-1 | 1:06PM DBG [startup] resolved local model: /aio/gpu-8g/image-gen.yaml
api-1 | 1:06PM DBG [startup] resolved local model: /aio/gpu-8g/text-to-text.yaml
api-1 | 1:06PM DBG [startup] resolved local model: /aio/gpu-8g/speech-to-text.yaml
api-1 | 1:06PM DBG [startup] resolved local model: /aio/gpu-8g/vad.yaml
api-1 | 1:06PM DBG [startup] resolved local model: /aio/gpu-8g/vision.yaml
api-1 | 1:06PM INF installing model license=apache-2.0 model=lb-reranker-0.5b-v1.0
api-1 | 1:06PM DBG Config overrides map[parameters:map[model:lb-reranker-0.5B-v1.0-Q4_K_M.gguf]]
api-1 | 1:06PM DBG Checking "lb-reranker-0.5B-v1.0-Q4_K_M.gguf" exists and matches SHA
api-1 | 1:06PM DBG File "/models/lb-reranker-0.5B-v1.0-Q4_K_M.gguf" already exists and matches the SHA. Skipping download
api-1 | 1:06PM DBG Written config file /models/lb-reranker-0.5b-v1.0.yaml
api-1 | 1:06PM DBG Written gallery file /models/._gallery_lb-reranker-0.5b-v1.0.yaml
api-1 | 1:06PM WRN [startup] failed resolving model 'jina-reranker-v1-base-en'
api-1 | 1:06PM ERR error installing models error="failed resolving model 'jina-reranker-v1-base-en'"
api-1 | 1:06PM DBG GPU vendor gpuVendor=nvidia
api-1 | 1:06PM DBG guessDefaultsFromFile: NGPULayers set NGPULayers=99999999
api-1 | 1:06PM DBG Model file loaded: granite-embedding-107m-multilingual-f16.gguf architecture=bert bosTokenID=0 eosTokenID=2 modelName="Granite Embedding 107m Multilingual"
api-1 | 1:06PM DBG guessDefaultsFromFile: family not identified
api-1 | 1:06PM DBG guessDefaultsFromFile: NGPULayers set NGPULayers=99999999
api-1 | 1:06PM DBG Model file loaded: stable-diffusion-v1-5-pruned-emaonly-Q4_0.gguf architecture=diffusion bosTokenID=-1 eosTokenID=-1 modelName=
api-1 | 1:06PM DBG guessDefaultsFromFile: family not identified
api-1 | 1:06PM DBG guessDefaultsFromFile: NGPULayers set NGPULayers=99999999
api-1 | 1:06PM DBG guessDefaultsFromFile: template already set name=gpt-4o
api-1 | 1:06PM DBG guessDefaultsFromFile: NGPULayers set NGPULayers=99999999
api-1 | 1:06PM DBG Model file loaded: granite-embedding-107m-multilingual-f16.gguf architecture=bert bosTokenID=0 eosTokenID=2 modelName="Granite Embedding 107m Multilingual"
api-1 | 1:06PM DBG guessDefaultsFromFile: family not identified
api-1 | 1:06PM DBG guessDefaultsFromFile: NGPULayers set NGPULayers=99999999
api-1 | 1:06PM DBG guessDefaultsFromFile: template already set name=gpt-4o
api-1 | 1:06PM ERR guessDefaultsFromFile: panic while parsing gguf file
api-1 | 1:06PM ERR guessDefaultsFromFile: panic while parsing gguf file
api-1 | 1:06PM DBG guessDefaultsFromFile: NGPULayers set NGPULayers=99999999
api-1 | 1:06PM DBG guessDefaultsFromFile: template already set name=gpt-4
api-1 | 1:06PM DBG guessDefaultsFromFile: NGPULayers set NGPULayers=99999999
api-1 | 1:06PM DBG guessDefaultsFromFile: template already set name=lb-reranker-0.5b-v1.0
api-1 | 1:06PM DBG guessDefaultsFromFile: NGPULayers set NGPULayers=99999999
api-1 | 1:06PM DBG Model file loaded: lb-reranker-0.5B-v1.0-Q4_K_M.gguf architecture=qwen2 bosTokenID=151643 eosTokenID=151645 modelName=Reranker_0.5_Cont_Filt_7Max
api-1 | 1:06PM DBG guessDefaultsFromFile: guessed template {Chat:{{.Input -}}
api-1 | <|im_start|>assistant ChatMessage:<|im_start|>{{ .RoleName }}
api-1 | {{ if .FunctionCall -}}
api-1 | Function call:
api-1 | {{ else if eq .RoleName "tool" -}}
api-1 | Function response:
api-1 | {{ end -}}
api-1 | {{ if .Content -}}
api-1 | {{.Content }}
api-1 | {{ end -}}
api-1 | {{ if .FunctionCall -}}
api-1 | {{toJson .FunctionCall}}
api-1 | {{ end -}}<|im_end|> Completion: Edit: Functions:<|im_start|>system
api-1 | You are a function calling AI model. You are provided with functions to execute. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools:
api-1 | {{range .Functions}}
api-1 | {'type': 'function', 'function': {'name': '{{.Name}}', 'description': '{{.Description}}', 'parameters': {{toJson .Parameters}} }}
api-1 | {{end}}
api-1 | For each function call return a json object with function name and arguments
api-1 | <|im_end|>
api-1 | {{.Input -}}
api-1 | <|im_start|>assistant UseTokenizerTemplate:false JoinChatMessagesByCharacter: Multimodal: JinjaTemplate:false ReplyPrefix:} family=4
api-1 | 1:06PM DBG guessDefaultsFromFile: NGPULayers set NGPULayers=99999999
api-1 | 1:06PM DBG guessDefaultsFromFile: template already set name=qwen3-0.6b
api-1 | 1:06PM DBG guessDefaultsFromFile: NGPULayers set NGPULayers=99999999
api-1 | 1:06PM DBG guessDefaultsFromFile: template already set name=qwen3-embedding-4b
api-1 | 1:06PM INF Preloading models from /models
api-1 |
api-1 | Model name: qwen3-0.6b
api-1 |
api-1 |
api-1 |
api-1 | Model name: qwen3-embedding-4b
api-1 |
api-1 |
api-1 | 1:06PM DBG Checking "DreamShaper_8_pruned.safetensors" exists and matches SHA
api-1 | 1:06PM DBG File "/models/DreamShaper_8_pruned.safetensors" already exists. Skipping download
api-1 |
api-1 | Model name: stablediffusion
api-1 |
api-1 |
api-1 |
api-1 | curl http://localhost:8080/v1/images/generations -H "Content-Type:
api-1 | application/json" -d '{ "prompt": "|", "step": 25, "size": "512x512" }'
api-1 |
api-1 |
api-1 | 1:06PM DBG Checking "voice-en-us-amy-low.tar.gz" exists and matches SHA
api-1 | 1:06PM DBG File "/models/voice-en-us-amy-low.tar.gz" already exists. Skipping download
api-1 |
api-1 | Model name: tts-1
api-1 |
api-1 |
api-1 |
api-1 | To test if this model works as expected, you can use the following curl
api-1 | command:
api-1 |
api-1 | curl http://localhost:8080/tts -H "Content-Type: application/json" -d '{
api-1 | "model":"tts-1", "input": "Hi, this is a test." }'
api-1 |
api-1 |
api-1 | 1:06PM DBG Checking "silero-vad.onnx" exists and matches SHA
api-1 | 1:06PM DBG File "/models/silero-vad.onnx" already exists and matches the SHA. Skipping download
api-1 |
api-1 | Model name: silero-vad
api-1 |
api-1 |
api-1 | 1:06PM DBG Checking "jina-reranker-v1-tiny-en.f16.gguf" exists and matches SHA
api-1 | 1:06PM DBG File "/models/jina-reranker-v1-tiny-en.f16.gguf" already exists and matches the SHA. Skipping download
api-1 |
api-1 | Model name: jina-reranker-v1-base-en
api-1 |
api-1 |
api-1 |
api-1 | You can test this model with curl like this:
api-1 |
api-1 | curl http://localhost:8080/v1/rerank -H "Content-Type: application/json" -d
api-1 | '{ "model": "jina-reranker-v1-base-en", "query": "Organic skincare products for
api-1 | sensitive skin", "documents": [ "Eco-friendly kitchenware for modern homes",
api-1 | "Biodegradable cleaning supplies for eco-conscious consumers", "Organic
api-1 | cotton baby clothes for sensitive skin", "Natural organic skincare range for
api-1 | sensitive skin", "Tech gadgets for smart homes: 2024 edition", "Sustainable
api-1 | gardening tools and compost solutions", "Sensitive skin-friendly facial
api-1 | cleansers and toners", "Organic food wraps and storage solutions", "All-
api-1 | natural pet food for dogs with allergies", "Yoga mats made from recycled
api-1 | materials" ], "top_n": 3 }'
api-1 |
api-1 |
api-1 |
api-1 | Model name: lb-reranker-0.5b-v1.0
api-1 |
api-1 |
api-1 |
api-1 | Model name: text-embedding-ada-002
api-1 |
api-1 |
api-1 | 1:06PM DBG Checking "ggml-whisper-base.bin" exists and matches SHA
api-1 |
api-1 | You can test this model with curl like this:
api-1 |
api-1 | curl http://localhost:8080/embeddings -X POST -H "Content-Type:
api-1 | application/json" -d '{ "input": "Your text string goes here", "model": "text-
api-1 | embedding-ada-002" }'
api-1 |
api-1 |
api-1 | 1:06PM DBG File "/models/ggml-whisper-base.bin" already exists and matches the SHA. Skipping download
api-1 |
api-1 | Model name: whisper-1
api-1 |
api-1 |
api-1 |
api-1 | ## example audio file
api-1 |
api-1 | wget --quiet --show-progress -O gb1.ogg
api-1 | https://upload.wikimedia.org/wikipedia/commons/1/1f/George_W_Bush_Columbia_FINAL.ogg
api-1 |
api-1 | ## Send the example audio file to the transcriptions endpoint
api-1 |
api-1 | curl http://localhost:8080/v1/audio/transcriptions -H "Content-Type:
api-1 | multipart/form-data" -F file="@$PWD/gb1.ogg" -F model="whisper-1"
api-1 |
api-1 |
api-1 | 1:06PM DBG Checking "minicpm-v-2_6-Q4_K_M.gguf" exists and matches SHA
api-1 | 1:06PM DBG File "/models/minicpm-v-2_6-Q4_K_M.gguf" already exists and matches the SHA. Skipping download
api-1 | 1:06PM DBG Checking "minicpm-v-2_6-mmproj-f16.gguf" exists and matches SHA
api-1 | 1:06PM DBG File "/models/minicpm-v-2_6-mmproj-f16.gguf" already exists and matches the SHA. Skipping download
api-1 |
api-1 | Model name: gpt-4o
api-1 |
api-1 |
api-1 | 1:06PM DBG Checking "Hermes-3-Llama-3.2-3B-Q4_K_M.gguf" exists and matches SHA
api-1 | 1:06PM DBG File "/models/Hermes-3-Llama-3.2-3B-Q4_K_M.gguf" already exists and matches the SHA. Skipping download
api-1 |
api-1 | Model name: gpt-4
api-1 |
api-1 |
api-1 | 1:06PM DBG Model: gpt-4 (config: {PredictionOptions:{BasicModelRequest:{Model:Hermes-3-Llama-3.2-3B-Q4_K_M.gguf} Language: Translate:false N:0 TopP:0xc0ab595ca0 TopK:0xc0ab595ca8 Temperature:0xc0ab595cb0 Maxtokens:0xc0ab595ce0 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc0ab595cd8 TypicalP:0xc0ab595cd0 Seed:0xc0ab595cf0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 ClipSkip:0 Tokenizer:} Name:gpt-4 F16:0xc0ab595c58 Threads:0xc0ab595c90 Debug:0xc0ab595ce8 Roles:map[] Embeddings:0xc0ab595ce9 Backend: TemplateConfig:{Chat:<|begin_of_text|><|start_header_id|>system<|end_header_id|>
api-1 | You are a helpful assistant<|eot_id|><|start_header_id|>user<|end_header_id|>
api-1 | {{.Input }}
api-1 | <|start_header_id|>assistant<|end_header_id|>
api-1 | ChatMessage:<|start_header_id|>{{if eq .RoleName "assistant"}}assistant{{else if eq .RoleName "system"}}system{{else if eq .RoleName "tool"}}tool{{else if eq .RoleName "user"}}user{{end}}<|end_header_id|>
api-1 | {{ if .FunctionCall -}}
api-1 | {{ else if eq .RoleName "tool" -}}
api-1 | The Function was executed and the response was:
api-1 | {{ end -}}
api-1 | {{ if .Content -}}
api-1 | {{.Content -}}
api-1 | {{ else if .FunctionCall -}}
api-1 | {{ range .FunctionCall }}
api-1 | [{{.FunctionCall.Name}}({{.FunctionCall.Arguments}})]
api-1 | {{ end }}
api-1 | {{ end -}}
api-1 | <|eot_id|>
api-1 | Completion:{{.Input}}
api-1 | Edit: Functions:<|start_header_id|>system<|end_header_id|>
api-1 | You are an expert in composing functions. You are given a question and a set of possible functions.
api-1 | Based on the question, you will need to make one or more function/tool calls to achieve the purpose.
api-1 | If none of the functions can be used, point it out. If the given question lacks the parameters required by the function, also point it out. You should only return the function call in tools call sections.
api-1 | If you decide to invoke any of the function(s), you MUST put it in the format as follows:
api-1 | [func_name1(params_name1=params_value1,params_name2=params_value2,...),func_name2(params_name1=params_value1,params_name2=params_value2,...)]
api-1 | You SHOULD NOT include any other text in the response.
api-1 | Here is a list of functions in JSON format that you can invoke.
api-1 | {{toJson .Functions}}
api-1 | <|eot_id|><|start_header_id|>user<|end_header_id|>
api-1 | {{.Input}}
api-1 | <|eot_id|><|start_header_id|>assistant<|end_header_id|>
api-1 | UseTokenizerTemplate:false JoinChatMessagesByCharacter: Multimodal: JinjaTemplate:false ReplyPrefix:} KnownUsecaseStrings:[FLAG_CHAT FLAG_COMPLETION FLAG_ANY] KnownUsecases: Pipeline:{TTS: LLM: Transcription: VAD:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:true NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder: SchemaType:llama3.1 GrammarTriggers:[]} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[<function=(?P\w+)>(?P.*)] JSONRegexMatch:[] ArgumentRegex:[] ArgumentRegexKey: ArgumentRegexValue: ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionNameKey: FunctionArgumentsKey:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc0ab595cc8 MirostatTAU:0xc0ab595cc0 Mirostat:0xc0ab595cb8 NGPULayers:0xc0ae212d98 MMap:0xc0ab595c59 MMlock:0xc0ab595ce9 LowVRAM:0xc0ab595ce9 Reranking:0xc0ab595ce9 Grammar: StopWords:[<|im_end|> <|eot_id|> <|end_of_text|>] Cutstrings:[] ExtractRegex:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc0ab595c48 NUMA:false LoraAdapter: LoraBase: LoraAdapters:[] LoraScales:[] LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: LoadFormat: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 DisableLogStatus:false DType: LimitMMPerPrompt:{LimitImagePerPrompt:0 LimitVideoPerPrompt:0 LimitAudioPerPrompt:0} MMProj: FlashAttention:false NoKVOffloading:false CacheTypeK: CacheTypeV: RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 CFGScale:0} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: AudioPath:} CUDA:false DownloadFiles:[{Filename:Hermes-3-Llama-3.2-3B-Q4_K_M.gguf SHA256:2e220a14ba4328fee38cf36c2c068261560f999fadb5725ce5c6d977cb5126b5 URI:huggingface://bartowski/Hermes-3-Llama-3.2-3B-GGUF/Hermes-3-Llama-3.2-3B-Q4_K_M.gguf}] Description: Usage: Options:[gpu]})
api-1 | 1:06PM DBG Model: gpt-4o (config: {PredictionOptions:{BasicModelRequest:{Model:minicpm-v-2_6-Q4_K_M.gguf} Language: Translate:false N:0 TopP:0xc0a94cb168 TopK:0xc0a94cb170 Temperature:0xc0a94cb178 Maxtokens:0xc0a94cb1a8 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc0a94cb1a0 TypicalP:0xc0a94cb198 Seed:0xc0a94cb1b8 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 ClipSkip:0 Tokenizer:} Name:gpt-4o F16:0xc0a94cb140 Threads:0xc0a94cb158 Debug:0xc0a94cb1b0 Roles:map[] Embeddings:0xc0a94cb1b1 Backend: TemplateConfig:{Chat:{{.Input -}}
api-1 | <|im_start|>assistant
api-1 | ChatMessage:<|im_start|>{{ .RoleName }}
api-1 | {{ if .FunctionCall -}}
api-1 | Function call:
api-1 | {{ else if eq .RoleName "tool" -}}
api-1 | Function response:
api-1 | {{ end -}}
api-1 | {{ if .Content -}}
api-1 | {{.Content }}
api-1 | {{ end -}}
api-1 | {{ if .FunctionCall -}}
api-1 | {{toJson .FunctionCall}}
api-1 | {{ end -}}<|im_end|>
api-1 | Completion:{{.Input}}
api-1 | Edit: Functions:<|im_start|>system
api-1 | You are a function calling AI model. You are provided with functions to execute. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools:
api-1 | {{range .Functions}}
api-1 | {'type': 'function', 'function': {'name': '{{.Name}}', 'description': '{{.Description}}', 'parameters': {{toJson .Parameters}} }}
api-1 | {{end}}
api-1 | For each function call return a json object with function name and arguments
api-1 | <|im_end|>
api-1 | {{.Input -}}
api-1 | <|im_start|>assistant
api-1 | UseTokenizerTemplate:false JoinChatMessagesByCharacter: Multimodal: JinjaTemplate:false ReplyPrefix:} KnownUsecaseStrings:[FLAG_ANY FLAG_COMPLETION FLAG_CHAT] KnownUsecases: Pipeline:{TTS: LLM: Transcription: VAD:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder: SchemaType: GrammarTriggers:[]} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ArgumentRegex:[] ArgumentRegexKey: ArgumentRegexValue: ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionNameKey: FunctionArgumentsKey:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc0a94cb190 MirostatTAU:0xc0a94cb188 Mirostat:0xc0a94cb180 NGPULayers:0xc0ab594cb8 MMap:0xc0a94cb141 MMlock:0xc0a94cb1b1 LowVRAM:0xc0a94cb1b1 Reranking:0xc0a94cb1b1 Grammar: StopWords:[<|im_end|> <|endoftext|>] Cutstrings:[] ExtractRegex:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc0a94cb130 NUMA:false LoraAdapter: LoraBase: LoraAdapters:[] LoraScales:[] LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: LoadFormat: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 DisableLogStatus:false DType: LimitMMPerPrompt:{LimitImagePerPrompt:0 LimitVideoPerPrompt:0 LimitAudioPerPrompt:0} MMProj:minicpm-v-2_6-mmproj-f16.gguf FlashAttention:false NoKVOffloading:false CacheTypeK: CacheTypeV: RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 CFGScale:0} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: AudioPath:} CUDA:false DownloadFiles:[{Filename:minicpm-v-2_6-Q4_K_M.gguf SHA256:3a4078d53b46f22989adbf998ce5a3fd090b6541f112d7e936eb4204a04100b1 URI:huggingface://openbmb/MiniCPM-V-2_6-gguf/ggml-model-Q4_K_M.gguf} {Filename:minicpm-v-2_6-mmproj-f16.gguf SHA256:4485f68a0f1aa404c391e788ea88ea653c100d8e98fe572698f701e5809711fd URI:huggingface://openbmb/MiniCPM-V-2_6-gguf/mmproj-model-f16.gguf}] Description: Usage: Options:[gpu]})
api-1 | 1:06PM DBG Model: jina-reranker-v1-base-en (config: {PredictionOptions:{BasicModelRequest:{Model:jina-reranker-v1-tiny-en.f16.gguf} Language: Translate:false N:0 TopP:0xc0ab595540 TopK:0xc0ab595548 Temperature:0xc0ab595550 Maxtokens:0xc0ab595580 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc0ab595578 TypicalP:0xc0ab595570 Seed:0xc0ab595590 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 ClipSkip:0 Tokenizer:} Name:jina-reranker-v1-base-en F16:0xc0ab595526 Threads:0xc0ab595530 Debug:0xc0ab595588 Roles:map[] Embeddings:0xc0ab595589 Backend: TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions: UseTokenizerTemplate:false JoinChatMessagesByCharacter: Multimodal: JinjaTemplate:false ReplyPrefix:} KnownUsecaseStrings:[FLAG_ANY] KnownUsecases: Pipeline:{TTS: LLM: Transcription: VAD:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder: SchemaType: GrammarTriggers:[]} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ArgumentRegex:[] ArgumentRegexKey: ArgumentRegexValue: ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionNameKey: FunctionArgumentsKey:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc0ab595568 MirostatTAU:0xc0ab595560 Mirostat:0xc0ab595558 NGPULayers: MMap:0xc0ab595588 MMlock:0xc0ab595589 LowVRAM:0xc0ab595589 Reranking:0xc0ab595525 Grammar: StopWords:[] Cutstrings:[] ExtractRegex:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc0ab595598 NUMA:false LoraAdapter: LoraBase: LoraAdapters:[] LoraScales:[] LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: LoadFormat: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 DisableLogStatus:false DType: LimitMMPerPrompt:{LimitImagePerPrompt:0 LimitVideoPerPrompt:0 LimitAudioPerPrompt:0} MMProj: FlashAttention:false NoKVOffloading:false CacheTypeK: CacheTypeV: RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 CFGScale:0} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: AudioPath:} CUDA:false DownloadFiles:[{Filename:jina-reranker-v1-tiny-en.f16.gguf SHA256:5f696cf0d0f3d347c4a279eee8270e5918554cdac0ed1f632f2619e4e8341407 URI:huggingface://mradermacher/jina-reranker-v1-tiny-en-GGUF/jina-reranker-v1-tiny-en.f16.gguf}] Description: Usage:You can test this model with curl like this:
api-1 |
api-1 | curl http://localhost:8080/v1/rerank
api-1 | -H "Content-Type: application/json"
api-1 | -d '{
api-1 | "model": "jina-reranker-v1-base-en",
api-1 | "query": "Organic skincare products for sensitive skin",
api-1 | "documents": [
api-1 | "Eco-friendly kitchenware for modern homes",
api-1 | "Biodegradable cleaning supplies for eco-conscious consumers",
api-1 | "Organic cotton baby clothes for sensitive skin",
api-1 | "Natural organic skincare range for sensitive skin",
api-1 | "Tech gadgets for smart homes: 2024 edition",
api-1 | "Sustainable gardening tools and compost solutions",
api-1 | "Sensitive skin-friendly facial cleansers and toners",
api-1 | "Organic food wraps and storage solutions",
api-1 | "All-natural pet food for dogs with allergies",
api-1 | "Yoga mats made from recycled materials"
api-1 | ],
api-1 | "top_n": 3
api-1 | }'
api-1 | Options:[]})
api-1 | 1:06PM DBG Model: lb-reranker-0.5b-v1.0 (config: {PredictionOptions:{BasicModelRequest:{Model:lb-reranker-0.5B-v1.0-Q4_K_M.gguf} Language: Translate:false N:0 TopP:0xc0b02a3178 TopK:0xc0b02a3180 Temperature:0xc0b02a3188 Maxtokens:0xc0b02a31b8 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc0b02a31b0 TypicalP:0xc0b02a31a8 Seed:0xc0b02a31c8 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 ClipSkip:0 Tokenizer:} Name:lb-reranker-0.5b-v1.0 F16:0xc0b02a3170 Threads:0xc0b02a3168 Debug:0xc0b02a31c0 Roles:map[] Embeddings:0xc0b02a31c1 Backend: TemplateConfig:{Chat:{{.Input -}}
api-1 | <|im_start|>assistant ChatMessage:<|im_start|>{{ .RoleName }}
api-1 | {{ if .FunctionCall -}}
api-1 | Function call:
api-1 | {{ else if eq .RoleName "tool" -}}
api-1 | Function response:
api-1 | {{ end -}}
api-1 | {{ if .Content -}}
api-1 | {{.Content }}
api-1 | {{ end -}}
api-1 | {{ if .FunctionCall -}}
api-1 | {{toJson .FunctionCall}}
api-1 | {{ end -}}<|im_end|> Completion: Edit: Functions:<|im_start|>system
api-1 | You are a function calling AI model. You are provided with functions to execute. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools:
api-1 | {{range .Functions}}
api-1 | {'type': 'function', 'function': {'name': '{{.Name}}', 'description': '{{.Description}}', 'parameters': {{toJson .Parameters}} }}
api-1 | {{end}}
api-1 | For each function call return a json object with function name and arguments
api-1 | <|im_end|>
api-1 | {{.Input -}}
api-1 | <|im_start|>assistant UseTokenizerTemplate:false JoinChatMessagesByCharacter: Multimodal: JinjaTemplate:false ReplyPrefix:} KnownUsecaseStrings:[FLAG_ANY] KnownUsecases: Pipeline:{TTS: LLM: Transcription: VAD:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder: SchemaType: GrammarTriggers:[]} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ArgumentRegex:[] ArgumentRegexKey: ArgumentRegexValue: ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionNameKey: FunctionArgumentsKey:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc0b02a31a0 MirostatTAU:0xc0b02a3198 Mirostat:0xc0b02a3190 NGPULayers:0xc0b2334348 MMap:0xc0b02a31c0 MMlock:0xc0b02a31c1 LowVRAM:0xc0b02a31c1 Reranking:0xc0b02a31c1 Grammar: StopWords:[<|im_end|> ] Cutstrings:[] ExtractRegex:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc0b2334340 NUMA:false LoraAdapter: LoraBase: LoraAdapters:[] LoraScales:[] LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: LoadFormat: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 DisableLogStatus:false DType: LimitMMPerPrompt:{LimitImagePerPrompt:0 LimitVideoPerPrompt:0 LimitAudioPerPrompt:0} MMProj: FlashAttention:false NoKVOffloading:false CacheTypeK: CacheTypeV: RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 CFGScale:0} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: AudioPath:} CUDA:false DownloadFiles:[] Description: Usage: Options:[gpu]})
api-1 | 1:06PM DBG Model: qwen3-0.6b (config: {PredictionOptions:{BasicModelRequest:{Model:Qwen3-0.6B.Q4_K_M.gguf} Language: Translate:false N:0 TopP:0xc0b2334780 TopK:0xc0b2334788 Temperature:0xc0b2334790 Maxtokens:0xc0b23347c0 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc0b23347b8 TypicalP:0xc0b23347b0 Seed:0xc0b23347d0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 ClipSkip:0 Tokenizer:} Name:qwen3-0.6b F16:0xc0b2334760 Threads:0xc0b2334770 Debug:0xc0b23347c8 Roles:map[] Embeddings:0xc0b23347c9 Backend: TemplateConfig:{Chat:{{.Input -}}
api-1 | <|im_start|>assistant
api-1 | ChatMessage:<|im_start|>{{ .RoleName }}
api-1 | {{ if .FunctionCall -}}
api-1 | {{ else if eq .RoleName "tool" -}}
api-1 | {{ end -}}
api-1 | {{ if .Content -}}
api-1 | {{.Content }}
api-1 | {{ end -}}
api-1 | {{ if .FunctionCall -}}
api-1 | {{toJson .FunctionCall}}
api-1 | {{ end -}}<|im_end|>
api-1 | Completion:{{.Input}}
api-1 | Edit: Functions:<|im_start|>system
api-1 | You are a function calling AI model. You are provided with functions to execute. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools:
api-1 | {{range .Functions}}
api-1 | {'type': 'function', 'function': {'name': '{{.Name}}', 'description': '{{.Description}}', 'parameters': {{toJson .Parameters}} }}
api-1 | {{end}}
api-1 | For each function call return a json object with function name and arguments
api-1 | <|im_end|>
api-1 | {{.Input -}}
api-1 | <|im_start|>assistant
api-1 | UseTokenizerTemplate:false JoinChatMessagesByCharacter: Multimodal: JinjaTemplate:false ReplyPrefix:} KnownUsecaseStrings:[FLAG_CHAT FLAG_COMPLETION FLAG_ANY] KnownUsecases: Pipeline:{TTS: LLM: Transcription: VAD:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder: SchemaType: GrammarTriggers:[]} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ArgumentRegex:[] ArgumentRegexKey: ArgumentRegexValue: ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionNameKey: FunctionArgumentsKey:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc0b23347a8 MirostatTAU:0xc0b23347a0 Mirostat:0xc0b2334798 NGPULayers:0xc0b43b7808 MMap:0xc0b2334761 MMlock:0xc0b23347c9 LowVRAM:0xc0b23347c9 Reranking:0xc0b23347c9 Grammar: StopWords:[<|im_end|> <|endoftext|>] Cutstrings:[] ExtractRegex:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc0b2334750 NUMA:false LoraAdapter: LoraBase: LoraAdapters:[] LoraScales:[] LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: LoadFormat: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 DisableLogStatus:false DType: LimitMMPerPrompt:{LimitImagePerPrompt:0 LimitVideoPerPrompt:0 LimitAudioPerPrompt:0} MMProj: FlashAttention:false NoKVOffloading:false CacheTypeK: CacheTypeV: RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 CFGScale:0} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: AudioPath:} CUDA:false DownloadFiles:[] Description: Usage: Options:[gpu]})
api-1 | 1:06PM DBG Model: qwen3-embedding-4b (config: {PredictionOptions:{BasicModelRequest:{Model:Qwen3-Embedding-4B-Q4_K_M.gguf} Language: Translate:false N:0 TopP:0xc0b43b7ac0 TopK:0xc0b43b7ac8 Temperature:0xc0b43b7ad0 Maxtokens:0xc0b43b7b00 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc0b43b7af8 TypicalP:0xc0b43b7af0 Seed:0xc0b43b7b10 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 ClipSkip:0 Tokenizer:} Name:qwen3-embedding-4b F16:0xc0b43b7aa1 Threads:0xc0b43b7ab0 Debug:0xc0b43b7b08 Roles:map[] Embeddings:0xc0b43b7aa0 Backend: TemplateConfig:{Chat:{{.Input -}}
api-1 | <|im_start|>assistant
api-1 | ChatMessage:<|im_start|>{{ .RoleName }}
api-1 | {{ if .FunctionCall -}}
api-1 | {{ else if eq .RoleName "tool" -}}
api-1 | {{ end -}}
api-1 | {{ if .Content -}}
api-1 | {{.Content }}
api-1 | {{ end -}}
api-1 | {{ if .FunctionCall -}}
api-1 | {{toJson .FunctionCall}}
api-1 | {{ end -}}<|im_end|>
api-1 | Completion:{{.Input}}
api-1 | Edit: Functions:<|im_start|>system
api-1 | You are a function calling AI model. You are provided with functions to execute. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools:
api-1 | {{range .Functions}}
api-1 | {'type': 'function', 'function': {'name': '{{.Name}}', 'description': '{{.Description}}', 'parameters': {{toJson .Parameters}} }}
api-1 | {{end}}
api-1 | For each function call return a json object with function name and arguments
api-1 | <|im_end|>
api-1 | {{.Input -}}
api-1 | <|im_start|>assistant
api-1 | UseTokenizerTemplate:false JoinChatMessagesByCharacter: Multimodal: JinjaTemplate:false ReplyPrefix:} KnownUsecaseStrings:[FLAG_ANY FLAG_CHAT FLAG_COMPLETION FLAG_EMBEDDINGS] KnownUsecases: Pipeline:{TTS: LLM: Transcription: VAD:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder: SchemaType: GrammarTriggers:[]} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ArgumentRegex:[] ArgumentRegexKey: ArgumentRegexValue: ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionNameKey: FunctionArgumentsKey:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc0b43b7ae8 MirostatTAU:0xc0b43b7ae0 Mirostat:0xc0b43b7ad8 NGPULayers:0xc0b63fcd78 MMap:0xc0b43b7aa2 MMlock:0xc0b43b7b09 LowVRAM:0xc0b43b7b09 Reranking:0xc0b43b7b09 Grammar: StopWords:[<|im_end|> <|endoftext|>] Cutstrings:[] ExtractRegex:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc0b43b7a90 NUMA:false LoraAdapter: LoraBase: LoraAdapters:[] LoraScales:[] LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: LoadFormat: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 DisableLogStatus:false DType: LimitMMPerPrompt:{LimitImagePerPrompt:0 LimitVideoPerPrompt:0 LimitAudioPerPrompt:0} MMProj: FlashAttention:false NoKVOffloading:false CacheTypeK: CacheTypeV: RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 CFGScale:0} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: AudioPath:} CUDA:false DownloadFiles:[] Description: Usage: Options:[gpu]})
api-1 | 1:06PM DBG Model: silero-vad (config: {PredictionOptions:{BasicModelRequest:{Model:silero-vad.onnx} Language: Translate:false N:0 TopP:0xc0ab594eb0 TopK:0xc0ab594eb8 Temperature:0xc0ab594ec0 Maxtokens:0xc0ab594ef0 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc0ab594ee8 TypicalP:0xc0ab594ee0 Seed:0xc0ab594f00 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 ClipSkip:0 Tokenizer:} Name:silero-vad F16:0xc0ab594ea8 Threads:0xc0ab594ea0 Debug:0xc0ab594ef8 Roles:map[] Embeddings:0xc0ab594ef9 Backend:silero-vad TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions: UseTokenizerTemplate:false JoinChatMessagesByCharacter: Multimodal: JinjaTemplate:false ReplyPrefix:} KnownUsecaseStrings:[FLAG_ANY FLAG_VAD] KnownUsecases: Pipeline:{TTS: LLM: Transcription: VAD:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder: SchemaType: GrammarTriggers:[]} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ArgumentRegex:[] ArgumentRegexKey: ArgumentRegexValue: ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionNameKey: FunctionArgumentsKey:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc0ab594ed8 MirostatTAU:0xc0ab594ed0 Mirostat:0xc0ab594ec8 NGPULayers: MMap:0xc0ab594ef8 MMlock:0xc0ab594ef9 LowVRAM:0xc0ab594ef9 Reranking:0xc0ab594ef9 Grammar: StopWords:[] Cutstrings:[] ExtractRegex:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc0ab594f08 NUMA:false LoraAdapter: LoraBase: LoraAdapters:[] LoraScales:[] LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: LoadFormat: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 DisableLogStatus:false DType: LimitMMPerPrompt:{LimitImagePerPrompt:0 LimitVideoPerPrompt:0 LimitAudioPerPrompt:0} MMProj: FlashAttention:false NoKVOffloading:false CacheTypeK: CacheTypeV: RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 CFGScale:0} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: AudioPath:} CUDA:false DownloadFiles:[{Filename:silero-vad.onnx SHA256:a4a068cd6cf1ea8355b84327595838ca748ec29a25bc91fc82e6c299ccdc5808 URI:https://huggingface.co/onnx-community/silero-vad/resolve/main/onnx/model.onnx}] Description: Usage: Options:[]})
api-1 | 1:06PM DBG Model: stablediffusion (config: {PredictionOptions:{BasicModelRequest:{Model:DreamShaper_8_pruned.safetensors} Language: Translate:false N:0 TopP:0xc0a94cae80 TopK:0xc0a94cae88 Temperature:0xc0a94cae90 Maxtokens:0xc0a94caec0 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc0a94caeb8 TypicalP:0xc0a94caeb0 Seed:0xc0a94caed0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 ClipSkip:0 Tokenizer:} Name:stablediffusion F16:0xc0a94cae15 Threads:0xc0a94cae70 Debug:0xc0a94caec8 Roles:map[] Embeddings:0xc0a94caec9 Backend:diffusers TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions: UseTokenizerTemplate:false JoinChatMessagesByCharacter: Multimodal: JinjaTemplate:false ReplyPrefix:} KnownUsecaseStrings:[FLAG_VIDEO FLAG_ANY FLAG_IMAGE] KnownUsecases: Pipeline:{TTS: LLM: Transcription: VAD:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder: SchemaType: GrammarTriggers:[]} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ArgumentRegex:[] ArgumentRegexKey: ArgumentRegexValue: ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionNameKey: FunctionArgumentsKey:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc0a94caea8 MirostatTAU:0xc0a94caea0 Mirostat:0xc0a94cae98 NGPULayers: MMap:0xc0a94caec8 MMlock:0xc0a94caec9 LowVRAM:0xc0a94caec9 Reranking:0xc0a94caec9 Grammar: StopWords:[] Cutstrings:[] ExtractRegex:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc0a94caed8 NUMA:false LoraAdapter: LoraBase: LoraAdapters:[] LoraScales:[] LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: LoadFormat: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 DisableLogStatus:false DType: LimitMMPerPrompt:{LimitImagePerPrompt:0 LimitVideoPerPrompt:0 LimitAudioPerPrompt:0} MMProj: FlashAttention:false NoKVOffloading:false CacheTypeK: CacheTypeV: RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 CFGScale:0} Diffusers:{CUDA:true PipelineType:StableDiffusionPipeline SchedulerType:k_dpmpp_2m EnableParameters:negative_prompt,num_inference_steps IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:25 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: AudioPath:} CUDA:false DownloadFiles:[{Filename:DreamShaper_8_pruned.safetensors SHA256: URI:huggingface://Lykon/DreamShaper/DreamShaper_8_pruned.safetensors}] Description: Usage:curl http://localhost:8080/v1/images/generations
api-1 | -H "Content-Type: application/json"
api-1 | -d '{
api-1 | "prompt": "|",
api-1 | "step": 25,
api-1 | "size": "512x512"
api-1 | }' Options:[]})
api-1 | 1:06PM DBG Model: text-embedding-ada-002 (config: {PredictionOptions:{BasicModelRequest:{Model:granite-embedding-107m-multilingual-f16.gguf} Language: Translate:false N:0 TopP:0xc0a6daee00 TopK:0xc0a6daee08 Temperature:0xc0a6daee10 Maxtokens:0xc0a6daee40 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc0a6daee38 TypicalP:0xc0a6daee30 Seed:0xc0a6daee50 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 ClipSkip:0 Tokenizer:} Name:text-embedding-ada-002 F16:0xc0a6daedf8 Threads:0xc0a6daedf0 Debug:0xc0a6daee48 Roles:map[] Embeddings:0xc0a6daeded Backend: TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions: UseTokenizerTemplate:false JoinChatMessagesByCharacter: Multimodal: JinjaTemplate:false ReplyPrefix:} KnownUsecaseStrings:[FLAG_ANY FLAG_EMBEDDINGS] KnownUsecases: Pipeline:{TTS: LLM: Transcription: VAD:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder: SchemaType: GrammarTriggers:[]} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ArgumentRegex:[] ArgumentRegexKey: ArgumentRegexValue: ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionNameKey: FunctionArgumentsKey:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc0a6daee28 MirostatTAU:0xc0a6daee20 Mirostat:0xc0a6daee18 NGPULayers:0xc0a94ca568 MMap:0xc0a6daee48 MMlock:0xc0a6daee49 LowVRAM:0xc0a6daee49 Reranking:0xc0a6daee49 Grammar: StopWords:[] Cutstrings:[] ExtractRegex:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc0a94ca560 NUMA:false LoraAdapter: LoraBase: LoraAdapters:[] LoraScales:[] LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: LoadFormat: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 DisableLogStatus:false DType: LimitMMPerPrompt:{LimitImagePerPrompt:0 LimitVideoPerPrompt:0 LimitAudioPerPrompt:0} MMProj: FlashAttention:false NoKVOffloading:false CacheTypeK: CacheTypeV: RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 CFGScale:0} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: AudioPath:} CUDA:false DownloadFiles:[] Description: Usage:You can test this model with curl like this:
api-1 |
api-1 | curl http://localhost:8080/embeddings -X POST -H "Content-Type: application/json" -d '{
api-1 | "input": "Your text string goes here",
api-1 | "model": "text-embedding-ada-002"
api-1 | }' Options:[gpu]})
api-1 | 1:06PM DBG Model: tts-1 (config: {PredictionOptions:{BasicModelRequest:{Model:en-us-amy-low.onnx} Language: Translate:false N:0 TopP:0xc0ab5959d0 TopK:0xc0ab5959d8 Temperature:0xc0ab5959e0 Maxtokens:0xc0ab595a10 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc0ab595a08 TypicalP:0xc0ab595a00 Seed:0xc0ab595a20 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 ClipSkip:0 Tokenizer:} Name:tts-1 F16:0xc0ab5959c8 Threads:0xc0ab5959c0 Debug:0xc0ab595a18 Roles:map[] Embeddings:0xc0ab595a19 Backend: TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions: UseTokenizerTemplate:false JoinChatMessagesByCharacter: Multimodal: JinjaTemplate:false ReplyPrefix:} KnownUsecaseStrings:[FLAG_ANY] KnownUsecases: Pipeline:{TTS: LLM: Transcription: VAD:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder: SchemaType: GrammarTriggers:[]} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ArgumentRegex:[] ArgumentRegexKey: ArgumentRegexValue: ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionNameKey: FunctionArgumentsKey:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc0ab5959f8 MirostatTAU:0xc0ab5959f0 Mirostat:0xc0ab5959e8 NGPULayers: MMap:0xc0ab595a18 MMlock:0xc0ab595a19 LowVRAM:0xc0ab595a19 Reranking:0xc0ab595a19 Grammar: StopWords:[] Cutstrings:[] ExtractRegex:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc0ab595a28 NUMA:false LoraAdapter: LoraBase: LoraAdapters:[] LoraScales:[] LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: LoadFormat: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 DisableLogStatus:false DType: LimitMMPerPrompt:{LimitImagePerPrompt:0 LimitVideoPerPrompt:0 LimitAudioPerPrompt:0} MMProj: FlashAttention:false NoKVOffloading:false CacheTypeK: CacheTypeV: RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 CFGScale:0} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: AudioPath:} CUDA:false DownloadFiles:[{Filename:voice-en-us-amy-low.tar.gz SHA256: URI:https://github.com/rhasspy/piper/releases/download/v0.0.2/voice-en-us-amy-low.tar.gz}] Description: Usage:To test if this model works as expected, you can use the following curl command:
api-1 |
api-1 | curl http://localhost:8080/tts -H "Content-Type: application/json" -d '{
api-1 | "model":"tts-1",
api-1 | "input": "Hi, this is a test."
api-1 | }' Options:[]})
api-1 | 1:06PM DBG Model: whisper-1 (config: {PredictionOptions:{BasicModelRequest:{Model:ggml-whisper-base.bin} Language: Translate:false N:0 TopP:0xc0a94cac00 TopK:0xc0a94cac08 Temperature:0xc0a94cac10 Maxtokens:0xc0a94cac40 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc0a94cac38 TypicalP:0xc0a94cac30 Seed:0xc0a94cac50 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 ClipSkip:0 Tokenizer:} Name:whisper-1 F16:0xc0a94cabf8 Threads:0xc0a94cabf0 Debug:0xc0a94cac48 Roles:map[] Embeddings:0xc0a94cac49 Backend:whisper TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions: UseTokenizerTemplate:false JoinChatMessagesByCharacter: Multimodal: JinjaTemplate:false ReplyPrefix:} KnownUsecaseStrings:[FLAG_ANY FLAG_TRANSCRIPT] KnownUsecases: Pipeline:{TTS: LLM: Transcription: VAD:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder: SchemaType: GrammarTriggers:[]} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ArgumentRegex:[] ArgumentRegexKey: ArgumentRegexValue: ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionNameKey: FunctionArgumentsKey:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc0a94cac28 MirostatTAU:0xc0a94cac20 Mirostat:0xc0a94cac18 NGPULayers: MMap:0xc0a94cac48 MMlock:0xc0a94cac49 LowVRAM:0xc0a94cac49 Reranking:0xc0a94cac49 Grammar: StopWords:[] Cutstrings:[] ExtractRegex:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc0a94cac58 NUMA:false LoraAdapter: LoraBase: LoraAdapters:[] LoraScales:[] LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: LoadFormat: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 DisableLogStatus:false DType: LimitMMPerPrompt:{LimitImagePerPrompt:0 LimitVideoPerPrompt:0 LimitAudioPerPrompt:0} MMProj: FlashAttention:false NoKVOffloading:false CacheTypeK: CacheTypeV: RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 CFGScale:0} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: AudioPath:} CUDA:false DownloadFiles:[{Filename:ggml-whisper-base.bin SHA256:60ed5bc3dd14eea856493d334349b405782ddcaf0028d4b5df4088345fba2efe URI:https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.bin}] Description: Usage:## example audio file
api-1 | wget --quiet --show-progress -O gb1.ogg https://upload.wikimedia.org/wikipedia/commons/1/1f/George_W_Bush_Columbia_FINAL.ogg
api-1 |
api-1 | ## Send the example audio file to the transcriptions endpoint
api-1 | curl http://localhost:8080/v1/audio/transcriptions
api-1 | -H "Content-Type: multipart/form-data"
api-1 | -F file="@$PWD/gb1.ogg" -F model="whisper-1"
api-1 | Options:[]})
api-1 | 1:06PM DBG Extracting backend assets files to /tmp/localai/backend_data
api-1 | 1:06PM DBG processing api keys runtime update
api-1 | 1:06PM DBG processing external_backends.json
api-1 | 1:06PM DBG external backends loaded from external_backends.json
api-1 | 1:06PM INF core/startup process completed!
api-1 | 1:06PM DBG No configuration file found at /tmp/localai/upload/uploadedFiles.json
api-1 | 1:06PM DBG No configuration file found at /tmp/localai/config/assistants.json
api-1 | 1:06PM DBG No configuration file found at /tmp/localai/config/assistantsFile.json
api-1 | 1:06PM DBG GPU vendor gpuVendor=nvidia
api-1 | 1:06PM INF LocalAI API is listening! Please connect to the endpoint for API documentation. endpoint=http://0.0.0.0:8085
api-1 | 1:09PM WRN Client error ip=172.22.0.5 latency="83.538µs" method=POST status=404 url=/rerank
api-1 | 1:09PM WRN SetDefaultModelNameToFirstAvailable used with no matching models installed
api-1 | 1:09PM DBG guessDefaultsFromFile: NGPULayers set NGPULayers=99999999
api-1 | 1:09PM DBG guessDefaultsFromFile: template already set name=lb-reranker-0.5b-v1.0
api-1 | 1:09PM DBG JINA Rerank Request received model=lb-reranker-0.5b-v1.0
api-1 | 1:09PM DBG Loading from the following backends (in order): [llama-cpp llama-cpp-fallback piper silero-vad stablediffusion-ggml whisper huggingface]
api-1 | 1:09PM INF Trying to load the model 'lb-reranker-0.5b-v1.0' with the backend '[llama-cpp llama-cpp-fallback piper silero-vad stablediffusion-ggml whisper huggingface]'
api-1 | 1:09PM INF [llama-cpp] Attempting to load
api-1 | 1:09PM INF BackendLoader starting backend=llama-cpp modelID=lb-reranker-0.5b-v1.0 o.model=lb-reranker-0.5B-v1.0-Q4_K_M.gguf
api-1 | 1:09PM DBG Loading model in memory from file: /models/lb-reranker-0.5B-v1.0-Q4_K_M.gguf
api-1 | 1:09PM DBG Loading Model lb-reranker-0.5b-v1.0 with gRPC (file: /models/lb-reranker-0.5B-v1.0-Q4_K_M.gguf) (backend: llama-cpp): {backendString:llama-cpp model:lb-reranker-0.5B-v1.0-Q4_K_M.gguf modelID:lb-reranker-0.5b-v1.0 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc0a4d562c8 externalBackends:map[] grpcAttempts:20 grpcAttemptsDelay:2 parallelRequests:false}
api-1 | 1:09PM DBG [llama-cpp-fallback] llama-cpp variant available
api-1 | 1:09PM INF [llama-cpp] attempting to load with AVX2 variant
api-1 | 1:09PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama-cpp-avx2
api-1 | 1:09PM DBG GRPC Service for lb-reranker-0.5b-v1.0 will be running at: '127.0.0.1:36719'
api-1 | 1:09PM DBG GRPC Service state dir: /tmp/go-processmanager2933902333
api-1 | 1:09PM DBG GRPC Service Started
api-1 | 1:09PM DBG Wait for the service to start up
api-1 | 1:09PM DBG Options: ContextSize:32768 Seed:995571563 NBatch:512 MMap:true NGPULayers:99999999 Threads:16 Options:"gpu"
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr I0000 00:00:1753103380.733650 76 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache, work_serializer_dispatch
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr I0000 00:00:1753103380.733815 76 ev_epoll1_linux.cc:125] grpc epoll fd: 3
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr I0000 00:00:1753103380.733952 76 server_builder.cc:392] Synchronous server. Num CQs: 1, Min pollers: 1, Max Pollers: 2, CQ timeout (msec): 10000
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr I0000 00:00:1753103380.734832 76 ev_epoll1_linux.cc:359] grpc epoll fd: 5
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr I0000 00:00:1753103380.735053 76 tcp_socket_utils.cc:634] TCP_USER_TIMEOUT is available. TCP_USER_TIMEOUT will be used thereafter
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stdout Server listening on 127.0.0.1:36719
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr start_llama_server: starting llama server
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr start_llama_server: waiting for model to be loaded
api-1 | 1:09PM DBG GRPC Service Ready
api-1 | 1:09PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:0xc0003bce58} sizeCache:0 unknownFields:[] Model:lb-reranker-0.5B-v1.0-Q4_K_M.gguf ContextSize:32768 Seed:995571563 NBatch:512 F16Memory:false MLock:false MMap:true VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:99999999 MainGPU: TensorSplit: Threads:16 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/lb-reranker-0.5B-v1.0-Q4_K_M.gguf PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 LoadFormat: DisableLogStatus:false DType: LimitImagePerPrompt:0 LimitVideoPerPrompt:0 LimitAudioPerPrompt:0 MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type: FlashAttention:false NoKVOffload:false ModelPath:/models LoraAdapters:[] LoraScales:[] Options:[gpu] CacheTypeKey: CacheTypeValue: GrammarTriggers:[] Reranking:false}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr build: 5767 (72babea5) with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr system info: n_threads = 16, n_threads_batch = -1, total_threads = 32
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr ggml_cuda_init: found 1 CUDA devices:
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr Device 0: NVIDIA RTX A5000, compute capability 8.6, VMM: yes
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr system_info: n_threads = 16 / 32 | CUDA : ARCHS = 500,610,700,750,800,860,890 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 |
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr srv load_model: loading model '/models/lb-reranker-0.5B-v1.0-Q4_K_M.gguf'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_load_from_file_impl: using device CUDA0 (NVIDIA RTX A5000) - 20857 MiB free
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: loaded meta data with 42 key-value pairs and 290 tensors from /models/lb-reranker-0.5B-v1.0-Q4_K_M.gguf (version GGUF V3 (latest))
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 0: general.architecture str = qwen2
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 1: general.type str = model
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 2: general.name str = Reranker_0.5_Cont_Filt_7Max
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 3: general.version str = v1.0
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 4: general.organization str = Lightblue
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 5: general.basename str = lb-reranker
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 6: general.size_label str = 0.5B
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 7: general.license str = apache-2.0
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 8: general.base_model.count u32 = 1
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 0.5B Instruct
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-0...
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 12: general.dataset.count u32 = 1
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 13: general.dataset.0.name str = Reranker_Continuous_Filt_Max7_Train
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 14: general.dataset.0.organization str = Lightblue
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 15: general.dataset.0.repo_url str = https://huggingface.co/lightblue/rera...
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 16: general.tags arr[str,2] = ["reranker", "text-generation"]
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 17: general.languages arr[str,96] = ["en", "zh", "es", "de", "ar", "ru", ...
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 18: qwen2.block_count u32 = 24
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 19: qwen2.context_length u32 = 32768
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 20: qwen2.embedding_length u32 = 896
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 21: qwen2.feed_forward_length u32 = 4864
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 22: qwen2.attention.head_count u32 = 14
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 23: qwen2.attention.head_count_kv u32 = 2
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 24: qwen2.rope.freq_base f32 = 1000000.000000
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 25: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 26: tokenizer.ggml.model str = gpt2
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 27: tokenizer.ggml.pre str = qwen2
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv  28:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 29: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 30: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 31: tokenizer.ggml.eos_token_id u32 = 151645
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 32: tokenizer.ggml.padding_token_id u32 = 151643
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 33: tokenizer.ggml.bos_token_id u32 = 151643
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 34: tokenizer.ggml.add_bos_token bool = false
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 35: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 36: general.quantization_version u32 = 2
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 37: general.file_type u32 = 15
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 38: quantize.imatrix.file str = /models_out/lb-reranker-0.5B-v1.0-GGU...
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 39: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 40: quantize.imatrix.entries_count i32 = 168
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - kv 41: quantize.imatrix.chunks_count i32 = 128
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - type f32: 121 tensors
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - type q5_0: 132 tensors
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - type q8_0: 13 tensors
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - type q4_K: 12 tensors
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_model_loader: - type q6_K: 12 tensors
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: file format = GGUF V3 (latest)
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: file type = Q4_K - Medium
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: file size = 373.71 MiB (6.35 BPW)
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr load: special tokens cache size = 22
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr load: token to piece cache size = 0.9310 MB
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: arch = qwen2
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: vocab_only = 0
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_ctx_train = 32768
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_embd = 896
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_layer = 24
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_head = 14
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_head_kv = 2
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_rot = 64
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_swa = 0
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: is_swa_any = 0
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_embd_head_k = 64
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_embd_head_v = 64
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_gqa = 7
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_embd_k_gqa = 128
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_embd_v_gqa = 128
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: f_norm_eps = 0.0e+00
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: f_norm_rms_eps = 1.0e-06
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: f_clamp_kqv = 0.0e+00
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: f_max_alibi_bias = 0.0e+00
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: f_logit_scale = 0.0e+00
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: f_attn_scale = 0.0e+00
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_ff = 4864
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_expert = 0
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_expert_used = 0
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: causal attn = 1
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: pooling type = -1
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: rope type = 2
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: rope scaling = linear
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: freq_base_train = 1000000.0
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: freq_scale_train = 1
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_ctx_orig_yarn = 32768
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: rope_finetuned = unknown
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: ssm_d_conv = 0
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: ssm_d_inner = 0
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: ssm_d_state = 0
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: ssm_dt_rank = 0
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: ssm_dt_b_c_rms = 0
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: model type = 1B
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: model params = 494.03 M
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: general.name = Reranker_0.5_Cont_Filt_7Max
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: vocab type = BPE
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_vocab = 151936
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: n_merges = 151387
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: BOS token = 151643 '<|endoftext|>'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: EOS token = 151645 '<|im_end|>'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: EOT token = 151645 '<|im_end|>'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: PAD token = 151643 '<|endoftext|>'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: LF token = 198 'Ċ'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: FIM PRE token = 151659 '<|fim_prefix|>'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: FIM SUF token = 151661 '<|fim_suffix|>'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: FIM MID token = 151660 '<|fim_middle|>'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: FIM PAD token = 151662 '<|fim_pad|>'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: FIM REP token = 151663 '<|repo_name|>'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: FIM SEP token = 151664 '<|file_sep|>'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: EOG token = 151643 '<|endoftext|>'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: EOG token = 151645 '<|im_end|>'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: EOG token = 151662 '<|fim_pad|>'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: EOG token = 151663 '<|repo_name|>'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: EOG token = 151664 '<|file_sep|>'
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr print_info: max token length = 256
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr load_tensors: loading model tensors, this can take a while... (mmap = true)
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr load_tensors: offloading 24 repeating layers to GPU
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr load_tensors: offloading output layer to GPU
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr load_tensors: offloaded 25/25 layers to GPU
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr load_tensors: CPU_Mapped model buffer size = 137.94 MiB
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr load_tensors: CUDA0 model buffer size = 373.73 MiB
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr .................................................
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_context: constructing llama_context
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_context: n_seq_max = 1
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_context: n_ctx = 32768
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_context: n_ctx_per_seq = 32768
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_context: n_batch = 512
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_context: n_ubatch = 512
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_context: causal_attn = 1
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_context: flash_attn = 0
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_context: freq_base = 1000000.0
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_context: freq_scale = 1
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_context: CUDA_Host output buffer size = 0.58 MiB
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_kv_cache_unified: CUDA0 KV buffer size = 384.00 MiB
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_kv_cache_unified: size = 384.00 MiB ( 32768 cells, 24 layers, 1 seqs), K (f16): 192.00 MiB, V (f16): 192.00 MiB
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_context: CUDA0 compute buffer size = 967.00 MiB
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_context: CUDA_Host compute buffer size = 65.76 MiB
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_context: graph nodes = 942
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr llama_context: graph splits = 2
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr common_init_from_params: setting dry_penalty_last_n to ctx_size = 32768
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
api-1 | 1:09PM INF [llama-cpp] Loads OK
api-1 | 1:09PM ERR Server error error="rpc error: code = Unimplemented desc = This server does not support reranking. Start it with `--reranking` and without `--embedding`" ip=172.22.0.5 latency=2.896114785s method=POST status=500 url=/v1/rerank
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr srv init: initializing slots, n_slots = 1
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr slot init: id 0 | task -1 | new slot n_ctx_slot = 32768
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr start_llama_server: model loaded
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr start_llama_server: chat template, chat_template: {%- if tools %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- '<|im_start|>system\n' }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- if messages[0]['role'] == 'system' %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- messages[0]['content'] }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- else %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- endif %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n" }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- for tool in tools %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- "\n" }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- tool | tojson }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- endfor %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{"name": <function-name>, "arguments": <args-json-object>}\n</tool_call><|im_end|>\n" }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- else %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- if messages[0]['role'] == 'system' %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- else %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n' }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- endif %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- endif %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- for message in messages %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- elif message.role == "assistant" %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- '<|im_start|>' + message.role }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- if message.content %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- '\n' + message.content }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- endif %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- for tool_call in message.tool_calls %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- if tool_call.function is defined %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- set tool_call = tool_call.function %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- endif %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- '\n<tool_call>\n{"name": "' }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- tool_call.name }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- '", "arguments": ' }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- tool_call.arguments | tojson }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- '}\n</tool_call>' }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- endfor %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- '<|im_end|>\n' }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- elif message.role == "tool" %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- '<|im_start|>user' }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- endif %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- '\n<tool_response>\n' }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- message.content }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- '\n</tool_response>' }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- '<|im_end|>\n' }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- endif %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- endif %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- endfor %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- if add_generation_prompt %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {{- '<|im_start|>assistant\n' }}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr {%- endif %}
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr , example_format: '<|im_start|>system
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr You are a helpful assistant<|im_end|>
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr <|im_start|>user
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr Hello<|im_end|>
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr <|im_start|>assistant
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr Hi there<|im_end|>
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr <|im_start|>user
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr How are you?<|im_end|>
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr <|im_start|>assistant
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr '
api-1 | 1:09PM DBG GRPC(lb-reranker-0.5b-v1.0-127.0.0.1:36719): stderr srv update_slots: all slots are idle
api-1 | 1:09PM WRN SetDefaultModelNameToFirstAvailable used with no matching models installed
api-1 | 1:09PM DBG guessDefaultsFromFile: NGPULayers set NGPULayers=99999999
api-1 | 1:09PM DBG guessDefaultsFromFile: template already set name=lb-reranker-0.5b-v1.0
api-1 | 1:09PM DBG JINA Rerank Request received model=lb-reranker-0.5b-v1.0
api-1 | 1:09PM DBG Model already loaded in memory: lb-reranker-0.5b-v1.0
api-1 | 1:09PM DBG Checking model availability (lb-reranker-0.5b-v1.0)
api-1 | 1:09PM DBG Model 'lb-reranker-0.5b-v1.0' already loaded
api-1 | 1:09PM ERR Server error error="rpc error: code = Unimplemented desc = This server does not support reranking. Start it with `--reranking` and without `--embedding`" ip=172.22.0.5 latency=511.015859ms method=POST status=500 url=/v1/rerank
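Once the backend is restarted with reranking enabled, the endpoint can be smoke-tested directly before pointing Dify at it. Note that the very first request in the logs hit /rerank and got a 404: LocalAI serves the Jina-compatible rerank API at /v1/rerank, which is what Dify should be configured to call. A minimal test, assuming the query/documents/top_n payload of the Jina rerank schema that the "JINA Rerank Request" handler above expects:
Bash
# Hypothetical smoke test against the /v1/rerank endpoint seen in the logs
# (port 8085 as mapped in the compose file above).
curl http://localhost:8085/v1/rerank \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lb-reranker-0.5b-v1.0",
    "query": "What is LocalAI?",
    "documents": ["LocalAI is a self-hosted, OpenAI-compatible API server.", "Bananas are yellow."],
    "top_n": 1
  }'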