Generating 'LLaVA API keys' is a problem

For AI image captioning on our 'D' sites, we can either use OpenAI's paid plans (you even have to keep an advance credit balance in your account just to begin) or the free and open LLaVA AI API.

But I've been Googling for the past 3 hours for guidance on how to generate LLaVA API keys, and even after watching many videos, I haven't found anything pointing me in the right direction.

The LLaVA interface/website doesn't seem to have any option to generate the needed API keys straight away:

I think it would be very valuable for users if there were just a small link that could point inquisitive users in the right direction. Something similar to this:
[Screenshot: a configuration setting labeled "ai google custom search api key" with the key blurred out, and a red hand-drawn circle highlighting the description below it, which includes a URL to the Google Custom Search API developers page.]

Or better, to this:
[image]

Is this what you are looking for? Get your API Token
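If you want to confirm the token was generated correctly before touching any site settings, a quick check like this should print your account name. This is only a minimal sketch run outside Discourse, using Hugging Face's whoami endpoint; the token value is a placeholder.

```python
# Sanity check: confirm a Hugging Face API token is valid.
# Requires the `requests` package; the token below is a placeholder.
import requests

HF_TOKEN = "hf_xxx"  # paste the token from the "Get your API Token" page

resp = requests.get(
    "https://huggingface.co/api/whoami-v2",
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()  # raises an error if the token is rejected (401)
print(resp.json().get("name"))  # the account the token belongs to
```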


Thank you.

But I found that it has perhaps only helped me move one step forward. For comparison: with Google Gemini, as soon as I filled that key into my D-Site settings, everything depending on Gemini started working perfectly well.

But even after filling in this Hugging Face API secret key (the one you guided me to) in the Disco-Settings, image captioning gives 'Error 500' (the same image captioning works fine if I choose 'OpenAI GPT-4 Vision Preview' as the image captioning model).
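To narrow down whether the 500 comes from the key itself or from the settings, one thing that can be tried is calling the hosted Hugging Face Inference API directly with the same key. This is only a hedged sketch: the model name below is just an example image-to-text model, not necessarily what the plugin uses.

```python
# Diagnostic sketch: test the Hugging Face key against the hosted Inference API
# with an image-to-text model, independently of Discourse.
import requests

HF_TOKEN = "hf_xxx"           # the same Hugging Face API secret key
IMAGE_PATH = "test.jpg"       # any local test image
MODEL = "Salesforce/blip-image-captioning-base"  # example model only

with open(IMAGE_PATH, "rb") as f:
    image_bytes = f.read()

resp = requests.post(
    f"https://api-inference.huggingface.co/models/{MODEL}",
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
    data=image_bytes,
    timeout=120,
)
print(resp.status_code)
print(resp.json())  # e.g. [{"generated_text": "a cat sitting on a sofa"}] on success
```

If this returns a caption but the site still gives Error 500, the key itself is fine and the problem is more likely in the endpoint or settings.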

Also, LLaVA seems to be a different case: there are quite a few empty fields in the D-Site Settings named Hugging Face or LLaVA (why they use LLaVA in one place and Hugging Face in another also adds to the confusion), and I'm sure those fields aren't redundant.

So, can you point me towards some resource on the internet that could help me get the values for all these empty fields in the D-Site Settings, or help me implement this properly?

I think there is some configuration information here:


For LLaVa, we only support self-hosting via the ghcr.io/xfalcox/llava:latest container image at the moment.

If you do have access to a server with a GPU with at least 24 GB of VRAM, you can self-host it; otherwise I recommend sticking to GPT-4V.
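If it helps, here is a rough way to check the VRAM requirement on a candidate server. This is a sketch assuming PyTorch with CUDA is installed there, and the docker command in the closing comment is an assumption about how the image is typically run, so check the image's docs for ports and environment variables.

```python
# Rough check of whether a server meets the "at least 24 GB VRAM" requirement
# before trying to self-host the LLaVA container. Assumes PyTorch with CUDA.
import torch

if not torch.cuda.is_available():
    print("No CUDA GPU visible, so self-hosting LLaVA here is not an option.")
else:
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"{props.name}: {vram_gb:.1f} GB VRAM")
    print("OK for LLaVA" if vram_gb >= 24 else "Less than 24 GB, probably stick to GPT-4V")

# The container itself would then be started with something along the lines of
#   docker run --gpus all ghcr.io/xfalcox/llava:latest
# (port mappings and environment variables are assumptions; check the image docs).
```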

