Cheapest GPT-4 Turbo, GPT 4 Vision, ChatGPT OpenAI AI API

FREEMIUM
By NextAPI | Updated 3 days ago | Artificial Intelligence/Machine Learning
Popularity: 9.8 / 10
Latency: 173ms
Service Level: 100%
Health Check: 100%

How to set up my own local server with this API?

thelege2nd
2 months ago

I want to subscribe, but I want to know if this will serve my use case. I want to use this API to power AI apps locally, such as AutoGen, CrewAI, or any app that allows adding a custom API base URL. The problem is, those apps only allow adding a base URL such as localhost:8080. Do I need to create my own local server, send the requests to my local server, and then have those requests forwarded to this API? If yes, how can I do that?

NextAPI commented 2 months ago

Unfortunately, using RapidAPI as the base URL won't work on its own. The header RapidAPI uses is "X-RapidAPI-Key: RAPIDAPI_API_KEY", while OpenAI uses "Authorization: Bearer $OPENAI_API_KEY".

But this problem is fixable: you would just need to update the code of the application you are using to send the RapidAPI header instead. Apart from the API key, everything should be fine, as we designed our API to be as similar to OpenAI's as possible; the request format and URL syntax are the same for us, so once you fix the API key header everything should work fine.
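
For example, a direct request with Python's requests library would look roughly like this. This is only a minimal sketch: the host name, endpoint path and model name below are placeholders, so take the real values from this API's RapidAPI page.

import requests

RAPIDAPI_KEY = "YOUR_RAPIDAPI_KEY"                     # replaces OpenAI's Bearer token
RAPIDAPI_HOST = "example-openai-proxy.p.rapidapi.com"  # placeholder host from the listing

response = requests.post(
    f"https://{RAPIDAPI_HOST}/v1/chat/completions",    # assumed to mirror OpenAI's path
    headers={
        "Content-Type": "application/json",
        "X-RapidAPI-Key": RAPIDAPI_KEY,                # RapidAPI auth header
        "X-RapidAPI-Host": RAPIDAPI_HOST,              # routing header RapidAPI expects
    },
    json={
        "model": "gpt-4-turbo-preview",                # placeholder model name
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
print(response.json())

The request body stays in OpenAI's format; only the auth headers change.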

I would recommend either doing that or making your own server that you run locally.

Also, we know some people like using the official OpenAI libraries, and those should work as well, as long as you modify the code to send your API key in the RapidAPI format.
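
With the official OpenAI Python client (v1.x), that modification can be as small as overriding the base URL and the default headers, along these lines. Again a sketch, not a drop-in: the host name and model name are placeholders, and it assumes the API mirrors OpenAI's /v1 paths.

from openai import OpenAI

RAPIDAPI_KEY = "YOUR_RAPIDAPI_KEY"
RAPIDAPI_HOST = "example-openai-proxy.p.rapidapi.com"  # placeholder host from the listing

client = OpenAI(
    api_key=RAPIDAPI_KEY,                              # sent as "Authorization: Bearer ..."; RapidAPI checks X-RapidAPI-Key instead
    base_url=f"https://{RAPIDAPI_HOST}/v1",            # assumes the API mirrors OpenAI's /v1 paths
    default_headers={
        "X-RapidAPI-Key": RAPIDAPI_KEY,                # the header RapidAPI actually authenticates with
        "X-RapidAPI-Host": RAPIDAPI_HOST,
    },
)

completion = client.chat.completions.create(
    model="gpt-4-turbo-preview",                       # placeholder model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)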

Hope this was helpful, do keep us updated and let us know if you have any other questions!

thelege2nd commented 2 months ago

Can I send requests straight to this API, as in the image below, without creating my own local server?

So my X-RapidAPI-Key is basically the replacement for the OpenAI API key,
and the X-RapidAPI-Host URL is the base LLM URL?

https://i.imgur.com/NLmrcsF.png

Is this bad practice in any way, or will it even work?

NextAPI commented 2 months ago

Hey thelege2nd,

Yeah, our API would serve your use case. To achieve what you need, you would create your own server just like you said, then put your local URL into the application you are trying to use. You would mimic the format that OpenAI expects; for example, they use the "/v1/chat/completions" endpoint to get the response, so you would create that endpoint in your server, and that will work for any application needing an API base URL.

You can check out how to mimic the OpenAI API here on their docs: https://platform.openai.com/docs/guides/text-generation/chat-completions-api
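
A minimal local proxy along those lines could look like the sketch below, assuming Flask and requests are installed (pip install flask requests). The host name is a placeholder from the listing, and streaming responses are not handled here.

import requests
from flask import Flask, jsonify, request

RAPIDAPI_KEY = "YOUR_RAPIDAPI_KEY"
RAPIDAPI_HOST = "example-openai-proxy.p.rapidapi.com"  # placeholder host from the listing

app = Flask(__name__)

@app.route("/v1/chat/completions", methods=["POST"])
def chat_completions():
    # Forward the OpenAI-format body unchanged; only the auth headers differ.
    upstream = requests.post(
        f"https://{RAPIDAPI_HOST}/v1/chat/completions",  # assumed to mirror OpenAI's path
        headers={
            "Content-Type": "application/json",
            "X-RapidAPI-Key": RAPIDAPI_KEY,
            "X-RapidAPI-Host": RAPIDAPI_HOST,
        },
        json=request.get_json(force=True),
        timeout=120,
    )
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    # Run the proxy, then set the application's API base URL to http://localhost:8080/v1
    app.run(host="127.0.0.1", port=8080)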

Hope that was helpful! Do let us know if you have any other questions, and how it goes with it!

  • Jack from NextAPI
