Hosted LangServe comes with a built-in playground and docs site that you can share with your colleagues to try out the application and gather feedback faster.
Easy to get started
Hosted LangServe integrates with GitHub, so once you connect LangServe to your repository, the service will take care of spinning up a container to run your application.
Observability built in
Hosted LangServe is deeply integrated with LangSmith by default. All inputs and outputs from your server are automatically logged to LangSmith, so you can easily debug issues and understand your chain's behavior.
Native regression testing
Hosted LangServe will show you the exact branch and commit deployed at any given time and how that version of your application is performing.
Automatic documentation
Documentation for your LangServe application is generated automatically, so your users and collaborators can take advantage of all the supported endpoints.
Hosted LangServe applications are hosted in GCP us-central1.
What endpoints will my application support out of the box?
You will receive access to invoke, batch, and stream endpoints that mirror the functionality of the Runnables API. You will also receive access to a feedback endpoint where you can add user feedback to runs. For a full list of endpoints, please click on your deployment's URL in your hosted LangServe application, which will redirect to the accompanying docs page.
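As a rough sketch of what calling these endpoints looks like, the snippet below builds the JSON request bodies for the invoke and batch endpoints. The deployment URL and input schema here are placeholders; check your deployment's docs page for the exact schema your runnable expects.

```python
import json

# Placeholder -- replace with the URL shown on your deployment page.
BASE_URL = "https://my-app.example.com"

def invoke_payload(input_value, config=None):
    """Build the JSON body for a POST to /invoke (a single input)."""
    body = {"input": input_value}
    if config:
        body["config"] = config
    return json.dumps(body)

def batch_payload(inputs):
    """Build the JSON body for a POST to /batch (a list of inputs)."""
    return json.dumps({"inputs": inputs})

# POST these bodies to f"{BASE_URL}/invoke" or f"{BASE_URL}/batch".
# The /stream endpoint takes the same body as /invoke and returns
# a streamed (server-sent events) response.
print(invoke_payload({"topic": "bears"}))
print(batch_payload([{"topic": "bears"}, {"topic": "cats"}]))
```

If you are calling the server from Python, the `langserve` package's `RemoteRunnable` client wraps these endpoints behind the standard Runnable interface instead of raw HTTP.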
Do you support an SLA or failover for Hosted LangServe?
Not at the moment, since the product is still in beta.
How big are Hosted LangServe instances?
Each deployment supports up to 1 GB of memory. If you have a use case that requires more, please reach out to email@example.com with the details.
Ready to start shipping reliable GenAI apps faster?
LangChain and LangSmith are critical parts of the reference architecture to get you from prototype to production.