Congrats on the launch! I'm building an app that interfaces with OpenAI GPT models and they recently released an API to upload and create text embeddings.
I watched most of your Loom and was left wondering: why wouldn't I just use them directly instead of you?
Thank you, and good question! If you're comfortable with the quality of OpenAI's embeddings, happy performing your own chunking and rolling your own integration with a vector DB, and don't need Vellum's other features that surround the usage of those embeddings, then Vellum is probably not a good fit. Vellum's Search offering is most valuable to companies that want to experiment with different embedding models, don't want to manage their own semantic search infra, and want a tight integration with how those embeddings are used downstream.
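
For concreteness, here's roughly what the DIY path looks like. This is a minimal sketch assuming the openai Python client (v1+), with naive fixed-size chunking and brute-force cosine similarity standing in for a real vector DB; the model name, chunk size, and variable names are illustrative, not anything specific to Vellum:

    import math
    from openai import OpenAI  # assumes openai>=1.0

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def chunk(text: str, size: int = 500) -> list[str]:
        # Naive fixed-size character chunks, no overlap
        return [text[i:i + size] for i in range(0, len(text), size)]

    def embed(texts: list[str]) -> list[list[float]]:
        # Illustrative model name; swap in whichever embedding model you use
        resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
        return [d.embedding for d in resp.data]

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return dot / norm

    document = "...your document text..."
    chunks = chunk(document)
    index = list(zip(chunks, embed(chunks)))  # in practice: upsert into a vector DB

    query_vec = embed(["your search query"])[0]
    best = max(index, key=lambda pair: cosine(query_vec, pair[1]))
    print(best[0])  # the most similar chunk

Everything above (chunking strategy, embedding model choice, the index itself) is what you'd own and maintain yourself on the DIY path; that's the part Vellum takes off your plate.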