✨ Features
- **Single Text Embedding**: Generate embeddings for individual text queries via the `/embed` endpoint
- **Batch Processing**: Process multiple texts in one request with the `/embed-batch` endpoint
- **Optimized Model**: Uses Nomic's specialized embedding model, optimized for search queries
- **Fast Performance**: Built on the Hono framework for speed and efficiency
- **Error Handling**: Comprehensive error handling and logging for reliable operation
- **Easy Integration**: RESTful API design for seamless integration with any application
📦 Installation
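The original section does not list exact commands; assuming a standard Node.js project managed with npm, installation typically looks like this (the repository URL is a placeholder):

```shell
# Clone the repository (URL is a placeholder)
git clone <repository-url>
cd <repository-directory>

# Install dependencies
npm install
```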
🚀 Usage
Start the server:
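A typical way to launch the service (the exact script name is an assumption; check `package.json` for the project's actual start script):

```shell
# Start the server (script name assumed)
npm start
```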
The service runs on port 3000 by default.
🔌 API Endpoints
POST /embed
Generate embedding for a single text.
Request:
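A representative request body (the `text` field name is an assumption based on the endpoint's purpose; verify against the route handler):

```json
{
  "text": "What is the capital of France?"
}
```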
Response:
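A representative response (the `embedding` field name is an assumption; nomic-embed-text-v1 produces 768-dimensional vectors, truncated here for readability):

```json
{
  "embedding": [0.0123, -0.0456, "..."]
}
```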
POST /embed-batch
Generate embeddings for multiple texts.
Request:
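A representative batch request body (the `texts` field name is an assumption; verify against the route handler):

```json
{
  "texts": [
    "first search query",
    "second search query"
  ]
}
```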
Response:
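A representative batch response (the `embeddings` field name is an assumption; one vector per input text, truncated for readability):

```json
{
  "embeddings": [
    [0.0123, -0.0456, "..."],
    [0.0789, -0.0012, "..."]
  ]
}
```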
🤖 Model Information
This service uses the `Xenova/nomic-embed-text-v1` model, which is designed for text embedding tasks and optimized for search applications. The model requires a "search_query:" prefix for optimal performance with query texts.
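The prefixing requirement can be illustrated with a small helper (the function name is hypothetical; the actual service may inline this logic):

```javascript
// Hypothetical helper mirroring the service's behavior:
// the Nomic model expects query texts to carry a "search_query: " prefix.
function toQueryInput(text) {
  const PREFIX = "search_query: ";
  // Avoid double-prefixing if the caller already added it
  return text.startsWith(PREFIX) ? text : PREFIX + text;
}

console.log(toQueryInput("best pizza in town"));
// "search_query: best pizza in town"
```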
📄 License
This project is licensed under the MIT License.
📝 Important Notes
- The service automatically adds the "search_query:" prefix to input texts as required by the Nomic model
- All embeddings are normalized and use mean pooling
- Batch processing is handled sequentially to maintain accuracy
- The service enables remote model loading via `env.allowRemoteModels = true`
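The mean pooling and normalization mentioned above can be sketched in plain JavaScript (a simplified illustration, not the service's actual code; real per-token vectors come from the model):

```javascript
// Mean pooling: average the per-token vectors into one sentence vector.
function meanPool(tokenEmbeddings) {
  const dim = tokenEmbeddings[0].length;
  const pooled = new Array(dim).fill(0);
  for (const vec of tokenEmbeddings) {
    for (let i = 0; i < dim; i++) pooled[i] += vec[i];
  }
  return pooled.map((v) => v / tokenEmbeddings.length);
}

// L2 normalization: scale to unit length so dot products
// behave like cosine similarity.
function l2Normalize(vec) {
  const norm = Math.sqrt(vec.reduce((sum, v) => sum + v * v, 0));
  return vec.map((v) => v / norm);
}

// Toy example with two 3-dimensional "token" vectors
const tokens = [
  [1, 0, 2],
  [3, 0, 0],
];
const embedding = l2Normalize(meanPool(tokens));
console.log(embedding); // unit-length vector
```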