🚀 Text Embedding Service

A Node.js API service for generating text embeddings using the nomic-embed-text-v1 model via Hugging Face Transformers.

✨ Features

  • Single Text Embedding: generate an embedding for an individual text via the /embed endpoint
  • Batch Processing: process multiple texts in one request with the /embed-batch endpoint
  • Optimized Model: uses Nomic's specialized embedding model, optimized for search queries
  • Fast Performance: built on the lightweight Hono framework
  • Error Handling: comprehensive error handling and logging for reliable operation
  • Easy Integration: RESTful API design for straightforward integration with any application

📦 Installation

npm install

🚀 Usage

Start the server:

npm start

The service runs on port 3000 by default.

🔌 API Endpoints

POST /embed

Generate an embedding for a single text.

Request:
{ "text": "Your search query here" }
Response:
{ "embedding": [0.1, 0.2, 0.3, ...] }
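A minimal client call might look like the following sketch, assuming the service is reachable at localhost:3000 and Node.js 18+ (where fetch is built in); the helper names are illustrative, not part of the service itself:

```javascript
// Pure helper: build the JSON payload for /embed.
function buildEmbedRequest(text) {
  return JSON.stringify({ text });
}

// Hypothetical client call against a locally running service.
async function getEmbedding(text) {
  const res = await fetch("http://localhost:3000/embed", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildEmbedRequest(text),
  });
  if (!res.ok) throw new Error(`Embedding request failed: ${res.status}`);
  const { embedding } = await res.json();
  return embedding; // array of floats
}
```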

POST /embed-batch

Generate embeddings for multiple texts.

Request:
{ "texts": ["First text", "Second text", "Third text"] }
Response:
{ "embeddings": [ [0.1, 0.2, 0.3, ...], [0.4, 0.5, 0.6, ...], [0.7, 0.8, 0.9, ...] ] }
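A batch request can be issued the same way. The sketch below is a hypothetical client (names are illustrative); it also shows a common follow-up step, comparing two returned embeddings. Because the service returns normalized vectors, cosine similarity reduces to a plain dot product:

```javascript
// Hypothetical batch client, assuming the service runs on localhost:3000.
async function getEmbeddings(texts) {
  const res = await fetch("http://localhost:3000/embed-batch", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ texts }),
  });
  if (!res.ok) throw new Error(`Batch embedding failed: ${res.status}`);
  const { embeddings } = await res.json();
  return embeddings; // one embedding array per input text
}

// Cosine similarity between two embeddings; for unit-length vectors
// this is just the dot product.
function cosineSimilarity(a, b) {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}
```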

🤖 Model Information

This service uses the Xenova/nomic-embed-text-v1 model, which is designed for text embedding tasks and optimized for search applications. The model requires a "search_query:" prefix on query texts for optimal performance.
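As a sketch, the prefixing step can be expressed as two small pure functions (the helper names are illustrative, not from the service's source; the exact prefix spacing follows the Nomic model card's "search_query: " convention):

```javascript
// Prepend the task prefix the Nomic model expects for query texts.
function toModelInput(text) {
  return `search_query: ${text}`;
}

// Batch variant: prefix every text before it is handed to the model.
function toModelInputs(texts) {
  return texts.map(toModelInput);
}
```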

📄 License

This project is licensed under the MIT License.

📝 Important Notes

  • The service automatically adds the "search_query:" prefix to input texts as required by the Nomic model
  • All embeddings are normalized and use mean pooling
  • Batch processing is handled sequentially to maintain accuracy
  • The service allows remote model loading via env.allowRemoteModels = true
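To illustrate the second note above, here is a minimal plain-JavaScript sketch of mean pooling followed by L2 normalization. The service itself relies on the library's built-in post-processing; this dependency-free version only shows the underlying math:

```javascript
// Mean pooling: average the per-token embeddings into one vector.
function meanPool(tokenEmbeddings) {
  const n = tokenEmbeddings.length;
  const dim = tokenEmbeddings[0].length;
  const pooled = new Array(dim).fill(0);
  for (const token of tokenEmbeddings) {
    for (let i = 0; i < dim; i++) pooled[i] += token[i] / n;
  }
  return pooled;
}

// L2 normalization: scale the vector to unit length.
function l2Normalize(vec) {
  const norm = Math.hypot(...vec);
  return vec.map((x) => x / norm);
}
```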