sentence_transformers is a pip-installable library offering a range of transformer models for computing embeddings.
Computing embeddings
Lists or strings as input
You can pass either a list of strings or a single string: a list yields one embedding per string, a single string yields one embedding. Both calls below therefore embed "hello world".
embeddings = model.encode(["hello world"])       # 2D array, shape (1, dim)
embedding_single = model.encode("hello world")   # 1D vector, shape (dim,)
# embeddings[0] == embedding_single (elementwise)
In practice the two results contain the same values: the pretrained weights are fixed and inference is deterministic, so encoding the same string always yields the same vector. What differs is the return shape - a list input produces a 2D array with one row per string, while a string input produces a single 1D vector.
Prefer passing a list: encoding many strings in one batched call is considerably faster than calling encode once per string.