GPT4All is a free-to-use, locally running, privacy-aware chatbot. It requires no GPU and no internet connection, and it ships with popular models as well as its own, such as GPT4All Falcon and Wizard. This notebook explains how to use GPT4All embeddings with LangChain.

Install GPT4All’s Python Bindings

pip install -qU gpt4all > /dev/null
from langchain_community.embeddings import GPT4AllEmbeddings
gpt4all_embd = GPT4AllEmbeddings()
Model downloaded at:  /Users/rlm/.cache/gpt4all/ggml-all-MiniLM-L6-v2-f16.bin
text = "This is a test document."

Embed the Textual Data

query_result = gpt4all_embd.embed_query(text)
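`embed_query` returns the embedding as a plain Python list of floats, so you can work with it using nothing but the standard library. The sketch below uses a short hypothetical vector in place of a real embedding (a real one from the MiniLM model above would be much longer) and L2-normalizes it, a common preprocessing step before similarity search:

```python
import math

# Hypothetical stand-in for a real embedding; embed_query returns a list of floats.
query_result = [0.3, -0.4, 0.5, 0.0]

# L2-normalize the vector so that dot products become cosine similarities.
norm = math.sqrt(sum(x * x for x in query_result))
unit = [x / norm for x in query_result]

print(len(unit))  # dimensionality of the (stand-in) embedding
```

With unit-length vectors, comparing a query against stored document embeddings reduces to a dot product.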
With embed_documents you can embed multiple pieces of text. You can also map these embeddings with Nomic’s Atlas to see a visual representation of your data.
doc_result = gpt4all_embd.embed_documents([text])
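A typical use of document embeddings is ranking them against a query by cosine similarity. The following is a minimal sketch using small hypothetical vectors in place of real `embed_documents` output (real embeddings would have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two equal-length float vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings standing in for embed_documents output.
doc_result = [[0.1, 0.2, 0.3], [0.2, 0.4, 0.6]]

# Parallel vectors have cosine similarity 1.0.
print(cosine_similarity(doc_result[0], doc_result[1]))
```

In practice you would pass real embeddings from `embed_documents` (and `embed_query`) into a function like this, or delegate the comparison to a vector store.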
