This guide walks you through installing and running Simba on your local system using pip, Git, or Docker; choose the method that suits you best. If you simply want to use the SDK, we recommend the pip installation method. If you want more control over the source code, install the full system from Git. If you want a prebuilt solution, use Docker.
Installation Methods
1. Install simba-core

simba-core is the PyPI package that contains the server logic and the API; it must be installed and running before you can use the SDK. To install its dependencies faster, we recommend using uv.
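The step above amounts to a single install command; the uv variant is an optional faster equivalent:

```shell
# Install the server package from PyPI
pip install simba-core

# Or, for faster dependency resolution, using uv
uv pip install simba-core
```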
2. Create a config.yaml file

The config.yaml file is one of the most important parts of this setup: it configures the embedding model, the vector store type, the retrieval strategy, the database, the Celery worker used for parsing, and the LLM you are using. Go to your project root and edit config.yaml; you can use the example below as inspiration.

Note: the config file must be in the directory from which you run Simba, otherwise it will not be picked up.
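The following is an illustrative sketch of such a config.yaml, covering the options listed above. The key names and values here are assumptions based on that description, not the project's actual schema; check the reference config shipped with Simba for the exact keys.

```yaml
# Illustrative config.yaml -- key names are assumptions, not the actual schema.
llm:
  provider: openai            # or ollama, mistral
  model: gpt-4o-mini
embedding:
  provider: openai
  model: text-embedding-3-small
vector_store:
  provider: faiss             # faiss, chroma, pinecone, or milvus
retrieval:
  strategy: hybrid            # retrieval strategy
database:
  provider: postgres
celery:
  broker_url: redis://localhost:6379/0   # Redis backs the parsing workers
```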
3. Create a .env file

If you want to use OpenAI or Mistral AI, log your chatbot traces with LangSmith, or run models through Ollama, specify the corresponding keys in your .env file.
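A sketch of such a .env file is below. The variable names are the conventional ones used by these providers, but confirm the exact names Simba reads before relying on them:

```shell
# Illustrative .env -- variable names are assumptions based on provider conventions.
OPENAI_API_KEY=sk-...
MISTRAL_API_KEY=...
LANGCHAIN_TRACING_V2=true      # LangSmith tracing
LANGCHAIN_API_KEY=...
OLLAMA_BASE_URL=http://localhost:11434
```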
4. Run the server

Now that you have your .env and config.yaml, you can start the server. It will listen at http://localhost:8000, and you will see log messages in the console.
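As a sketch, assuming simba-core installs a `simba` command-line entry point (check the package README for the exact launch command):

```shell
# Start the Simba server (command name is an assumption)
simba server
```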
5. Install the SDK

With the server running, you can install the SDK package to start using Simba in local mode.
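The SDK ships as a separate PyPI package; the package name below is an assumption, so check the project docs for the exact name:

```shell
# Install the Simba client SDK (package name is an assumption)
pip install simba-client
```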
6. Basic usage
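The snippet below is a sketch of what basic usage looks like against the local server started in step 4. The client class, import path, and method names are assumptions modeled on typical client SDKs, not the confirmed Simba API; consult the SDK documentation for the real interface.

```python
# Hypothetical basic-usage sketch -- names below are assumptions.
from simba_sdk import SimbaClient  # import path is an assumption

# Point the client at the local server from step 4
client = SimbaClient(api_url="http://localhost:8000")

# Ingest a document, then retrieve relevant chunks for a query
document = client.documents.create(file_path="report.pdf")
results = client.retriever.retrieve(query="What are the key findings?")
for chunk in results:
    print(chunk.content)
```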
Dependencies
Simba has the following key dependencies:

Core Dependencies
- FastAPI: Web framework for the backend API
- Ollama: For running local LLM inference (optional)
- Redis: For caching and task queues
- PostgreSQL: For database interactions
- Celery: Distributed task queue for background processing
- Pydantic: Data validation and settings management
Vector Store Support
- FAISS: Facebook AI Similarity Search for efficient vector storage
- Chroma: ChromaDB integration for document embeddings
- Pinecone (optional): For cloud-based vector storage
- Milvus (optional): For distributed vector search
Embedding Models
- OpenAI: For text embeddings
- HuggingFace Transformers (optional): For text processing
Frontend
- React: UI library
- TypeScript: For type-safe JavaScript
- Vite: Frontend build tool
- Tailwind CSS: Utility-first CSS framework