|
|
b7e6239216 | refactoring | 2026-03-23 23:38:56 +00:00
|
Marcos | 8002262cf7 | Change default Ollama model from qwen2.5-coder to llama3.2:1b for faster chat | 2026-03-22 17:25:11 -03:00
|
Marcos | 17dcb9d178 | Increase Ollama timeout to 180s and add num_ctx | 2026-03-22 16:51:21 -03:00
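The timeout and num_ctx changes above can be sketched as a minimal Python client, assuming the bot talks to Ollama's `/api/generate` HTTP endpoint. The 180-second timeout, the `llama3.2:1b` model, and the `http://ollama:11434` host come from the commits; the helper names and the `num_ctx` value of 4096 are assumptions:

```python
import json
import urllib.request

OLLAMA_URL = "http://ollama:11434"  # service hostname on the container network (per the endpoint-fix commit)
TIMEOUT_S = 180                     # generous timeout for slow CPU-only generation (per the timeout commit)

def build_generate_payload(prompt, model="llama3.2:1b", num_ctx=4096):
    """JSON body for Ollama's /api/generate; num_ctx rides in the `options` object."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,                  # one complete response instead of a token stream
        "options": {"num_ctx": num_ctx},  # per-request context window size
    }

def generate(prompt, **kwargs):
    """POST the payload to Ollama and return the generated text."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(build_generate_payload(prompt, **kwargs)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=TIMEOUT_S) as resp:
        return json.loads(resp.read())["response"]
```

Passing `num_ctx` per request through `options` avoids baking the context size into a custom Modelfile, at the cost of sending it on every call.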
|
Marcos | 2cc4ed0d18 | Fix Ollama endpoint: use http://ollama:11434 | 2026-03-22 16:40:27 -03:00
|
Marcos | a74978da4a | Add Ollama connection check and better error messages | 2026-03-22 16:31:08 -03:00
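A connection check like the one this commit describes can be sketched by probing Ollama's `/api/tags` endpoint, which lists installed models and so doubles as a cheap liveness test. The function name and the wording of the error message are assumptions, not the repository's actual code:

```python
import json
import urllib.error
import urllib.request

def check_ollama(base_url="http://ollama:11434", timeout=5):
    """Probe GET /api/tags and return (ok, detail).

    /api/tags lists the models Ollama has pulled, so a successful
    response proves both that the server is up and which models
    are available to chat with.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            models = [m["name"] for m in json.loads(resp.read()).get("models", [])]
        return True, f"Ollama reachable at {base_url}; models: {models}"
    except urllib.error.URLError as exc:
        return False, (
            f"Cannot reach Ollama at {base_url}: {exc.reason}. "
            "Is the container running and on the same Docker network?"
        )
```

Running the check once at startup turns a confusing mid-request timeout into an immediate, actionable error message.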
|
Marcos | bd0cbf8769 | fix: Correct Ollama endpoint and BotVPS path (Ollama: use ollama-lw4s8g4gc8gss4gkc4gg0wk4 hostname instead of localhost; BotVPS: path is /app inside container; improve detect_git_repo_path to find correct paths; update planner prompt with correct context) | 2026-03-22 16:03:41 -03:00
|
Marcos | 64731a24a5 | Stability: CPU fix with psutil interval and LLM timeouts | 2026-03-22 14:36:20 -03:00
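The "psutil interval" fix likely concerns a well-known psutil behavior: `cpu_percent(interval=None)` returns a meaningless 0.0 on its very first call because there is no earlier sample to diff against. A minimal sketch of both remedies, with hypothetical function names (the commit does not show the actual code):

```python
import psutil

# Prime the counter once at startup: the first interval=None call has no
# previous CPU-times sample to compare against and always reports 0.0.
psutil.cpu_percent(interval=None)

def cpu_load():
    """Non-blocking: CPU usage (percent) since the previous call."""
    return psutil.cpu_percent(interval=None)

def cpu_load_blocking(interval=0.5):
    """Blocking: sample CPU over `interval` seconds; accurate even as a first call."""
    return psutil.cpu_percent(interval=interval)
```

In a monitoring loop the primed non-blocking form is usually preferable, since a blocking interval would stall the bot for each reading.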
|