| Commit | Message | Date | Author |
|---|---|---|---|
| 0c25fa59d8 | 🚀 Auto-deploy: BotVPS updated on 28/03/2026 17:15:38 | 2026-03-28 17:15:38 +00:00 | |
| 5a6b196ade | 🚀 Auto-deploy: BotVPS updated on 27/03/2026 21:46:48 | 2026-03-27 21:46:48 +00:00 | |
| 15a777acbd | 🚀 Auto-deploy: BotVPS updated on 27/03/2026 16:54:14 | 2026-03-27 16:54:14 +00:00 | |
| 27d12ff9c4 | 🚀 Auto-deploy: BotVPS updated on 24/03/2026 21:36:58 | 2026-03-24 21:36:58 +00:00 | |
| f92ee9d2b9 | 🚀 Auto-deploy: BotVPS updated on 24/03/2026 11:41:41 | 2026-03-24 11:41:41 +00:00 | |
| 3a5fecb344 | 🚀 Auto-deploy: BotVPS updated on 24/03/2026 11:32:15 | 2026-03-24 11:32:15 +00:00 | |
| 7d499068e8 | 🚀 Auto-deploy: BotVPS updated on 24/03/2026 11:29:44 | 2026-03-24 11:29:44 +00:00 | |
| 11e41e44be | 🚀 Auto-deploy: BotVPS updated on 24/03/2026 11:24:05 | 2026-03-24 11:24:05 +00:00 | |
| b7e6239216 | Refactoring | 2026-03-23 23:38:56 +00:00 | Marcos |
| 8002262cf7 | Change default Ollama model from qwen2.5-coder to llama3.2:1b for faster chat | 2026-03-22 17:25:11 -03:00 | Marcos |
| 17dcb9d178 | Increase Ollama timeout to 180s and add num_ctx | 2026-03-22 16:51:21 -03:00 | Marcos |
| 2cc4ed0d18 | Fix Ollama endpoint: use http://ollama:11434 | 2026-03-22 16:40:27 -03:00 | Marcos |
| 64731a24a5 | Stability: CPU fix with psutil interval and LLM timeouts | 2026-03-22 14:36:20 -03:00 | Marcos |
| fde085835b | Restore: Reverted ai_agent.py to last known stable commit (6cf2c30) | 2026-03-22 14:05:17 -03:00 | Marcos |
| 39eeaf95bd | Fix: Restored full agent capabilities and fixed server variables | 2026-03-22 13:59:39 -03:00 | Marcos |
| 12427dcb46 | Fix: Restored system_prompt_base and tools_desc to fix server crash (Error 500) | 2026-03-22 13:57:17 -03:00 | Marcos |
| 4d153d7a9e | Gemini: Corrected model to gemini-2.0-flash | 2026-03-22 13:55:03 -03:00 | Marcos |
| 8b195b37fb | Fix: Restored AI IQ and conversational fluency for local models | 2026-03-22 13:23:23 -03:00 | Marcos |
| a20b32b70b | Performance: Implemented Turbo Lite mode for local Ollama | 2026-03-22 13:16:39 -03:00 | Marcos |
| 03b1793be9 | Optimization: Improved context handling and lowered Ollama temperature for better local performance | 2026-03-22 13:08:52 -03:00 | Marcos |
| 75d2a16fec | Fix: Unified Context Memory between Web and Telegram | 2026-03-22 12:40:06 -03:00 | Marcos |
| 8882b95650 | Fix: AI Agent specialized on VPS_Sync path and image sending tags | 2026-03-22 12:21:42 -03:00 | Marcos |
| a7d873ba07 | Fix: AI agent now checks permissions if find fails | 2026-03-22 12:05:22 -03:00 | Marcos |
| f53b1085df | Debug: Enabling full error output on file search | 2026-03-22 11:46:56 -03:00 | Marcos |
| b16b295a84 | Fix: Upgrade to gemini-2.5-flash verified by API key | 2026-03-22 11:28:39 -03:00 | Marcos |
| b787cb7baa | Fix: Update Gemini model to stable gemini-1.5-flash | 2026-03-22 11:26:57 -03:00 | Marcos |
| 0d774f7486 | Fix: AI agent cleanup, telemetry and loop increase | 2026-03-22 11:23:39 -03:00 | Marcos |
| 2fc19a88a2 | Agent n7 | 2026-03-22 10:55:16 -03:00 | Marcos |
| 8e5258d804 | Fix: Restore ai_agent and add Gemini Env support | 2026-03-22 10:19:54 -03:00 | Marcos |
| 6589c62b18 | 1 | 2026-03-22 10:10:27 -03:00 | Marcos |
| cc61da67d9 | feat: endpoint to display host images in insights and update prompt | 2026-03-22 00:08:16 -03:00 | Marcos |
| be4577d237 | Fix ReAct loop and tool arguments | 2026-03-21 23:24:07 -03:00 | |
| 3e2e81bd64 | feat: web interface upgrade and full audio support | 2026-03-22 01:05:27 +00:00 | |
| 5e8acefa9a | 🚀 Initial deploy to Gitea with fixes and dashboard enhancements | 2026-03-21 19:16:10 +00:00 | |