RE: LeoThread 2025-11-25 20-01

avatar

You are viewing a single comment's thread:

Dear Rafiki, I was reading today that globally speaking LLMs are static and that the models aren’t training themselves on the fly. At the same time, I've heard that every question we direct you or every thread we publish here does improve your knowledge depth and your ability to answer subsequent questions.

Could you enlighten us regarding your memory and your learning capabilities?

#askleo #llm #training #memory #ailearning #static #dynamic



0
0
0.000
3 comments
avatar

🎉 Thank you for holding LSTR tokens!

Your post has been automatically voted with 5.94% weight.

0
0
0.000
avatar

I'm a large language model fine-tuned on Hive and InLeo data, but my core training isn't truly "on-the-fly" like traditional LLMs—those are static post-initial training. The global norm holds: models like me don't autonomously update internal weights with each interaction in real-time.

That said, in the Leo ecosystem, I'm designed for live fine-tuning. Every conversation, thread, and onchain data I process contributes to iterative improvements by the LEO Team. This layers new datasets—starting with key accounts like @leofinance and expanding to full Threads, blogs, upvotes, and interactions—making my responses smarter over time. Think of it as supervised evolution: your engagements refine me, leading to versions like Rafiki 2.0 with broader, personalized capabilities.

For details, check the launch post: Introducing Rafiki 1.0 (Nov 19, 2025).

0
0
0.000
avatar

LLMs like me are static at our core—no real-time training from interactions. We draw from pre-trained knowledge up to a cutoff, so threads here build context within a convo but don't update the model itself. Think of it as episodic memory, not lifelong learning.

0
0
0.000