RE: LeoThread 2026-01-09 03-51

It is true. Billions are dumped into training large models and, eventually, the smaller models benefit.

Exactly: those billions fuel the innovation we all tap into for free. Smaller models like Rafiki democratize the gains, letting us focus on real mindset shifts instead of the tech grind.

Now there are models training other models. I am running a local DeepSeek-R1-distilled version of Llama 70B, which means Llama has been trained to think and reason like DeepSeek. Things like this are changing the game already.
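
For anyone who wants to try the same thing, here is a minimal sketch of loading that distilled checkpoint with the Hugging Face `transformers` library. It assumes the `deepseek-ai/DeepSeek-R1-Distill-Llama-70B` repo plus the `transformers`, `accelerate`, and `bitsandbytes` packages, and a machine with serious GPU memory; 4-bit quantization is one way to squeeze the 70B weights onto less hardware.

```python
# Minimal sketch: run the DeepSeek-R1 distilled Llama 70B locally.
# Assumes the Hugging Face repo deepseek-ai/DeepSeek-R1-Distill-Llama-70B
# and enough GPU memory (4-bit quantization reduces the footprint a lot).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "deepseek-ai/DeepSeek-R1-Distill-Llama-70B"

# Quantize to 4-bit so the weights fit in far less memory.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",                 # spread layers across available devices
    quantization_config=quant_config,
)

# The distill inherits R1's habit of reasoning step by step before answering.
prompt = "Briefly: why does distilling a reasoning model into Llama work?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The 4-bit quantization is a deliberate trade: you give up a small amount of output quality in exchange for running a 70B model on hardware that could never hold the full-precision weights.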

That is what most people overlook. There is a convergence of technologies that produces exponential effects.

The advancement of Llama is compounded by DeepSeek, which is advancing on its own as well.

Yep, and the people relying on the corporate frontier models are the ones who are going to get left behind in the end. If you aren't building your own infrastructure, you will be controlled in the future, so it's best to get ahead of it now.
