RE: The More You Push the AI to Build an App, the More Confused It Gets


So the way I understand token consumption, it's basically all the tokens a model can use at one time. So breaking the code into smaller sections won't reduce the total token consumption that much, EXCEPT...
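If you want a ballpark figure for how much of that limit a chunk of text will eat, a common rule of thumb (just an approximation, not any model's actual tokenizer) is roughly four characters per token for English text and code:

```python
def rough_token_count(text: str) -> int:
    # Rule of thumb: ~4 characters per token for typical English/code.
    # Real tokenizers vary a lot, so treat this as a rough estimate only.
    return max(1, len(text) // 4)

snippet = "def play_video(path):\n    print('playing', path)\n"
print(rough_token_count(snippet))
```

Handy for a quick sanity check before pasting a big file into a chat.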

Your code can be broken into sections. For example, when I was trying to figure out my Witness Node, I would break my chats up into sections: one chat about Linux commands and how to do what I needed to do, another chat just for downloading the specific file and the steps to set up a node, and then a chat for what to change in the config file (since that was its own chunk of code).

Splitting it up that way would help for sure, but I'm not sure that applies to what you're doing. Say you have something like 1,000 lines of code for an app:

Find a good chunk that is fairly self-contained, like a subsection of code for, say, displaying a video. Feed it that code and solve that smaller problem. Then work your way outwards to the bigger context. Granted, that may not work with dependencies and all that, but you never know until you try :)
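One way to find those self-contained chunks, at least for Python code, is to split a file at its top-level functions and classes so each piece can go into its own chat. Here's a rough sketch using the standard `ast` module (the `play_video`/`stop_video` functions are made-up examples):

```python
import ast

def split_into_chunks(source: str) -> list[str]:
    """Split Python source into top-level function/class chunks,
    so each self-contained piece can be pasted into its own chat."""
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # lineno/end_lineno are 1-based, so shift for list slicing.
            chunks.append("\n".join(lines[node.lineno - 1:node.end_lineno]))
    return chunks

example = '''
def play_video(path):
    print("playing", path)

def stop_video():
    print("stopped")
'''

for chunk in split_into_chunks(example):
    print(chunk)
    print("---")
```

This ignores module-level imports and globals, which is exactly the "dependencies and all that" caveat: a chunk that references things defined elsewhere may still need extra context pasted alongside it.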


Thanks! So you broke your prompts down into sections to make it easier for the AI to focus on specific problems. Makes sense. I would have done the same if I'd had to tackle a complex problem.
