RE: Playing Dungeons and Dragons with AI
I actually don't mind the text option. I'm old enough that I loved playing Zork on my Atari computer, so text-based RPGs are in my blood. I'm not going to lie, you lost me with the "context window" and not being able to use it locally. But I'm guessing you are saying that if I'm not a developer with the most badass computer ever, I don't really need anything beyond the free version... or maybe even that is beyond what I can handle.
The context window is the memory of an AI.
Let’s say you open a new chat with an AI and say, "Tell me a story."
The ai responds with a story. You then ask it to continue the story.
This entire chat history is resent to the AI on every turn, as if it were all one text file: your system prompt (typically “You are a helpful assistant…”) plus all other “context” given to the AI. So as your conversation gets longer, you use more and more context until you start a new chat. Because the entire context is processed at once, as if it were all a brand-new chat, responses get slower and slower as the context fills up.
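Here's a minimal sketch of what a chat client does behind the scenes (no real API calls, just the data structure): the model has no memory between calls, so the full history is what gets sent every single turn.

```python
# Minimal sketch (no real API calls): every turn, the FULL history is
# what the model actually receives -- it has no memory between calls.
system_prompt = "You are a helpful dungeon master."
history = [{"role": "system", "content": system_prompt}]

def send_turn(history, user_message, fake_model_reply):
    """What a chat client does each turn: append, then resend everything."""
    history.append({"role": "user", "content": user_message})
    # In a real client this whole list is serialized and sent to the model:
    payload = history[:]          # the ENTIRE conversation so far
    history.append({"role": "assistant", "content": fake_model_reply})
    return payload

p1 = send_turn(history, "Tell me a story.", "Once upon a time...")
p2 = send_turn(history, "Continue the story.", "The hero set out...")

print(len(p1))  # 2 messages sent (system + first question)
print(len(p2))  # 4 messages sent -- the payload grows every turn
```

`send_turn` and `fake_model_reply` are stand-ins for illustration; the point is that `payload` contains everything said so far, which is why long chats cost more and more to process.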
Gemini's context window can hold 1M tokens; compared to most models at 128k-200k, this is huge, roughly 2,500 pages of text. That's a key requirement for something like a D&D campaign that goes on for a very long time. You can't start a new chat and continue the adventure without feeding it the previous parts of the story. This is the main limitation of AI for story development, gaming, and things like movie writing.
That helps tremendously. Thanks. I get it now. It has to read what it already wrote before it can write the next thing. It doesn't just remember what it wrote earlier and add to it. So by the end of your session, it was reading everything it had written since the start before it could take the next action.
Yes. The entire context gets pushed through the neural network every time you respond.
There are some tricks to compress the context as it starts filling up the window while retaining much of the meaning. Think of it like summarizing the previous conversation so you can start a new chat session that keeps most of the information but has much more of the context window free to use.
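A bare-bones sketch of that summarize-and-restart trick (everything here is hypothetical: `summarize` is a placeholder that just truncates text, whereas in practice you'd ask the model itself to write the summary):

```python
# Sketch of context "compaction": when the history grows past a limit,
# replace the old turns with a single summary message and keep only the
# most recent turns verbatim.
def summarize(messages):
    # Placeholder: a real implementation would call the model and ask
    # it to summarize the adventure so far.
    return "Summary of earlier adventure: " + "; ".join(
        m["content"][:20] for m in messages)

def compact(history, max_messages=4, keep_recent=2):
    if len(history) <= max_messages:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [{"role": "system", "content": summarize(old)}] + recent

history = [{"role": "user", "content": f"turn {i}"} for i in range(10)]
compacted = compact(history)
print(len(compacted))  # 3: one summary message + the 2 most recent turns
```

The trade-off is that the summary is lossy: details the summarizer drops (an NPC's name, a plot thread) are gone from the game, which is why a bigger raw context window still matters.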
As you fill the context window, responses get slower and slower, and on public services like ChatGPT you blow through your quotas really fast. You might type only one line to ask the AI another question, but the pages and pages of prior chat get reprocessed all over again. This is a major limitation for storytelling and programming.