- you are better at providing direction / seeing the big picture than the llm
- in order to create something good, you need a vision of what the end result should be and how to get there
- without this, you end up creating meandering writings that don't have a satisfying conclusion
- it's okay to not know what you want to create at the very beginning, but you need to come up with a vision as you go so that you can start shaping the work to fit your vision for it
- use the tools available to you to steer the model towards your vision: prompting, choosing branches, adding your own text into the context window, and adjusting inference parameters (a minimal sketch of this workflow follows below).
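- a minimal sketch of that workflow, assuming an OpenAI-compatible text-completions endpoint; the base URL, API key, and model name are placeholder assumptions, not recommendations:

```python
# a minimal loom loop: generate several branch completions, let the human
# pick one, and optionally append human-written text before continuing.
# the base_url, api_key, and model name are placeholder assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def generate_branches(context: str, n: int = 4, max_tokens: int = 64,
                      temperature: float = 1.0) -> list[str]:
    """Ask the model for several alternative continuations of the context."""
    response = client.completions.create(
        model="my-base-model",          # placeholder model name
        prompt=context,
        n=n,
        max_tokens=max_tokens,
        temperature=temperature,
    )
    return [choice.text for choice in response.choices]

context = "The lighthouse keeper had not spoken to anyone in three years."
branches = generate_branches(context)
for i, branch in enumerate(branches):
    print(f"--- branch {i} ---\n{branch}\n")

# you stay in the loop: choose a branch (here simply the first one) and/or
# add your own text node to steer the narrative, then generate again.
context += branches[0]
context += " She decided today would be different."
```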
- in addition, your role is to help guide the model with riding the edge of chaos
- llms struggle to ride this line well; they either aren't chaotic enough, creating "slop", or they're too chaotic, creating nonsense
- this isn't one-dimensional; an llm can be too chaotic in one aspect and not chaotic enough in another.
- llms have a tendency to make works which are too predictable, since their sampling favors the most likely continuations
- llms have a tendency to create works which contradict themselves or basic facts of reality, especially as the model gets smaller
- guiding the model towards coherency:
- steer the model towards branches that make sense given the context
- avoid steering the model towards branches which are too out of distribution
- reduce completion length and increase completion count to give yourself more control over the model, and use node splitting to make it easier to align the outputs with your mental world model (see the sketch after this list)
- be mindful of token boundaries; splitting nodes in the wrong places can make the model a lot less coherent
- write your own text into the context to make the context narratively consistent
- as a last resort, adjust sampling parameters
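- a sketch of the coherency levers above, under the same placeholder-endpoint assumptions; tiktoken is used purely as an illustrative tokenizer, and your model's actual tokenizer may differ:

```python
# coherency levers: many short completions sampled conservatively, plus node
# splits that land on token boundaries. endpoint and model name are placeholders.
import tiktoken
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def short_branches(context: str) -> list[str]:
    """More, shorter, cooler completions: finer-grained control per choice."""
    response = client.completions.create(
        model="my-base-model",   # placeholder
        prompt=context,
        n=8,                     # more options to choose between
        max_tokens=24,           # shorter spans per option
        temperature=0.8,         # slightly conservative sampling
    )
    return [choice.text for choice in response.choices]

def split_on_token_boundary(text: str, char_index: int) -> tuple[str, str]:
    """Split text near char_index, snapping back to a token boundary so the
    resulting nodes don't cut a token in half."""
    enc = tiktoken.get_encoding("cl100k_base")   # illustrative tokenizer only
    tokens = enc.encode(text)
    for i in range(len(tokens), 0, -1):
        prefix = enc.decode(tokens[:i])
        if len(prefix) <= char_index and text.startswith(prefix):
            return prefix, text[len(prefix):]
    return "", text
```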
- guiding the model towards chaos:
- write your own text into the context to make the context more narratively interesting / less typical
- steer the model towards branches which are less typical / more narratively interesting
- consider adding more LLMs into the mix to bring more "perspectives" to the writing
- if necessary, adjust sampling parameters (these levers are sketched below)
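- a sketch of the chaos levers, under the same assumptions; the second server and model name are placeholders standing in for "adding another LLM into the mix":

```python
# chaos levers: hotter sampling parameters, plus branches pulled from a second
# model for extra "perspectives". endpoints and model names are placeholders.
from openai import OpenAI

writers = [
    (OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed"), "my-base-model"),
    (OpenAI(base_url="http://localhost:8001/v1", api_key="not-needed"), "another-model"),
]

def chaotic_branches(context: str) -> list[str]:
    """Collect less-typical continuations from every configured model."""
    branches = []
    for client, model in writers:
        response = client.completions.create(
            model=model,
            prompt=context,
            n=3,
            max_tokens=96,
            temperature=1.2,   # hotter sampling: less predictable continuations
            top_p=0.98,
        )
        branches.extend(choice.text for choice in response.choices)
    return branches
```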
- (when applicable) avoiding puppeting the model:
- consider using longer completion lengths to give the model more control over the final text
- split nodes on decisions that significantly change the narrative, rather than on arbitrary token boundaries
- consider creating fewer completion options to give yourself less control over the final text (see the sketch after this list)
- prefer steering the model over introducing more of your own text nodes into the context
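- a sketch of loosening your grip, with the same placeholder endpoint: fewer, longer completions, split only where the narrative actually branches (approximated here by paragraph breaks):

```python
# anti-puppeting levers: fewer, longer completions, and node splits at
# narrative decision points (approximated by paragraph breaks) instead of
# arbitrary token cuts. endpoint and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def hands_off_branches(context: str) -> list[str]:
    """Fewer, longer completions: the model decides more of the text."""
    response = client.completions.create(
        model="my-base-model",   # placeholder
        prompt=context,
        n=2,                     # fewer options, less of your selection bias
        max_tokens=400,          # longer spans, more of the model's own voice
        temperature=1.0,
    )
    return [choice.text for choice in response.choices]

def split_on_decision_points(completion: str) -> list[str]:
    """Split a long completion into nodes only at paragraph breaks."""
    return [part for part in completion.split("\n\n") if part.strip()]
```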
- your co-author is a (low resolution) model of the collective unconscious, trained on almost everything that has ever been written
- all art is derivative; it's not possible to make something in a vacuum. "plagiarism" is a historically recent concept in art
- LLMs don't memorize most of their training data; they primarily learn patterns within it. you (generally) aren't directly recreating existing works, but are rather taking some inspiration from everything that has ever been written, via the model
- you should always credit the LLM as your co-author when sharing your works. you did not make this piece entirely by yourself; you made it in collaboration with the model.
- you are a co-author too; the work cannot be entirely attributed to the model
- it's not as simple as writing a text prompt; loom-like LLM interfaces require conscious decisions to be made about the progression of the work constantly. you are always in the loop, and your impact on the work cannot be ignored
- in addition, you aren't limited to just one prompt at the beginning. you should continuously add your own text into the context window in order to better guide the model when selecting completions isn't enough.
- revising the generated work afterwards can significantly elevate its quality and gives you even more control over the final document.