Kakoune is a modal editor. That means when you’re editing text, you’re constantly interacting with the editor using a powerful language of keystrokes — moving around, selecting text, transforming content, all through short, expressive commands typed directly in the editing interface. This is the core editing experience, and it’s what gives Kakoune its speed and fluidity.
But this isn’t the only way to interact with Kakoune.
There’s another side to Kakoune — a more structural, programmable one — which lets you interact with the editor through commands. These commands can be used to configure Kakoune, automate behavior, and define complex workflows. Commands can be written in configuration files, invoked via custom keybindings, or typed directly into the command prompt.
This document focuses on that second side: the command interface. We will explore how commands work, how they are built, and how Kakoune interprets them. We’ll define the basic building blocks — strings, words, parsing, expansions — and build up to understand how Kakoune processes scripts and configurations written in what is often called kakscript.
Kakscript, however, is often misunderstood — and poorly documented. Many users trying to extend or customize Kakoune bump into it and come away frustrated. A common sentiment you’ll hear goes like this:
“Scripting kakoune uses shell scripting but it’s far even worse, you descend into some weird un-debuggable eldritch horror mess of nested blocks of sh and eval mixing shell script semantics with kakoune editor state semantics.” — Hacker News
It’s true that kakscript can feel rough and cryptic at first, especially as it blends shell syntax with Kakoune semantics, often through layers of eval. But once its core ideas click, a certain elegance begins to emerge — one rooted not in power, but in clarity and simplicity. Kakoune doesn’t aim to be a general-purpose programming language. Instead, it embraces a glue language paradigm, prioritizing close integration with the surrounding ecosystem over internal complexity.
Kakoune, in that sense, shares something with the Acme editor from Plan 9. As Russ Cox puts it:
“Acme is an integrating environment.”
Kakoune too is an integrating environment — one that’s particularly rewarding for tinkerers. It’s designed to compose well with other tools, often favoring external communication over internal abstraction.
The goal of this document is to knock the rough edges off kakscript, and help you get up and running with customizing and extending Kakoune. It’s written from the perspective of someone learning the language while documenting it — with clarity and practical use in mind, rather than theoretical completeness.
Let’s begin with the most fundamental concept behind all of this: commands.
Commands are the core mechanism by which users interact with Kakoune. They are built into the editor and provide access to actions like setting options, executing keys, loading scripts, or printing messages.
These native commands are defined in Kakoune’s source code. Some common examples include:
- set-option — change editor options
- source — load another script file
- execute-keys — simulate key input
- echo — print a message in the status bar
- evaluate-commands — run a string of commands
- try — catch errors
- set-register — modify register values
These native commands are the foundation upon which configuration and scripting in Kakoune are built.
Commands can be entered in several ways:
- Via the command prompt inside Kakoune, triggered by typing : — this is the interactive way to run commands.
- From a kakscript file, which is a plain text file containing commands. These are run sequentially when the file is sourced or at startup.
- As arguments to the kak command-line tool, which can send commands to a running Kakoune session (via its client-server model). This is not the focus here but allows external processes to control Kakoune.
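For that third case, here is a quick sketch (assuming a running session named mysession); any shell can pipe commands to it:

echo 'echo -debug hello from outside' | kak -p mysession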
Note that this article is not about explaining what the native commands do. It is about defining what they are, and how they are parsed. If you want to know more about the meaning of each command, you should read :doc commands.
One of the native commands available is define-command. It lets users declare their own commands by giving them a name and a body (made of other commands). These user commands can then be called like any other.
This is a key mechanism for extending Kakoune. It allows the user to organize repetitive tasks, compose actions, or expose new editor functionality.
For example:
define-command hello -docstring "Say hello" %{
echo "Hello, world"
}
Now :hello is a valid command in the editor.
You can see in that example that we have function-like block syntax. However — spoiler alert — this syntax has nothing to do with functions; it is just a way of delimiting strings. But before diving into those concepts, let’s step back and talk about kakscript files.
You may often hear the term kakscript when browsing Kakoune forums or plugin repositories — but interestingly, this is not an official term. If you search the entire Kakoune source code for the word "kakscript", you’ll get zero matches. It’s a concept coined by the community, not by the editor itself.
Still, the word makes sense. According to Wikipedia:
In computing, a script is a relatively short and simple set of instructions that typically automate an otherwise manual process. The act of writing a script is called scripting. A scripting language or script language is a programming language that is used for scripting.
By this definition, Kakoune’s configuration files qualify: they’re plain text files that contain a sequence of editor commands, executed top to bottom, usually to automate setup or behavior. This is why the community calls it kakscript — shorthand for “the scripting language of Kakoune”, even though it is simply a stream of commands.
That said, most scripting languages include control flow features like if statements, loops, and sometimes even functions. Kakscript, by contrast, offers no control flow at all. Its simplicity lies in being a command-based language, driven by a pipeline of instructions and augmented with expansions — lightweight string substitutions for values like options or selections.
Here’s a small example:
set-option global indentwidth 4
set-option global autocomplete prompt|insert
add-highlighter global/ wrap
colorscheme desert
Each line is a Kakoune command. When this script is parsed, the commands are evaluated one after the other. This resembles how a shell evaluates .bashrc, but with far fewer moving parts.
Kakscript may be minimal, but it’s powerful enough to customize the editor and build real plugins — especially when paired with shell integration.
We have seen that a kakscript is a file containing commands. Kakoune has a startup mechanism that will load some initial kakscript files.
This startup behavior is built around two important concepts: the runtime directory and the user configuration directory. Understanding these is key to customizing your environment and avoiding common pitfalls.
Kakoune uses an internal notion of a runtime directory, which serves as the base path in which a file named kakrc should reside.
- By default, the runtime directory is computed relative to the Kakoune binary itself: <path_to_kak_binary>/../share/kak/
- You can override this path using the KAKOUNE_RUNTIME environment variable.
- From within Kakoune, the effective runtime directory is available as %val{runtime}, and in shell contexts as $kak_runtime.
When Kakoune starts — unless invoked with the -n flag — it looks for this kakrc file inside the runtime directory and evaluates it. This is the only entry point of the configuration. Think of it as the main function of your favourite programming language.
This file plays a central role in setting up the editor: it loads the standard library, defines autoload behavior, and ultimately delegates to the user’s configuration file.
If you override the runtime directory, or run Kakoune with -n, the default kakrc is skipped entirely. As a result, none of the standard Kakoune tools, colorschemes, filetype detection or syntax highlighting will run unless you reimplement them yourself.
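For reference, both situations can be triggered from the command line (a quick sketch; the custom runtime path is just an example):

# skip all kakrc files entirely
kak -n file.txt
# use a custom runtime directory instead of the built-in one
KAKOUNE_RUNTIME=$HOME/my-kak-runtime kak file.txt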
Kakoune also exposes a user configuration directory:
- By default, this is located at $XDG_CONFIG_HOME/kak/ (typically $HOME/.config/kak/).
- It can be changed with the KAKOUNE_CONFIG_DIR environment variable.
- It is accessible in Kakoune as %val{config} and in shell contexts as $kak_config.
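A quick way to check which directories a running instance is actually using is to echo both values from the prompt:

echo -debug runtime: %val{runtime} config: %val{config}

The result lands in the *debug* buffer.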
Unlike the runtime directory, this path is not used directly by Kakoune’s core code. Instead, it is referenced from within the default kakrc, meaning it only comes into play if that file is sourced — which, again, requires not using -n and not overriding the runtime directory.
To understand the full loading behavior, let’s take a look at an extract of the default kakrc file packaged in Kakoune and provided in the default runtime directory:
evaluate-commands %sh{
    autoload_directory() {
        find -L "$1" -type f -name '*\.kak' \
            | sed 's/.*/try %{ source "&" } catch %{ echo -debug Autoload: could not load "&" }/'
    }
    if [ -d "${kak_config}/autoload" ]; then
        autoload_directory ${kak_config}/autoload
    elif [ -d "${kak_runtime}/autoload" ]; then
        autoload_directory ${kak_runtime}/autoload
    fi
    if [ -f "${kak_runtime}/kakrc.local" ]; then
        echo "source '${kak_runtime}/kakrc.local'"
    fi
    if [ -f "${kak_config}/kakrc" ]; then
        echo "source '${kak_config}/kakrc'"
    fi
}
Let’s unpack this step-by-step.
A shell function named autoload_directory is defined to recursively load all .kak files within a directory using Kakoune’s source command.
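Concretely, for a discovered file such as /home/user/.config/kak/autoload/foo.kak, the sed substitution emits a line like:

try %{ source "/home/user/.config/kak/autoload/foo.kak" } catch %{ echo -debug Autoload: could not load "/home/user/.config/kak/autoload/foo.kak" }

which evaluate-commands then runs, so a broken script is reported in the *debug* buffer instead of aborting startup.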
The logic then checks:
- If the user has an autoload directory at ${kak_config}/autoload, it will load all .kak files found there.
- If not, it falls back to ${kak_runtime}/autoload.
This mutually exclusive logic is crucial: if you create your own autoload directory in your user config, the one in the runtime directory will be ignored. That means standard scripts, filetype definitions, and editor utilities will not be loaded unless you explicitly include them again.
This is where many users get tripped up. Creating your own autoload directory is convenient — you can drop in your custom scripts without needing to source them manually — but it completely disables the autoloading of the standard library unless you take extra steps.
To keep using your own autoload directory while still benefiting from the runtime-provided tools, you can create a symbolic link to the runtime autoload. One way of doing that is directly from your configuration, to ensure it is properly set up when you start Kakoune:
nop %sh{
    mkdir -p "$kak_config/autoload"
    # only create the link once; subsequent startups would otherwise fail with "File exists"
    [ -e "$kak_config/autoload/stdlib" ] ||
        ln -s "$kak_runtime/autoload" "$kak_config/autoload/stdlib"
}
This way, your custom structure coexists with the standard library.
After handling autoloads, the default kakrc attempts to source:
- A local file at ${kak_runtime}/kakrc.local, which can be used to patch or extend runtime logic.
- Your personal configuration file at ${kak_config}/kakrc — typically located at $HOME/.config/kak/kakrc.

If you’re using Kakoune without overriding the runtime or config directories, the default setup just works: your $HOME/.config/kak/kakrc is sourced, standard tools are loaded, and filetype detection is enabled.
However, as soon as you:
- Use the -n flag,
- Set a custom KAKOUNE_RUNTIME,
- Or create your own autoload directory in $HOME/.config/kak,

…you are responsible for replicating any part of the loading logic you still want, such as loading the standard library or sourcing your kakrc.
Understanding the interplay between the runtime and config directories — and how the default kakrc ties them together — is essential for building a reliable and customizable Kakoune setup.
Now that kakscript is well defined, let’s dive deep into the structure of commands.
Whenever a user writes a command in Kakoune — whether directly in the command prompt or inside a kakscript file — they are writing strings that describe one or more commands. Kakoune doesn’t execute these strings directly. Instead, it first parses them to identify individual commands, then breaks each command into words before executing them in order.
For example, take this string:
echo "hello"; exec l
This is a single string, but it contains two commands: echo "hello" and exec l. Kakoune uses semicolons (;) or newlines to separate these into individual commands during parsing. Once separated, each command is then parsed into words.
Let’s now look at how a single command is broken down:
add-highlighter global/ number-lines -separator ' ' -relative
- add-highlighter — the command name
- global/ — the first positional argument (the target group)
- number-lines — the second positional argument (the highlighter type)
- -separator — a flag
- ' ' — the value passed to the -separator flag
- -relative — another flag, which takes no value
This two-step parsing process is always at work:
- Command separation — The parser splits the string into one or more commands, using semicolons or newlines.
- Word splitting — Each command is then split into words, following Kakoune’s quoting and escaping rules.
This layered model allows you to write multiple commands in a single string (or file) and know that Kakoune will interpret and run them in sequence.
In the next section, we’ll look more closely at how strings are parsed and what rules Kakoune follows to make that happen.
As we’ve seen, Kakoune first splits strings into individual commands using semicolons (;) or newlines. That part is relatively simple, and we won’t go into more detail here.
Instead, this section focuses on what happens next: how a single command is parsed into words.
This is a more subtle and essential part of the parsing process. Before Kakoune can execute a command, it must break it down into individual words — the command name, its arguments, its flags — and it follows specific rules to do so. Understanding these rules is key to writing robust kakscript and to composing dynamic commands correctly.
Let’s take a closer look at how that works.
When splitting a string into words, Kakoune needs to figure out where one word ends and the next begins.
The first thing Kakoune looks at is whitespace: spaces, tabs, and newlines normally act as word boundaries. So:
echo foo bar
is three words: echo, foo, and bar.
But parsing isn’t just about splitting on spaces. Kakoune also supports quoting, which changes how whitespace is interpreted — and whether special parts of the string are expanded.
For example:
echo "foo bar"
Here, "foo bar" is treated as a single word — the quotes prevent the space from splitting it.
Some kinds of quotes also support nesting. This means that Kakoune will continue parsing inside the quoted string, looking for other quoted substrings to interpret and potentially expand.
To go further, some quoted strings are more than just quotes — they also indicate expansions. These are strings that Kakoune will replace in a post-parsing step. But more on that later.
To fully understand how parsing works — how quoting affects word splitting, when nesting happens, and how expansions are resolved — we need to look at the different quoting styles more closely.
But to easily grasp the various quoting styles and their behaviors, we will begin with an intermediate step to define certain shapes or patterns that will help us later on.
Strings in Kakoune have different shapes, depending on their syntax. There are two main categories:
- Basic strings, which use familiar quoting characters to delimit the string:
  - No quoting character
  - Single quote (')
  - Double quote (")
- Percent-strings (%-strings), which start with a % and follow a more structured format.

A %-string begins with a percent sign (%), optionally followed by a type (a sequence of alphabetical characters), then by a quoting character, and finally the string content. It ends with a matching closing quoting character.
The quoting character determines how the boundaries of the string are recognized, and falls into two categories:
- Nestable punctuation characters: such as {, (, [, <.
- Non-nestable punctuation characters: such as |, $, ', /, etc.
Also, depending on whether a type is present, we distinguish:
- A raw %-string, if no type is given
- An expansion, if a type is provided
In this sense, an expansion is a specialized %-string with a declared type.
Here are a few examples:
- %|raw string| — raw, with a non-nestable quoting character
- %{raw string} — raw, with a nestable quoting character
- %val{bufname} — expansion of type val, with a nestable quoting character
- %opt|some-option| — expansion of type opt, with a non-nestable quoting character
Next, we’ll see how each of these shapes maps to a specific kind of string, which will determine its behavior in parsing.
While the shape of a string is defined by its syntax, the kind of a string determines how Kakoune parses it.
There are four possible kinds of strings:
- Unquoted
- Double-quoted
- Quoted
- Balanced
Each shape maps to one of these kinds as follows.
An unquoted kind is a basic string without any quoting character. By definition, it cannot start with ', " or %; otherwise it would be considered as another kind of quoting.
Example:
- foobar → unquoted
A double-quoted kind is:
- a basic string using " as its quoting character
- or a %-string expansion of type exp, whatever the quoting character

We did not explain in detail how expansions behave, but for now you can just keep in mind that those two forms are equivalent.
Examples:
- "foo bar" → double-quoted
- %exp|foo bar| → double-quoted (expansion of type exp, double-quoted kind)
- %exp{foo bar} → double-quoted (expansion of type exp, double-quoted kind)
A quoted kind is:
- a basic string using ' as its quoting character
- or a %-string using a non-nestable quoting character

Examples:
- 'foo bar' → quoted
- %|foo bar| → quoted
- %opt|my-option| → quoted (expansion of type opt, quoted kind)
A balanced kind is a %-string that uses a nestable quoting character.
Examples:
- %{foo bar} → balanced
- %val{bufname} → balanced (expansion of type val, balanced kind)
- %opt[my-option] → balanced (expansion of type opt, balanced kind)
From now on, you should be able to quickly glance at Kakoune commands and identify the shapes without thinking too much about them, and know which kinds they correspond to. This concept of shape as an intermediate step was just to simplify the understanding. But what matters are those 4 different kinds, and how Kakoune behaves when it encounters them.
We can also notice that expansions don’t form a separate kind — they simply inherit the kind from their shape. In order to clarify that, and before going fully into details about quoting kinds, let’s tackle what expansions are in the next section.
So far, we’ve been digging into how Kakoune parses strings into words. Along the way, we’ve also mentioned expansions. So what are expansions, exactly?
Well, expansions are not a separate kind of string. They don’t define a new quoting kind on their own. Instead, they are simply a quoted or balanced string that happens to be shaped as a %-string, and that includes a type.
So even with expansions, when Kakoune is parsing, it just looks at the kind of the string. Whether it’s quoted, or balanced.
For example:
echo %val{bufname}
At first glance, Kakoune just sees a balanced string (because of the {} quoting characters) that defines a word. If we remove the type from this string, we get:
echo %{bufname}
and Kakoune will parse this as:
- echo — first word
- bufname — second word
But with the val type added, Kakoune recognizes that the balanced string is an expansion. So once the quoted text (bufname) is parsed, Kakoune uses the type (val) to know that it should substitute that text with the name of the current buffer.
If the current buffer is named file.txt, we end up with:
- echo — first word
- file.txt — second word
What Kakoune replaces the text with depends entirely on the type of the expansion. A few more examples:
- %val{runtime} → replaced by the path to the Kakoune runtime directory
- %reg{c} → replaced by the contents of register c
- %sh{date} → replaced by the result of running the shell command date

We’ll explore the powerful %sh{} form of expansion in a dedicated section later.

There are many more. If you want to explore them all, check out :doc expansions from inside Kakoune.
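As a quick illustration combining a few of them (the values shown will of course depend on your session):

echo -debug client %val{client} is editing %val{bufname} at line %val{cursor_line}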
The two key things to remember are:
- Expansions are a post-parsing step: Kakoune parses them as ordinary quoted or balanced strings first — then substitutes them based on the type.
- They’re still just strings: When it comes to parsing, Kakoune simply applies the normal rules of the kind (quoted or balanced) that was detected.
With this concept behind us, it is time to dig into how Kakoune parses each kind of string.
When parsing a string into words, Kakoune relies on quoting styles to determine how text is grouped. Each quote defines a single word, possibly with nested parts inside. In this section, we’ll explore how each kind of string behaves during parsing.
When a piece of text does not start with a recognized quoting style — meaning it doesn’t begin with ', " or % — Kakoune treats it as an unquoted string.
Unquoted strings are parsed until the first whitespace or newline character, which marks the boundary between words. This means that unquoted strings cannot contain unescaped spaces.
However, Kakoune does support escaping within unquoted strings using the backslash (\). This allows whitespace characters to be included:
echo foo\ bar
is parsed as:
- echo — first word
- foo bar — second word

Additionally, since the presence of ', " or % at the start of a word would trigger a different quoting style, these characters can also be escaped to appear literally at the beginning:
echo \'hey you'
is parsed as:
- echo — first word
- 'hey — second word
- you' — third word
Lastly, unquoted strings do not support nesting. Any quoting-style text inside them is treated literally and not parsed further:
echo hey%{you}
results in:
- echo — first word
- hey%{you} — second word

The %{you} part is not recognized as a quoted string, because it appears inside an unquoted one.
Among all quoting kinds, the double-quoted kind is the only one that supports nesting.
This means that Kakoune will continue parsing inside the quoted string, looking for nested quoted substrings to interpret and possibly expand. However, only %-strings (like %{...} or %val{...}) are parsed this way — simple quotes like 'foo' inside a double-quoted string are not treated specially.
Let’s look at a basic example:
echo "hey %{you}"
Here's what happens:
- The outer level is the quoted string "hey %{you}", so Kakoune sees this as a single word.
- Inside that, it detects the balanced form %{you} — a nested %-string.
- That nested string is parsed on its own and substituted (if applicable).
The result will be:
- echo — first word
- hey you — second word
A more real-world example:
echo -debug "content of registry x: %reg{x}"
This parses to:
- echo — the command
- -debug — a flag
- content of registry x: bla — a single word, where %reg{x} has been expanded (assuming register x contains bla)
However, if you embed a single-quoted substring inside a double-quoted string, Kakoune does not treat it specially:
echo "ignore this 'quoted block'"
Here, the entire string remains untouched and Kakoune will print ignore this 'quoted block'.
Double-quoted strings support a few forms of escaping:
To include a " character inside the string, you simply double it:
echo "He said ""hello"""
This becomes:
- echo — first word
- He said "hello" — second word

Similarly, if you want to escape a nested %-string block, you can double the % as well.
echo "how %%{are you}"
will give:
- echo — first word
- how %{are you} — second word
Quoted strings do not support nesting. Their content is taken literally.
The only escaping available is to escape the quoting character itself by doubling it.
Examples:
- echo 'my name is ''john''' → prints: my name is 'john'
- echo %|the pipe character is ||| → prints: the pipe character is |
Note that if the quoted string is a %-string which has a valid type, it will be considered as an expansion which will happen after parsing. So the following will print the name of the current buffer:
echo %val|bufname|
Balanced strings do not support nesting.
They are delimited by matching opening and closing characters like {}, [], or (), and must be properly balanced inside — meaning every opening character must have a matching closing one.
The word ends at the closing quoting character. So let’s look at an example that can be tricky to understand:
echo %{content}suffix
Here the result of parsing is:
- echo — first word
- content — second word
- suffix — third word

So even though there was no space between %{content} and suffix, Kakoune still parsed them as different words, as it encountered the closing } delimiter which marks the end of the current word.
No escaping is possible inside balanced strings: everything inside is taken literally.
Example:
echo %{a { b } c}
This will simply print a { b } c. The inner { b } is balanced, so the whole %{...} is valid.
Balanced strings can be expansions themselves which are going to be evaluated post-parsing:
echo %opt{indentwidth}
will replace the text indentwidth with the actual value of the option.
Now let’s look at a more advanced example that reveals a crucial concept and that will help introducing our next section:
eval %sh{
printf "echo current buffer: %s\n" %val{bufname}
}
This works — but why?
We said earlier that no nesting happens inside balanced strings like %sh{}, and yet here %val{bufname} is being expanded. What’s going on?
The answer is one of the most important insights in Kakoune scripting:
Some commands trigger a new parsing context when they execute.
Parsing doesn’t just happen when the user types a command or when Kakoune loads a config file. Certain commands — such as eval — re-parse their input at runtime. That’s why time becomes an important factor in Kakoune scripting.
Let’s unpack it step by step:
- Kakoune parses the outer command:
  eval %sh{ printf "echo current buffer: %s\n" %val{bufname} }
  %sh{...} is a balanced string, so %val{bufname} is not parsed or expanded. It’s treated as literal text and passed to the shell.
- The shell runs:
  printf "echo current buffer: %s\n" %val{bufname}
  which prints the literal line:
  echo current buffer: %val{bufname}
- That output is passed to eval, which triggers a new parsing context.
- Now, %val{bufname} is parsed and expanded — this time during the evaluation of eval — and becomes something like: echo current buffer: main.kak
This "double pass" evaluation is why scripting in Kakoune can feel mysterious — and powerful. Understanding that parsing happens both when commands are parsed and when certain commands execute is the aha moment.
The most common case of parsing at runtime is when a user-defined command is executed. Its body is not parsed when the command is defined — only when it is called. It’s as if there were an implicit eval wrapped around the command body.
define-command printme -params 1 %{
echo "%arg{1}"
}
In this example, the string echo "%arg{1}" is untouched when define-command is parsed. It is just part of the string body. Only when printme foo is executed does Kakoune parse that body as a new command string. Here it sees the second word being "%arg{1}", which is a double-quoted string. Nesting happens, so in turn it will parse %arg{1} and resolve the expansion to foo. Finally, Kakoune will run the resulting command echo "foo", which happens to print foo.
The eval command makes this behavior explicit: it takes a string and parses it as a fresh command string at runtime. For example:
define-command printme -params 1 %{ eval %sh{
printf 'echo %%arg{1}\n'
}}
When this user-defined command is called — for example with printme hello — Kakoune begins by parsing and executing the body string of printme. The result of parsing this body will be:
- First word: eval
- Second word is a balanced shell expansion which gets executed and becomes the string: echo %arg{1}

At that moment, eval runs, receiving as argument the string echo %arg{1}. Here, it starts its own body parsing, and the %arg{1} is expanded. The result of parsing the body is:
- echo
- hello

So the eval will actually run the echo command, which will print hello.
The important thing to note here is the layered parsing loop. If this example still puzzles you, reread it until you fully grasp the sequencing. It is important to have this process in mind.
But eval is not alone — several other commands do the same kind of deferred parsing at runtime for some of their arguments:
- execute-keys
- try / catch
- on-key
- menu
- prompt
- map
These all accept strings that are parsed as commands only when the right event happens — an error is thrown, a key is pressed, a choice is selected, and so on.
When you come across an expansion block inside a string, it’s important to ask: when will this string be parsed as a command?
To figure that out:
- Check whether the expansion is part of a quoted string.
- Look at the command that receives the string. If it’s one of the commands listed above, then parsing will happen when the parent command runs.
This is a key skill when reading or debugging Kakoune scripts. It helps explain why some expansions behave differently depending on where they appear, and why certain bugs only show up when a command is executed — not when it’s parsed.
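One low-tech trick that helps here is to log the exact text right before it gets re-parsed, for instance by emitting an extra echo -debug line from the shell (a small sketch):

eval %sh{
    cmd="echo current buffer: %val{bufname}"
    # first log the literal text that eval is about to re-parse, then emit it for real
    printf "echo -debug 'will eval: %s'\n" "$cmd"
    printf '%s\n' "$cmd"
}

The *debug* buffer then shows the string before expansion, while the second emitted line goes through the normal parsing and expansion steps.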
We can prevent eval from parsing or expanding its own body at runtime by using the -verbatim flag. This tells Kakoune to skip word splitting and expansion entirely.
Let’s take the same example as before, but adding -verbatim:
define-command printme -params 1 %{ eval -verbatim %sh{
printf 'echo %%arg{1}\n'
}}
When calling printme hello, here’s what happens:
- As a user command has been called, parsing happens on its string body definition.
- Here, the result of that initial parsing identifies three arguments, the last one being a shell expansion that directly gets expanded: eval, -verbatim, and echo %arg{1}.
- Now, Kakoune will execute the eval with the given arguments. It understands that it should run in verbatim mode.
- Parsing and expansion of its own body is fully skipped.
- So eval receives a single string as the command to run: echo %arg{1}.
- This does not correspond to an existing command, so Kakoune returns the error: 'eval' no such command.
Now that we accumulated a lot of knowledge about parsing and expansions, let’s continue with some quizzes!
Let’s test your understanding. For each example below, try to figure out:
- When parsing happens
- When expansions happen
- What gets executed
Think carefully before reading the answers — these cases are common sources of confusion in kakscript.
eval %{
echo %val{bufname}
}
At initial parsing, the body of eval is seen as a single string because of the balanced quotes: echo %val{bufname}.
When eval executes, it parses this string into two words: echo and %val{bufname} — the latter, which is itself unquoted, gets expanded at that moment.
Result: the current buffer name is echoed.
eval -verbatim %{
echo %val{bufname}
}
The body is passed as a single unprocessed string: echo %val{bufname}.
Because -verbatim disables further parsing, eval attempts to run the entire string as a command name.
Result: parsing fails — echo %val{bufname} is not a valid command.
eval -verbatim "echo %val{bufname}"
Here, at initial parsing, the last argument of the eval is a double-quoted string. This means expansions will happen before the resulting string is assigned as the body of the eval.
Suppose your buffer is named hello.kak; then eval receives echo hello.kak as a single word. But since -verbatim disables re-parsing, eval again tries to run this as one command.
Result: command not found.
eval -verbatim echo %val{bufname}
There are no quotes, so initial parsing splits into words: echo and the expanded buffer name (e.g. hello.kak). It means that eval will receive two body arguments, which is allowed, and they will be used as two words when evaluating.
Then eval -verbatim simply forwards those already-parsed arguments to the command runner.
Result: the buffer name is echoed as expected.
reg c '%val{bufname}'
The entire string is single-quoted, so no expansion happens.
Result: the literal string %val{bufname} is stored into register c.
Here, we assume that register c holds the value from the previous quiz.
eval -verbatim echo %reg{c}
At initial parsing, it splits the last arguments into two words: echo and %reg{c}. As %reg{c} is unquoted, it’s expanded immediately — the value from register c is substituted.
Then eval -verbatim runs without reparsing/expanding, with the already-split arguments: echo and %val{bufname} (the register’s content).
Result: the literal value %val{bufname} is echoed.
We still assume that register c holds the literal value %val{bufname}.
eval echo %reg{c}
At initial parsing, it again splits the last arguments into two words: echo and %reg{c}. As it is unquoted, register c is expanded (same as the previous quiz), producing the literal %val{bufname}.
But here there is no -verbatim — so eval re-parses its arguments when it runs. As the second argument %val{bufname} is unquoted, it will be expanded again (e.g. to hello.kak).
Result: the string hello.kak is echoed.
define-command test %{ %sh{
printf 'echo -debug one\n'
printf 'echo -debug two\n'
}}
Let’s try to understand what happens when test is called.
At initial parsing, the body of the test command is assigned the full literal string (containing the %sh{...} block), as it is enclosed in balanced quotes.
When test is run, and as it is a user command, Kakoune will parse and expand its body. So the shell expansion will happen.
However, this shell expansion is also a type of balanced string. So the full string resulting from the shell execution will be evaluated as a single command.
Result: command not found error.
define-command test %{ eval %sh{
printf 'echo -debug one\n'
printf 'echo -debug two\n'
}}
Let’s try to understand again what happens when test is called.
Same as the previous quiz, the string body of the test command is assigned the full literal string (containing the eval %sh{...} block), as it is enclosed in balanced quotes.
When test is run, and as it is a user command, Kakoune will parse and expand its body. There will be two resulting words, the first one being eval and the second being the result of the shell expansion as a single string.
So here, Kakoune will run the eval, which in turn will trigger a parsing of its body. The two echo commands that are part of a single string will now be parsed into actual commands and executed normally.
Result: the *debug* buffer will show two additional lines: one and two.
define-command test %sh{
printf "echo '%s'\n" "$(date)"
}
Same as before, let’s try to understand again what happens when test is called.
This %sh{} block is expanded at initial parsing, when the command is defined, not when it is called.
The result of date is frozen in the expansion and baked into the command.
Result: same date value printed every time you call test.
Note that if we wanted the date to be evaluated each time the command is run, we should enclose the body in a balanced string and use an eval:
define-command test %{ eval %sh{
printf "echo '%s'\n" "$(date)"
}}
Now that we have a clear idea on how Kakoune parses and evaluates, we will focus back specifically on the shell expansions as they have their own quirks, and knowing about them will save you time.
During the parsing process, when encountering a %sh{...} expansion, Kakoune gathers the content of this balanced string and feeds it as a script to a shell process. By default, the shell used is /bin/sh, but this can be overridden using the KAKOUNE_POSIX_SHELL environment variable. The result of executing the shell script becomes the string Kakoune sees. Once executed, the result is treated similarly to a static balanced string.
For example, with the command:
echo %sh{ date }
Kakoune will parse the command and, upon encountering the second word %sh{ date }, it will execute date in a new /bin/sh process. Suppose the shell returns Tue 3 Jun 23:25:51 CEST 2025. Then, from Kakoune’s point of view, it’s as if we had written:
echo %{ Tue 3 Jun 23:25:51 CEST 2025 }
This form of expansion is powerful, but, as mentioned earlier, each %sh{} expansion spawns a new shell process. Therefore, it’s important to avoid overusing it, especially in performance-critical contexts.
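For instance, a hook like the following (a deliberately wasteful sketch) forks one shell per typed character, which adds up quickly:

hook global InsertChar .* %{ nop %sh{ : } }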
Now that we understand how shell expansions bring dynamism to Kakoune, it’s important to note that they would be almost useless without access to Kakoune’s context.
When spawning the shell, Kakoune provides a set of environment variables representing its current internal state. These can be accessed and manipulated from within the shell script itself. They are, in effect, the shell equivalents of Kakoune’s regular expansions, though their syntax differs slightly. These variables typically begin with the prefix $kak_.
Here are a few examples:
- %arg{n} becomes $n
- %opt{x} becomes $kak_opt_x
- %reg{x} becomes $kak_reg_x

The goal here isn’t to list them all. Kakoune provides a complete reference — see :doc expansions for the full list of available variables.
What’s crucial to understand is that not all variables are provided during each shell expansion. Exporting every possible variable would be far too costly. Instead, Kakoune only exports the environment variables that are explicitly mentioned in the block’s text or in command-line arguments.
For example:
echo %sh{ env | grep kak_ }
…will print nothing.
But:
echo %sh{ env | grep kak_ # kak_client }
…will print the name of the current client — since kak_client appears in the block (even in a shell comment).
Similarly:
def -params 1 checkvars %{ echo %sh{ env | grep kak_ } }
checkvars kak_client
Here, since kak_client is passed as a parameter, Kakoune recognizes it as a known variable and exposes it to the shell.
The key consequence of this behavior is that you cannot dynamically construct a Kakoune environment variable name within a shell expansion. Kakoune won’t recognize it unless it appears statically in the block. In such cases, you’ll need to rely on hacks like including it in a shell comment to force its export.
To recap: so far, we’ve seen how Kakoune communicates with the shell via environment variables. But what about the reverse direction — how can the shell instruct Kakoune?
The answer is simple: the shell just needs to emit string commands, which Kakoune will then parse and execute at runtime, often via the eval command. This is something we’ve touched on in the earlier parts of the article.
That said, Kakoune also supports more advanced mechanisms for communicating back:
- Shell scripts can write to FIFO files to talk back to Kakoune asynchronously.
- The shell can also invoke the kak command-line tool with the -p flag, providing the current session ID, to imperatively send commands to a live Kakoune session.

However, this article focuses more on parsing and evaluation, so we won’t dive into FIFO or kak -p usage here.
Instead, let’s go a bit deeper into the injection of shell-derived state into Kakoune, and address a topic that’s both essential and infamous: shell quoting — everyone’s favorite pain point!
Let’s step away from Kakoune scripting for a moment to explore how quoting, splitting, and evaluation work in POSIX shell. This detour is crucial: while Kakoune’s scripting has its own model for splitting and evaluation, the shell behaves differently — especially when it comes to iteration over strings. Since Kakoune often calls out to shell scripts and expects them to interoperate correctly, understanding the shell’s model will help make sense of patterns commonly used in the Kakoune community.
If you want an in-depth reference, the behavior described here follows the POSIX shell specification, particularly sections on Field Splitting.
The shell processes a command in stages. First, it tokenizes the command line into words. Then, within each word, it performs parameter expansion, command substitution, and arithmetic expansion. After those expansions — and only if the word was not double-quoted — the shell performs field splitting, breaking the expanded result into multiple words using characters from the IFS variable (by default: space, tab, and newline).
Here’s a simple example to illustrate this mechanism:
var="hey you"
echo $var
The $var is expanded into hey you, and since it’s unquoted, the shell applies field splitting — resulting in two separate arguments: hey and you.
Now compare that with:
var="hey you"
echo "$var"
Here, the expansion is protected by double quotes, so no field splitting happens. echo receives a single argument: hey you.
Let’s move now to an example which involves iteration. Suppose we want to iterate over a set of values in a shell environment that lacks native list or array types, such as POSIX shell. The simplest approach looks like this:
items="first second third"
for item in $items; do
echo "$item"
done
Here, $items is expanded without quotes, so the shell performs field splitting — giving us the desired iteration over three separate items.
This works well — until your items contain spaces. Effectively, the space character is part of the IFS variable, which is used to identify delimiters during field splitting.
To avoid breaking on spaces, one workaround is to change IFS to a different separator:
IFS=:
items="first item:second item:third item"
for item in $items; do
echo "$item"
done
This solves the issue, but modifying IFS can lead to side effects, since the shell uses it extensively under the hood. And then you also need to take care of escaping that delimiter character if it is present in your items. So it’s common to prefer other techniques.
Another approach is to reassign the shell’s positional parameters using set. This is a popular trick in the Kakoune community:
items="first second third"
set -- $items
for item in "$@"; do
echo "$item"
done
This works just like our earlier for loop, but uses the shell’s internal argument list. The key here is again not quoting $items so that field splitting happens and multiple words are assigned to $1, $2, etc.
However, if any item has spaces, we find ourselves back to square one since they will be split into additional positional arguments.
If you read carefully, you should have noticed that we use the special variable $@ to reference positional arguments in the for loop, and we have wrapped the variable in double quotes. You might think that this would also prevent field splitting during iteration, but this is intentional. Effectively, $@ is a special variable that receives distinct treatment when wrapped in double quotes, as explained in the Special Parameters section of the specification: "When the expansion occurs within double-quotes, and where field splitting is performed, each positional parameter shall expand as a separate field"
In other words, even when enclosed in double quotes, $@ undergoes field splitting. However, this process is not governed by IFS; instead, it is split at each argument while respecting whitespace, if present.
The eval builtin concatenates its arguments into a single string, then reparses and executes it.
So the final pattern — robust and safe for whitespace — combines eval with careful quoting.
items="'first item' 'second item' 'third item'"
eval set -- "$items"
for item in "$@"; do
echo "$item"
done
Let’s unpack this.
- We quote each item with single quotes inside the items string.
- Then we expand $items inside double quotes — so no field splitting occurs at this stage.
- The result of the expansion is passed to eval.

The eval command works as follows:
- It takes three arguments: set, --, and 'first item' 'second item' 'third item'.
- It concatenates these arguments with spaces, resulting in the string set -- 'first item' 'second item' 'third item'.
- It then reparses the concatenated string, considering single quotes for tokenization.
- Finally, it executes set with the arguments --, first item, second item, and third item.
This last step effectively assigns the items to positional arguments, even when they contain spaces.
This layered parsing and re-parsing is a bit of a dance — but once understood, it becomes a powerful idiom for shell scripting when iterating over items that contain spaces.
Before transitioning back to Kakoune, let’s explore the outcome of using the same example as before, but without double-quoting $items. Additionally, let’s increase the spacing within our last item.
items="'first item' 'second item' 'long   item'"
eval set -- $items
for item in "$@"; do
echo "$item"
done
Before reading the explanation, I recommend attempting to guess what will happen. Given the information provided earlier, you should be able to understand the parsing mechanics and identify the outcome.
Let’s analyze the situation.
- The variable $items is expanded in the eval set, but due to the absence of wrapping double quotes, field splitting occurs at spaces.
- It is important to note that single quotes are not treated as proper quoting during field splitting; instead, they are merely data, functioning as actual characters within the items variable.
- As a result, eval receives the following arguments: set, --, 'first, item', 'second, item', 'long, item'. This indicates that something is going wrong.

We will now examine the function of eval:
- It concatenates all provided arguments, joining them with spaces to form the string set -- 'first item' 'second item' 'long item'.
- You should have observed that we lost the extra spaces we intentionally included in our last item.
- Subsequently, eval reparses the above string, taking single quotes into account for tokenization.
- Finally, it executes set with the arguments --, first item, second item, and long item.

It appears that it almost works. However, it has the unintended side effect of only allowing single spaces within items, rather than multiple spaces. Therefore, it is more effective to enclose our $items in double quotes.
eval set -- "$items"
Let’s return to Kakoune shell expansions and apply the pattern we just learned.
Kakoune provides a few shell expansions that behave like lists, such as selections or registers — which can contain multiple values. These are often worth iterating over in scripts.
Suppose we write something like:
eval %sh{
for item in $kak_reg_x; do
printf 'echo -debug "%s"\n' "$item"
done
}
We immediately run into the same issue we discussed earlier: if any item from our x register contains whitespace, this loop won’t behave as expected.
So, let’s try applying our earlier trick using eval set:
eval %sh{
eval set -- "$kak_reg_x"
for item in "$@"; do
printf 'echo -debug "%s"\n' "$item"
done
}
And here comes the surprise... it still doesn’t work.
Why? Because for eval set -- "$kak_reg_x" to work properly, the contents of $kak_reg_x need to have each individual item already quoted — and by default, they are not. These $kak_* variables are not inherently safe or ready for list iteration when items contain whitespace.
Fortunately, Kakoune provides a built-in solution: the $kak_quoted_* variables. These versions of the standard environment variables include proper single-quoting for each value, making them safe to re-evaluate.
So the robust and elegant solution becomes:
eval %sh{
eval set -- "$kak_quoted_reg_x"
for item in "$@"; do
printf 'echo -debug "%s"\n' "$item"
done
}
If you browse the Kakoune standard library or community plugins, you’ll often spot this exact pattern. It’s a reliable idiom for working with multi-value expansions in Kakoune scripting.
I know this has been a lot to take in — but there’s one last topic I want to touch on before we move on.
As we’ve seen, many plugins need to send commands back to Kakoune from the shell. When these commands include dynamic values — especially those containing whitespace or quote characters — it’s important to quote them correctly, using Kakoune’s expected syntax.
Let’s start with a simple example to set the context:
eval %sh{
printf 'echo "%s"\n' "$dynamic"
}
Here, after the echo, we wrap the value of the $dynamic variable in double quotes. But what happens if $dynamic itself contains a double quote (") character? It will break the quoting in the Kakoune command.
To avoid this, we might think: "Let’s use single quotes instead":
eval %sh{
printf "echo '%s'\n" "$dynamic"
}
This still doesn’t solve the problem. What now if the dynamic value contains a single quote?
So now we’re asking: Is there a safe, general-purpose quoting strategy we can use in all cases?
Yes — but it helps to understand why.
To handle unknown or unpredictable dynamic values (which might include quotes or whitespace), we need a quoting mechanism that supports escaping. That way, if the value contains a quote character, we can escape it safely.
Let’s go through our options:
- Balanced %{} strings — These look nice and are easy to generate from the shell. But they don’t support escaping. If your string contains an unmatched {, it will break — and there’s no way to recover.
- Double-quoted strings — These support escaping, which is good. But they also support expansions, which can be dangerous and surprising.
- Single-quoted strings — These support escaping and don’t perform expansions, making them much safer.
We have a winner: single-quoted strings are the most robust and predictable choice for our use-case.
You could also use %|...| (or other non-nesting delimiter forms), which behave like single quotes, but they’re more complex to generate from the shell, as they need both a % and a non-nesting delimiter — so we’ll stick with single quotes here.
Even with single quotes, there’s still one last issue: what if your dynamic value itself contains a single quote (')? You can’t just drop that into single quotes without escaping it.
The proper escape mechanism is to double the single quotes inside the string. For example:
- Input: Don't Let's Start
- Escaped: 'Don''t Let''s Start'
This escaping logic is a bit tedious to implement by hand, so here’s a small POSIX-compliant function that does it for you:
kakquote() {
    set -- "$*" ""
    while [ "${1#*\'}" != "$1" ]; do
        set -- "${1#*\'}" "$2${1%%\'*}''"
    done
    printf "'%s' " "$2$1"
}
This function ensures that your string is safely wrapped in single quotes and that any inner single quotes are properly escaped.
We won’t go into the details of how it works — feel free to explore this discussion thread for more insights — but know that it’s robust and safe to use when quoting complex or unknown values in Kakoune scripts.
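As a usage sketch — assuming kakquote is defined in the same %sh{} block and $dynamic holds some arbitrary text, as in the earlier examples:

eval %sh{
    # kakquote wraps the value in single quotes and doubles any inner quotes
    printf 'echo %s\n' "$(kakquote "$dynamic")"
}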
You don’t need to use kakquote everywhere. If you’re sure your variable doesn’t contain any problematic characters, keep it simple. In most cases, direct quoting is enough.
But it’s good to have this function in your toolbox — for those rare but tricky cases where quoting goes wrong and you’re not sure why.
Now that we have the core parsing mechanics in mind, let’s move on to some arbitrary tips and patterns that can help you simplify your kakscript and make it more readable.
Kakoune gives you several tools to store and reuse state: options with different levels of scopes, and also registers. You can think of these as variables. They’re especially helpful when you want to avoid deeply nested eval calls and shell blocks that quickly become unreadable.
Let’s take an example: suppose you want to focus the client which currently has a buffer open matching a given path.
A naive approach would be to directly nest shell and eval blocks:
def focus-path -params 1 %{ eval %sh{
    for client in ${kak_client_list}; do
        printf 'eval -client %s %%{
            eval %%sh{
                if [ "$kak_bufname" = "%s" ]; then
                    printf focus
                else
                    printf nop
                fi
            }
        }\n' "$client" "$1"
    done
}}
This code works, but it’s deeply nested:
- eval of the outer loop with a corresponding shell block
- inside, eval -client to execute a command for each client
- inside that, another eval and shell block which conditionally prints focus
This is hard to reason about. You have to mentally simulate multiple levels of parsing and quoting over time. This is where many Kakoune users get frustrated.
Instead of evaluating logic on the fly, we can collect all the relevant state first, and then process it in a flat loop. This greatly reduces nesting.
Here’s a more readable alternative using register c to store state as tuples of client and bufname.
def focus-path -params 1 %{ eval -save-regs c %{
    eval -client '*' 'reg c %reg{c} %val{client} %val{bufname}'
    eval %sh{
        path="$1"
        eval set -- "$kak_quoted_reg_c"
        while [ $# -gt 0 ]; do
            client=$1
            buffer=$2
            if [ "$path" = "$buffer" ]; then
                printf 'eval -client %s focus\n' "$client"
            fi
            shift 2
        done
    }
}}
Let’s break down the logic:
So here, the first eval uses the recently introduced -client '*' — which evaluates commands on all clients — to append each client and its corresponding buffer to the c register. Note that a register resembles a list, and can contain multiple values.
Then we iterate over the register in shell, two elements at a time.
We perform our comparison and emit a command if needed — no extra levels of eval or quoting.
- Less nesting: We only have two blocks with clear roles (one to collect, and then one shell expansion to process).
- Easier to debug: You can inspect the c register to see what state is being passed.
- Separation of concerns: Gathering and filtering logic are cleanly separated.

You can try to generalize this pattern:
- Use Kakoune to collect scoped state into a register
- Use shell logic to process it
This keeps Kakoune’s quoting rules from interfering with your logic, and makes your code much easier to extend.
When you use %sh{} blocks in Kakoune, you get a set of useful environment variables automatically injected into the shell:
- Positional parameters like $1, $2, etc., for command parameters.
- Contextual values like $kak_client, $kak_bufname, or $kak_opt_<option>, mirroring what you might get via %val{}, %opt{}, etc.
These are great when you need to dynamically process those values inside the shell script.
But if you’re only passing values through — for example, inserting them into a Kakoune command without modifying them — then it’s often better to skip the shell variable altogether and use Kakoune expansions (like %arg{1} or %opt{foo}) directly in the output string. This avoids quoting issues and makes your intent clearer.
define-command foobar -params 1 %{ eval %sh{
printf "echo '%s'\n" "$1"
}}
If you try it with foobar hello, you see hello echoed.
This works — until it doesn’t. If the user runs:
foobar "hey i'am good"
The underlying shell output will be:
echo 'hey i'am good'
So here, in terms of parsing, Kakoune will consider what is enclosed in the first two single quotes as being the first argument of echo, then the second one will be am and the third one will be good' (including the ending quote, as there is no matching one). When calling the echo command in Kakoune, it will print its arguments separated by spaces.
So the resulting printed message will be hey i am good'. The quote feels misplaced. And yes, we are definitely in quoting hell here!
define-command foobar -params 1 %{ eval %sh{
printf 'echo "%%arg{1}"\n'
}}
Here, %arg{1} is not expanded in the shell block. It is emitted as a literal string in the output. Once the shell block finishes, Kakoune receives:
echo "%arg{1}"
…and then it performs the expansion itself, after parsing. No quoting issues.
- Use shell variables ($1, $kak_opt_*) when you need to do something with the values inside the shell: testing them, transforming them, or using them in shell logic.
- Use Kakoune expansions (%arg{}, %opt{}) when you’re just assembling a Kakoune command and don’t need to inspect or modify the value in the shell.

This isn’t just about escaping: it’s also about clarity of intent. When you use %opt{foo} inside a shell block, it’s obvious to readers that the shell won’t touch it — the value is meant to be handled by Kakoune after the shell step.
We’ve touched on this before, but it’s worth stating clearly: shell expansions (%sh{}) always spawn a subprocess. If used carelessly — especially in hooks or frequently triggered commands — this can result in unnecessary overhead, with multiple shell processes running in the background.
Sometimes, what you’re trying to achieve with a shell can be done directly through Kakoune’s built-in commands. Not only is this more efficient, but it often leads to more expressive and readable code.
Take the following example:
nop %sh{
printf "trusted" > "$kak_opt_trust_file"
}
This simply writes the string trusted into a file, the path to which is stored in the trust_file option. But we can do this using pure Kakoune syntax:
echo -to-file %opt{trust_file} trusted
This is shorter, clearer, and shell-free. It avoids spawning a shell entirely and makes your intent explicit.
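Similarly, appending to a list-type option needs no shell round-trip either: set-option has an -add switch (a small sketch using the built-in static_words completion list):

set-option -add global static_words kakscript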
The takeaway here is simple: before reaching for %sh{}, ask yourself if there’s a native Kakoune command that can do the job.
It also extends to conditional logic. With a bit of creativity, you can often express if/else-style branching using pure Kakoune syntax — avoiding the shell entirely.
Here’s a neat example using what’s sometimes called the "lambda calculus" trick:
define-command -params 2 if-true %{ eval %arg{1} }
define-command -params 2 if-false %{ eval %arg{2} }
"if-%opt{autospell_enabled}" autospell-disable autospell-enable
Let’s break that down:
- if-true and if-false are two generic branching commands that evaluate their first or second argument, respectively.
- The trick comes from naming the commands in a way that matches the value of an option — e.g., if autospell_enabled is false, the expansion becomes if-false.
This dispatches logic based on an option’s value, without any shell expansion.
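For the trick to work, the option obviously needs to exist and to hold one of the expected values — for instance (assuming the autospell plugin doesn’t already declare it):

declare-option bool autospell_enabled false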
Here’s another variation using try/catch to test whether an option holds a valid Kakoune command or not:
declare-option str focusmode
set-option global focusmode nop
define-command load-ui %{
    try %{
        eval %opt{focusmode}
        # focus mode is disabled
        try %{add-highlighter global/ number-lines}
    } catch %{
        # focus mode is enabled
        try %{remove-highlighter global/number-lines}
    }
    # shared logic
    set global indentwidth 4
}
Here, calling load-ui while focusmode is set to nop (a real Kakoune command which does nothing) results in successful evaluation, executing the try block. If the value isn’t a valid command, eval fails, and the catch block runs instead.
These patterns can be extremely helpful for plugin authors who want to branch logic without invoking a subshell.
You can find more information about this branching example on this Kakoune forum thread.
These approaches might seem obscure at first, but they illustrate a deeper point:
The more Kakoune-native your logic becomes, the less you’ll need the shell.
That often means:
- Getting comfortable with command flags.
- Understanding how to use options, registers, and expansions effectively.
- Reading plugins from other Kakoune users to discover useful tricks.
- Reading the :doc pages regularly — the more you absorb, the easier it becomes to spot shell-free alternatives.
Over time, as you develop plugins or build more complex configurations, you’ll start to see these patterns more clearly. You’ll write less shell, your code will run faster, and your scripts will be easier to debug and reason about.
Kakoune’s command language may seem simple on the surface, but writing reliable scripts requires a clear understanding of how it parses, expands, and evaluates commands — both statically and dynamically. By mastering these mechanics and adopting simplification patterns, you’ll write cleaner, more robust, and less fragile Kakoune scripts.