kvrocks_config_file_chs_translate
################################ TRANSLATE INFO ##############################
# Bilingual Chinese-English translation (rendered here in English).
# Translated with DeepL plus manual review so that most sentences read smoothly.
# Based on the default configuration file of kvrocks 2.2.0.
# Some parameter values have been modified from the defaults; do not copy this file for direct use.
################################ GENERAL #####################################
# By default kvrocks listens for connections from the localhost interface. It is possible to listen on just one or multiple interfaces using the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
bind 0.0.0.0
# Unix socket.
#
# Specify the path of the Unix socket that will be used to listen for incoming connections. There is no default, so kvrocks will not listen on a Unix socket when not specified.
#
# unixsocket /tmp/kvrocks.sock
# unixsocketperm 777
# Accept connections on the specified port, default is 6666.
port 6666
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
# The number of worker threads; increasing or decreasing it affects performance.
workers 12
# By default, kvrocks does not run as a daemon. Use 'yes' if you need it. Note that kvrocks will write a PID file to /var/run/kvrocks.pid when daemonized.
# (Translator's note: when daemonized, kvrocks neither receives terminal signals nor writes its output stream to stdout.)
daemonize no
# Kvrocks implements a cluster solution similar to the Redis cluster solution. You can get cluster information with the CLUSTER NODES|SLOTS|INFO commands, and it is also compatible with redis-cli, redis-benchmark, Redis cluster SDKs, and Redis cluster proxies. However, kvrocks nodes do not communicate with each other, so you must set the cluster topology with the CLUSTER SETNODES|SETNODEID commands; more details: #219.
#
# PLEASE NOTE:
# If you enable cluster mode, kvrocks will encode each key with its slot id, calculated as CRC16 of the key modulo 16384; encoding keys with their slot ids makes it efficient to migrate keys by slot. So if cluster mode was enabled the first time the server started, it must not be disabled after restarting, and vice versa. That is to say, data is not compatible between standalone mode and cluster mode; you must migrate the data if you want to change modes, otherwise kvrocks will corrupt the data.
#
# Default: no
cluster-enabled no
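#
# Illustrative sketch (not part of the upstream file; assumes the port
# configured above and a host with redis-cli installed): a key's slot is
# slot(key) = CRC16(key) % 16384, and the topology can be inspected with the
# commands named above, e.g.:
#
#   redis-cli -p 6666 CLUSTER INFO
#   redis-cli -p 6666 CLUSTER NODES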
# Persist the cluster nodes topology in a local file ($dir/nodes.conf). This configuration takes effect only if cluster mode is enabled.
#
# If yes, it will try to load the cluster topology from the local file when starting, and dump the cluster nodes into the file whenever they change.
#
# Default: yes
# persist-cluster-nodes-enabled yes
# Set the max number of connected clients at the same time. By default this limit is set to 10000 clients. However, if the server is not able to configure the process file limit to allow for the specified limit, the max number of allowed clients is set to the current file limit.
# (Translator's note: this may require raising the kernel's max-open-files limit; see ulimit -a.)
#
# Once the limit is reached the server will close all new connections, sending the error 'max number of clients reached'.
#
maxclients 8192
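#
# Illustrative sketch (an assumption, not from the upstream file): on Linux you
# can check and raise the per-process open-file limit for the current shell:
#
#   ulimit -n          # show the current limit
#   ulimit -n 16384    # raise it (raising the hard limit may require root)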
# Require clients to issue AUTH <PASSWORD> before processing any other commands. This might be useful in environments in which you do not trust others with access to the host running kvrocks.
#
# This should stay commented out for backward compatibility and because most people do not need auth (e.g. they run their own servers).
#
# Warning: since kvrocks is pretty fast, an outside user can try up to 150k passwords per second against a good box. This means that you should use a very strong password, otherwise it will be very easy to break.
#
# requirepass foobared
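#
# Illustrative sketch (assuming requirepass were enabled with the example value
# above): a client must authenticate before issuing other commands, e.g.:
#
#   redis-cli -p 6666 AUTH foobared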
# If the master is password protected (using the "masterauth" configuration directive below), it is possible to tell the slave to authenticate before starting the replication synchronization process. Otherwise, the master will refuse the slave's request.
#
# masterauth foobared
# Master-Slave replication checks whether the db name matches; if not, the slave refuses to sync the db from the master. Don't use the default value; set the db-name to identify the cluster.
db-name change.me.db
# The working directory.
#
# The DB will be written inside this directory. Note that you must specify a directory here, not a file name.
dir /root/incubator-kvrocks/db
# You can configure where to store your server logs with log-dir. If you don't specify one, the above `dir` is used as the default log directory. Sending logs to stdout/stderr is as simple as:
#
log-dir stdout
# Log level
# Possible values: info, warning, error, fatal
# Default: info
log-level info
# You can configure log-retention-days to control whether to enable the log cleaner and the maximum number of days that INFO level logs will be kept.
#
# if set to -1, the log cleaner is disabled.
# if set to 0, all previous INFO level logs are immediately removed.
# if set to a value between 0 and INT_MAX, the latest N (log-retention-days) days of logs are retained.
# By default log-retention-days is -1.
log-retention-days -1
# When running in daemonize mode, kvrocks writes a PID file to ${CONFIG_DIR}/kvrocks.pid by default. You can specify a custom pid file location here.
# pidfile /var/run/kvrocks.pid
pidfile ""
# You can configure a slave instance to accept writes or not. Writing against a slave instance may be useful to store some ephemeral data (because data written on a slave will be easily deleted after resync with the master) but may also cause problems if clients are writing to it because of a misconfiguration.
slave-read-only yes
# The slave priority is an integer number published by Kvrocks in the INFO output. It is used by Redis Sentinel in order to select a slave to promote into a master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so for instance if there are three slaves with priority 10, 100, 25, Sentinel will pick the one with priority 10, which is the lowest.
#
# However a special priority of 0 marks the replica as not able to perform the role of master, so a slave with priority of 0 will never be selected by Redis Sentinel for promotion.
#
# By default the priority is 100.
slave-priority 100
# TCP listen() backlog.
#
# In high requests-per-second environments you need a high backlog in order to avoid slow-client connection issues. Note that the Linux kernel will silently truncate it to the value of /proc/sys/net/core/somaxconn, so make sure to raise both the value of somaxconn and tcp_max_syn_backlog in order to get the desired effect.
tcp-backlog 511
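#
# Illustrative sketch (an assumption, not from the upstream file; the numbers
# are example values only): raising both kernel limits on Linux so that the
# configured backlog is not truncated (requires root):
#
#   sysctl -w net.core.somaxconn=1024
#   sysctl -w net.ipv4.tcp_max_syn_backlog=2048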
# If the master is an old version, it may have specified replication threads that use 'port + 1' as the listening port, but in new versions, we don't use an extra port to implement replication. In order to allow new replicas to copy old masters, you should indicate whether the master uses a replication port.
#
# If yes, the master uses a replication port and replicas will connect to 'master's listening port + 1' for synchronization. If no, the master doesn't use a replication port and replicas will connect to 'master's listening port' for synchronization.
master-use-repl-port no
# Currently, the master only checks the sequence number when a replica asks for PSYNC. That is not enough, since master and replica may have different replication histories even if the sequence the replica asks for is within the range of the master's current WAL.
#
# We designed 'Replication Sequence ID' PSYNC: we add a unique replication id for every write batch (the operation of each command on the storage engine), so the combination of replication id and sequence is unique per write batch. The master can identify whether the replica has the same replication history by checking the replication id and sequence.
#
# By default, it is not enabled since this stricter check may easily lead to full synchronization.
use-rsid-psync no
# Master-Slave replication. Use slaveof to make a kvrocks instance a copy of another kvrocks server. A few things to understand ASAP about kvrocks replication.
#
# 1) Kvrocks replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of slaves.
#
# 2) Kvrocks slaves are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the next
#    sections of this file) with a sensible value depending on your needs.
#
# 3) Replication is automatic and does not need user intervention. After a
#    network partition slaves automatically try to reconnect to masters
#    and resynchronize with them.
#
# slaveof <masterip> <masterport>
# slaveof 127.0.0.1 6379
# When a slave loses its connection with the master, or when the replication is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
#    still reply to client requests, possibly with out-of-date data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
#    an error "SYNC with master in progress" to all kinds of commands
#    but to INFO and SLAVEOF.
#
slave-serve-stale-data yes
# To keep the slave's data safe and able to serve while it is in the full synchronization state, the slave keeps its own data. But this occupies a lot of disk space, so we provide a way to reduce disk usage: the slave deletes its entire database before fetching files from the master during full synchronization. If you want to enable this behavior, set 'slave-empty-db-before-fullsync' to yes, but be aware that the database will be lost if the master goes down during full synchronization, unless you have a backup of the database.
#
# This option is similar to the redis replicas RDB diskless load option:
# repl-diskless-load on-empty-db
#
# Default: no
slave-empty-db-before-fullsync no
# If replicas need full synchronization with the master, the master needs to create a checkpoint for feeding the replicas, and the replicas also stage a checkpoint of the master. If we also keep the backup, it may occupy extra disk space. You can enable 'purge-backup-on-fullsync' if disk space is not sufficient, but that may cause the remote backup copy to fail.
#
# Default: no
purge-backup-on-fullsync no
# The maximum allowed rate (in MB/s) that should be used by replication. If the rate exceeds max-replication-mb, replication will slow down.
# Default: 0 (i.e. no limit)
max-replication-mb 0
# The maximum allowed aggregated write rate of flush and compaction (in MB/s). If the rate exceeds max-io-mb, io will slow down. 0 means no limit.
#
# Default: 500
max-io-mb 500
# The maximum allowed space (in GB) that should be used by RocksDB. If the total size of the SST files exceeds max_allowed_space, writes to RocksDB will fail.
#
# Please see: https://github.com/facebook/rocksdb/wiki/Managing-Disk-Space-Utilization
# Default: 0 (i.e. no limit)
max-db-size 0
# The maximum number of backups to keep: a server cron runs every minute to check the number of current backups, and purges old backups if it exceeds this number. If max-backup-to-keep is 0, no backup is kept. For now, only 0 or 1 is supported.
max-backup-to-keep 1
# The maximum number of hours to keep the backup. If max-backup-keep-hours is 0, no backup is ever purged.
# default: 1 day
max-backup-keep-hours 24
# max-bitmap-to-string-mb limits the maximum size (in MB) of a bitmap-to-string conversion.
#
# Default: 16
max-bitmap-to-string-mb 16
################################## TLS ###################################
# By default, TLS/SSL is disabled, i.e. `tls-port` is set to 0.
# To enable it, `tls-port` can be used to define TLS-listening ports.
# tls-port 0
# Configure an X.509 certificate and private key to use for authenticating the server to connected clients, masters or cluster peers. These files should be PEM formatted.
#
# tls-cert-file kvrocks.crt
# tls-key-file kvrocks.key
# If the key file is encrypted using a passphrase, it can be included here as well.
#
# tls-key-file-pass secret
# Configure a CA certificate(s) bundle or directory to authenticate TLS/SSL clients and peers. Kvrocks requires an explicit configuration of at least one of these, and will not implicitly use the system-wide configuration.
#
# tls-ca-cert-file ca.crt
# tls-ca-cert-dir /etc/ssl/certs
# By default, clients on a TLS port are required to authenticate using valid client side certificates.
#
# If "no" is specified, client certificates are not required and not accepted. If "optional" is specified, client certificates are accepted and must be valid if provided, but are not required.
#
# tls-auth-clients no
# tls-auth-clients optional
# By default, only TLSv1.2 and TLSv1.3 are enabled, and it is highly recommended that older, formally deprecated versions are kept disabled to reduce the attack surface. You can explicitly specify the TLS versions to support.
# Allowed values are case insensitive and include "TLSv1", "TLSv1.1", "TLSv1.2", "TLSv1.3" (OpenSSL >= 1.1.1) or any combination.
# To enable only TLSv1.2 and TLSv1.3, use:
#
# tls-protocols "TLSv1.2 TLSv1.3"
# Configure allowed ciphers. See the ciphers(1ssl) manpage for more information about the syntax of this string.
#
# Note: this configuration applies only to <= TLSv1.2.
#
# tls-ciphers DEFAULT:!MEDIUM
# Configure allowed TLSv1.3 ciphersuites. See the ciphers(1ssl) manpage for more information about the syntax of this string, and specifically for TLSv1.3 ciphersuites.
#
# tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256
# When choosing a cipher, use the server's preference instead of the client's preference. By default, the server follows the client's preference.
#
# tls-prefer-server-ciphers yes
# By default, TLS session caching is enabled to allow faster and less expensive reconnections by clients that support it. Use the following directive to disable caching.
#
# tls-session-caching no
# Change the default number of TLS sessions cached. A zero value sets the cache to unlimited size. The default size is 20480.
#
# tls-session-cache-size 5000
# Change the default timeout of cached TLS sessions. The default timeout is 300 seconds.
#
# tls-session-cache-timeout 60
################################## SLOW LOG ###################################
# The Kvrocks Slow Log is a mechanism to log queries that exceeded a specified execution time. The execution time does not include I/O operations like talking with the client, sending the reply and so forth, but just the time needed to actually execute the command (this is the only stage of command execution where the thread is blocked and cannot serve other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Kvrocks the execution time, in microseconds, that a command must exceed in order to get logged, and the other parameter is the length of the slow log. When a new command is logged, the oldest one is removed from the queue of logged commands.
# The following time is expressed in microseconds, so 1000000 is equivalent to one second. Note that a value of -1 disables the slow log, while a value of zero forces the logging of every command.
slowlog-log-slower-than 1000000
# There is no limit to this length. Just be aware that it will consume memory. You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
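#
# Illustrative sketch (an assumption; SLOWLOG RESET is named above, and the
# Redis-style SLOWLOG GET subcommand is assumed to be available): inspecting
# and then clearing the slow log from a client:
#
#   redis-cli -p 6666 SLOWLOG GET
#   redis-cli -p 6666 SLOWLOG RESET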
# If you run kvrocks from upstart or systemd, kvrocks can interact with your supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting kvrocks into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready." They do not enable continuous liveness pings back to your supervisor.
supervised no
################################## PERF LOG ###################################
# The Kvrocks Perf Log is a mechanism to log the performance context of queries that exceeded a specified execution time. This mechanism uses rocksdb's Perf Context and IO Stats Context. Please see:
# (Translator's note: useful for finding performance bottlenecks, tuning performance, and debugging crashes.)
# https://github.com/facebook/rocksdb/wiki/Perf-Context-and-IO-Stats-Context
#
# This mechanism is enabled when profiling-sample-commands is not empty and profiling-sample-ratio is greater than 0. It is important to note that this mechanism affects performance, but it is useful for troubleshooting performance bottlenecks, so it should only be enabled when performance problems occur.
# The names of the commands you want to record. They must be original names of commands supported by Kvrocks. Use ',' to separate multiple commands and use '*' to record all commands supported by Kvrocks.
# Example:
# - Single command: profiling-sample-commands get
# - Multiple commands: profiling-sample-commands get,mget,hget
#
# Default: empty
# profiling-sample-commands ""
# The ratio of samples that will be recorded. It is a number between 0 and 100. We simply use rand to determine whether to record the sample or not.
#
# Default: 0
profiling-sample-ratio 0
# There is no limit to this length. Just be aware that it will consume memory. You can reclaim memory used by the perf log with PERFLOG RESET.
#
# Default: 256
profiling-sample-record-max-len 256
# profiling-sample-record-threshold-ms tells kvrocks when to record.
#
# Default: 100 milliseconds
profiling-sample-record-threshold-ms 100
################################## CRON ###################################
# Compact Scheduler, auto compact at scheduled time. The time expression format is the same as crontab (currently only * and int are supported).
# e.g. compact-cron 0 3 * * * 0 4 * * *
# would compact the db at 3am and 4am every day
# compact-cron 0 3 * * *
# The hour range in which the compaction checker is active.
# e.g. compaction-checker-range 0-7 means the compaction checker works between
# 0-7am every day.
compaction-checker-range 0-7
# Bgsave scheduler, auto bgsave at scheduled time. The time expression format is the same as crontab (currently only * and int are supported).
# e.g. bgsave-cron 0 3 * * * 0 4 * * *
# would bgsave the db at 3am and 4am every day
# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared environment. For instance, the KEYS command may be renamed into something hard to guess so that it will still be available for internal-use tools but not available for general clients.
#
# Example:
#
# rename-command KEYS b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into an empty string:
#
# rename-command KEYS ""
################################ MIGRATE #####################################
# If the network bandwidth is completely consumed by the migration task, it will affect the availability of kvrocks. To avoid this situation, migrate-speed is adopted to limit the migration speed. The migration speed is limited by controlling the duration between sending data; the duration is calculated by: 1000000 * migrate-pipeline-size / migrate-speed (us). Value: [0, INT_MAX]; 0 means no limit. (A worked example follows migrate-pipeline-size below.)
#
# Default: 4096
migrate-speed 4096
# In order to reduce data transmission times and improve the efficiency of data migration, a pipeline is adopted to send multiple data items at once. The pipeline size can be set by this option. Value: [1, INT_MAX]; it can't be 0.
#
# Default: 16
migrate-pipeline-size 16
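#
# Worked example (illustrative, using the two values configured above):
# the pause between pipeline sends is 1000000 * 16 / 4096 ≈ 3906 microseconds.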
# In order to reduce the write-forbidden time when migrating a slot, we migrate the incremental data several times to reduce the amount of incremental data. Only once the quantity of incremental data is reduced to a certain threshold is the slot forbidden to write. The threshold is set by this option. Value: [1, INT_MAX]; it can't be 0.
#
# Default: 10000
migrate-sequence-gap 10000
################################ ROCKSDB #####################################
# Specify the capacity of the metadata column family block cache. A larger block cache may make requests faster, as more keys would be cached. Max size is 200*1024.
# Default: 2048MB
rocksdb.metadata_block_cache_size 8192
# Specify the capacity of the subkey column family block cache. A larger block cache may make requests faster, as more keys would be cached. Max size is 200*1024.
# Default: 2048MB
rocksdb.subkey_block_cache_size 8192
# The metadata column family and subkey column family will share a single block cache if set to 'yes'. The capacity of the shared block cache is metadata_block_cache_size + subkey_block_cache_size.
#
# Default: yes
rocksdb.share_metadata_and_subkey_block_cache yes
# A global cache for table-level rows in RocksDB. If the workload is almost always point lookups, enlarging the row cache may improve read performance. Otherwise, if we enlarge this value, we can lessen the metadata/subkey block cache size.
#
# Default: 0 (disabled)
rocksdb.row_cache_size 0
# Number of open files that can be used by the DB. You may need to increase this if your database has a large working set. A value of -1 means files opened are always kept open. You can estimate the number of files based on target_file_size_base and target_file_size_multiplier for level-based compaction. For universal-style compaction, you can usually set it to -1.
# Default: 4096
rocksdb.max_open_files 4096
# Amount of data to build up in memory (backed by an unsorted log
# on disk) before converting to a sorted on-disk file.
#
# Larger values increase performance, especially during bulk loads. Up to max_write_buffer_number write buffers may be held in memory at the same time, so you may wish to adjust this parameter to control memory usage. Also, a larger write buffer will result in a longer recovery time the next time the database is opened.
#
# Note that write_buffer_size is enforced per column family. See db_write_buffer_size for sharing memory across column families.
# default is 64MB
rocksdb.write_buffer_size 128
# Target file size for compaction; the target file size for level L can be calculated as target_file_size_base * (target_file_size_multiplier ^ (L-1)).
#
# Default: 128MB
rocksdb.target_file_size_base 128
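#
# Worked example (illustrative; target_file_size_multiplier is not set in this
# file, so a hypothetical value of 2 is assumed): the level-3 target file size
# would be 128MB * (2 ^ (3 - 1)) = 512MB.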
# The maximum number of write buffers that are built up in memory. The default and the minimum number is 2, so that when 1 write buffer is being flushed to storage, new writes can continue to the other write buffer. If max_write_buffer_number > 3, writing will be slowed down to options.delayed_write_rate if we are writing to the last write buffer allowed.
rocksdb.max_write_buffer_number 4
# Maximum number of concurrent background compaction jobs, submitted to the default LOW priority thread pool.
rocksdb.max_background_compactions 4
# Maximum number of concurrent background memtable flush jobs, submitted by default to the HIGH priority thread pool. If the HIGH priority thread pool is configured to have zero threads, flush jobs will share the LOW priority thread pool with compaction jobs.
rocksdb.max_background_flushes 4
# This value represents the maximum number of threads that will concurrently perform a compaction job by breaking it into multiple, smaller ones that are run simultaneously.
# Default: 2 (i.e. no subcompactions)
rocksdb.max_sub_compactions 2
# In order to limit the size of WALs, RocksDB uses DBOptions::max_total_wal_size as the trigger of column family flush. Once WALs exceed this size, RocksDB will start forcing the flush of column families to allow deletion of some of the oldest WALs. This config can be useful when column families are updated at non-uniform frequencies. If there's no size limit, users may need to keep really old WALs when the infrequently-updated column families haven't flushed for a while.
#
# In kvrocks, we use multiple column families to store metadata, subkeys, etc. If users always use the string type but use list, hash and other complex data types infrequently, there will be a lot of old WALs if we don't set a size limit (0 by default in rocksdb), because if set to 0, rocksdb will dynamically choose the WAL size limit to be [sum of all write_buffer_size * max_write_buffer_number] * 4.
#
# Moreover, you should increase this value if you have already set rocksdb.write_buffer_size to a big value, to avoid influencing the effect of rocksdb.write_buffer_size and rocksdb.max_write_buffer_number.
#
# default is 512MB
rocksdb.max_total_wal_size 512
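#
# Worked example (illustrative, counting a single column family only): with
# rocksdb.write_buffer_size 128 (MB) and rocksdb.max_write_buffer_number 4 as
# configured above, the dynamic limit described above would be
# 128MB * 4 * 4 = 2048MB per column family if this option were left at 0.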
# We implement replication with the rocksdb WAL; full sync is triggered when the seq is out of range. wal_ttl_seconds and wal_size_limit_mb affect how archived logs are deleted. If WAL_ttl_seconds is not 0, WAL files will be checked every WAL_ttl_seconds / 2, and those older than WAL_ttl_seconds will be deleted.
#
# Default: 3 hours
rocksdb.wal_ttl_seconds 10800
# If WAL_ttl_seconds is 0 and WAL_size_limit_MB is not 0, WAL files will be checked every 10 min, and if the total size is greater than WAL_size_limit_MB, they will be deleted starting with the earliest until the size limit is met. All empty files will be deleted.
# Default: 16GB
rocksdb.wal_size_limit_mb 131072
# Approximate size of user data packed per block. Note that the block size specified here corresponds to uncompressed data. The actual size of the unit read from disk may be smaller if compression is enabled.
#
# Default: 4KB
rocksdb.block_size 32768
# Indicates whether to put index/filter blocks into the block cache.
#
# Default: no
rocksdb.cache_index_and_filter_blocks yes
# Specify the compression to use. Only levels greater than 2 are compressed, to improve performance. Accepted values: "no", "snappy", "lz4", "zstd", "zlib"
# default snappy
rocksdb.compression zstd
# If non-zero, we perform bigger reads when doing compaction. If you're running RocksDB on spinning disks, you should set this to at least 2MB. That way RocksDB's compaction is doing sequential instead of random reads. When non-zero, we also force new_table_reader_for_compaction_inputs to true.
#
# Default: 2 MB
rocksdb.compaction_readahead_size 4194304
# The limited write rate to the DB if soft_pending_compaction_bytes_limit or level0_slowdown_writes_trigger is triggered.
# If the value is 0, we will infer a value from the `rate_limiter` value if it is not empty, or 16MB if `rate_limiter` is empty. Note that if users change the rate in `rate_limiter` after the DB is opened, `delayed_write_rate` won't be adjusted.
#
rocksdb.delayed_write_rate 0
# If enable_pipelined_write is true, a separate write thread queue is maintained for WAL writes and memtable writes.
#
# Default: no
rocksdb.enable_pipelined_write no
# Soft limit on the number of level-0 files. We start slowing down writes at this point. A value < 0 means that no write slowdown will be triggered by the number of files in level-0.
#
# Default: 20
rocksdb.level0_slowdown_writes_trigger 20
# Maximum number of level-0 files. We stop writes at this point.
#
# Default: 40
rocksdb.level0_stop_writes_trigger 80
# Number of files to trigger level-0 compaction.
#
# Default: 4
rocksdb.level0_file_num_compaction_trigger 8
# if not zero, dump rocksdb.stats to LOG every stats_dump_period_sec
#
# Default: 0
rocksdb.stats_dump_period_sec 0
# if yes, auto compaction is disabled, but manual compaction still works
#
# Default: no
rocksdb.disable_auto_compactions no
# BlobDB (key-value separation) is essentially RocksDB for large-value use cases. Since 6.18.0, the new implementation is integrated into the RocksDB core. When set, large values (blobs) are written to separate blob files, and only pointers to them are stored in SST files. This can reduce write amplification for large-value use cases at the cost of introducing a level of indirection for reads.
# Please see: https://github.com/facebook/rocksdb/wiki/BlobDB.
#
# Note that BlobDB-related configuration items take effect only when enable_blob_files is set to yes.
#
# Default: no
rocksdb.enable_blob_files no
# The size of the smallest value to be stored separately in a blob file. Values which have an uncompressed size smaller than this threshold are stored alongside the keys in SST files in the usual fashion.
#
# Default: 4096 bytes; 0 means that all values are stored in blob files
rocksdb.min_blob_size 4096
# The size limit for blob files. When writing blob files, a new file is opened once this limit is reached.
#
# Default: 268435456 bytes
rocksdb.blob_file_size 268435456
# Enables garbage collection of blobs. Valid blobs residing in blob files older than a cutoff get relocated to new files as they are encountered during compaction, which makes it possible to clean up blob files once they contain nothing but obsolete/garbage blobs.
# See also rocksdb.blob_garbage_collection_age_cutoff below.
#
# Default: yes
rocksdb.enable_blob_garbage_collection yes
# The percentage cutoff in terms of blob file age for garbage collection. Blobs in the oldest N blob files will be relocated when encountered during compaction, where N = (garbage_collection_cutoff / 100) * number_of_blob_files. Note that this value must belong to [0, 100].
#
# Default: 25
rocksdb.blob_garbage_collection_age_cutoff 75
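#
# Worked example (illustrative): with the value 75 configured above and, say,
# 100 blob files on disk, the oldest N = (75 / 100) * 100 = 75 blob files are
# eligible for relocation when encountered during compaction.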
# The purpose of the following three options is to dynamically adjust the upper limit of the data that each level can store, according to the sizes of the different levels of the LSM tree. Enabling this option brings some improvement in deletion efficiency and space amplification, but it costs a certain amount of read performance.
# If you want to know more details about levels' target sizes, you can read the RocksDB wiki:
# https://github.com/facebook/rocksdb/wiki/Leveled-Compaction#levels-target-size
#
# Default: no
rocksdb.level_compaction_dynamic_level_bytes no
# The total file size of level-1 SSTs.
#
# Default: 268435456 bytes (256MB)
rocksdb.max_bytes_for_level_base 268435456
# Multiplication factor for the total file size of the L(n+1) level. This option is a double type number in RocksDB, but kvrocks does not support the double data type yet, so we currently use an integer instead of a double.
#
# Default: 10
rocksdb.max_bytes_for_level_multiplier 10
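#
# Worked example (illustrative, using the two values configured above): the
# target total size of level 1 is 256MB, level 2 is 256MB * 10 = 2560MB,
# level 3 is 25600MB, and so on.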
# This feature only takes effect in Iterators and MultiGet. If yes, RocksDB will try to read asynchronously and in parallel as much as possible to hide IO latency. In iterators, it will prefetch data asynchronously in the background for each file being iterated on. In MultiGet, it will read the necessary data blocks from those files in parallel as much as possible.
# Default: no
rocksdb.read_options.async_io no
# If yes, the write will be flushed from the operating system buffer cache before the write is considered complete. If this flag is enabled, writes will be slower. If this flag is disabled, and the machine crashes, some recent writes may be lost. Note that if it is just the process that crashes (i.e., the machine does not reboot), no writes will be lost even if sync==false.
#
# Default: no
rocksdb.write_options.sync yes
# If yes, writes will not first go to the write-ahead log, and the write may be lost after a crash.
#
# Default: no
rocksdb.write_options.disable_wal no
# If enabled and a write request would need to wait or sleep, it fails immediately.
#
# Default: no
rocksdb.write_options.no_slowdown no
# If enabled, write requests are of lower priority if compaction is behind. In this case, if no_slowdown = true, the request will be canceled immediately; otherwise, it will be slowed down. The slowdown value is determined by RocksDB to guarantee it introduces minimal impact on high-priority writes.
#
# Default: no
rocksdb.write_options.low_pri no
# If enabled, this writebatch will maintain the last insert positions of each memtable as hints for concurrent writes. It can improve write performance in concurrent writes if the keys in one writebatch are sequential.
#
# Default: no
rocksdb.write_options.memtable_insert_hint_per_batch no
################################ NAMESPACE #####################################
# namespace.test change.me