Hi, here is the error I hit while testing. I was stress-testing hset against this name, and then ran hclear from another client. Other operations are fine; only hclear from the client raises this error. Could you take a look?

ssdb 127.0.0.1:8888> hclear service_test
Traceback (most recent call last):
  File "./tools/../deps/cpy/cpy.py", line 65, in <module>
  File "/usr/local/ssdb-stable/ssdb-master/tools/_cpy_/ssdb-cli.py", line 420, in <module>
    num = hclear(link, args[0])
  File "/usr/local/ssdb-stable/ssdb-master/tools/_cpy_/ssdb-cli.py", line 138, in hclear
    if ((ret - last_count)>=batch or verbose!=False and num<batch):
UnboundLocalError: local variable 'last_count' referenced before assignment
http://wapftp.cn/phpmanual/function.strnatcmp.html
Judging from this link, keys inside a hash are ordered by binary-safe string comparison. Could an API be added for setting a hash map's sort order? We are used to "natural order": 1, 2, 3, and so on.
work_dir = ./var
pidfile = ./var/ssdb.pid
server:
#ip: 127.0.0.1
port: 8888
# bind to public ip
ip: 0.0.0.0
# format: allow|deny: all|ip_prefix
# multiple allows or denys is supported
#deny: all
#allow: 127.0.0.1
#allow: 192.168
replication:
slaveof:
# sync|mirror, default is sync
type: mirror
ip: 10.9.20.61
port: 8888
logger:
level: info
output: log.txt
rotate:
size: 1000000000
leveldb:
# in MB
cache_size: 500
# in KB
block_size: 32
# in MB
write_buffer_size: 64
# in MB
compaction_speed: 1000
# yes|no
compression: yes
# second machine
# ssdb-server config
# MUST indent by TAB!
# relative to path of this file, directory must exist
work_dir = ./var
pidfile = ./var/ssdb.pid
server:
#ip: 127.0.0.1
port: 8888
# bind to public ip
ip: 0.0.0.0
# format: allow|deny: all|ip_prefix
# multiple allows or denys is supported
#deny: all
#allow: 127.0.0.1
#allow: 192.168
replication:
slaveof:
# sync|mirror, default is sync
type: mirror
ip: 10.9.20.62
port: 8888
logger:
level: info
output: log.txt
rotate:
size: 1000000000
leveldb:
# in MB
cache_size: 500
# in KB
block_size: 32
# in MB
write_buffer_size: 64
# in MB
compaction_speed: 1000
# yes|no
compression: yes
2013-12-17 16:32:30.820 [INFO ] ssdb-server.cpp(134): ssdb working, links: 0
2013-12-17 16:33:30.820 [INFO ] ssdb-server.cpp(134): ssdb working, links: 0
2013-12-17 16:34:30.821 [INFO ] ssdb-server.cpp(134): ssdb working, links: 0
2013-12-17 16:36:18.568 [INFO ] ssdb-server.cpp(371): ssdb-server 1.6.6
2013-12-17 16:36:18.569 [INFO ] ssdb-server.cpp(372): conf_file : ssdb.conf
2013-12-17 16:36:18.569 [INFO ] ssdb-server.cpp(373): work_dir : ./var
2013-12-17 16:36:18.569 [INFO ] ssdb-server.cpp(374): log_level : debug
2013-12-17 16:36:18.569 [INFO ] ssdb-server.cpp(375): log_output : log.txt
2013-12-17 16:36:18.569 [INFO ] ssdb-server.cpp(376): log_rotate_size : 1000000000
2013-12-17 16:36:18.569 [INFO ] ssdb.cpp(66): main_db : ./var/data
2013-12-17 16:36:18.569 [INFO ] ssdb.cpp(67): meta_db : ./var/meta
2013-12-17 16:36:18.569 [INFO ] ssdb.cpp(68): cache_size : 500 MB
2013-12-17 16:36:18.569 [INFO ] ssdb.cpp(69): block_size : 32 KB
2013-12-17 16:36:18.569 [INFO ] ssdb.cpp(70): write_buffer : 64 MB
2013-12-17 16:36:18.569 [INFO ] ssdb.cpp(71): compaction_speed : 1000 MB/s
2013-12-17 16:36:18.569 [INFO ] ssdb.cpp(72): compression : yes
2013-12-17 16:36:18.719 [DEBUG] binlog.cpp(130): capacity: 10000000, min: 1, max: 2,
2013-12-17 16:36:18.719 [INFO ] ssdb.cpp(128): slaveof: 10.9.20.62:8888, type: mirror
2013-12-17 16:36:18.719 [DEBUG] slave.cpp(31): last_seq: 1000018, last_key:
2013-12-17 16:36:18.719 [INFO ] ssdb-server.cpp(399): server listen on: 0.0.0.0:8888
2013-12-17 16:36:18.719 [INFO ] ssdb-server.cpp(439): pidfile: ./var/ssdb.pid, pid: 30257
2013-12-17 16:36:18.719 [INFO ] ssdb-server.cpp(425): ssdb server started.
2013-12-17 16:36:18.720 [INFO ] slave.cpp(99): [0] connecting to master at 10.9.20.62:8888…
2013-12-17 16:36:18.720 [DEBUG] serv.cpp(273): writer 0 init
2013-12-17 16:36:18.720 [DEBUG] serv.cpp(273): reader 0 init
2013-12-17 16:36:18.720 [DEBUG] serv.cpp(273): reader 1 init
2013-12-17 16:36:18.720 [DEBUG] serv.cpp(273): reader 2 init
2013-12-17 16:36:18.720 [DEBUG] serv.cpp(273): reader 3 init
2013-12-17 16:36:18.720 [DEBUG] serv.cpp(273): reader 4 init
2013-12-17 16:36:18.720 [DEBUG] serv.cpp(273): reader 5 init
2013-12-17 16:36:18.720 [DEBUG] serv.cpp(273): reader 6 init
2013-12-17 16:36:18.720 [DEBUG] serv.cpp(273): reader 7 init
2013-12-17 16:36:18.720 [DEBUG] serv.cpp(273): reader 8 init
2013-12-17 16:36:18.720 [DEBUG] serv.cpp(273): reader 9 init
2013-12-17 16:36:18.720 [INFO ] slave.cpp(118): ready to receive binlogs
2013-12-17 16:36:20.125 [DEBUG] ssdb-server.cpp(167): new link from 10.9.20.62:54450, fd: 20, link_count: 1
2013-12-17 16:36:20.125 [INFO ] backend_sync.cpp(35): fd: 20, accept sync client
2013-12-17 16:36:20.125 [INFO ] backend_sync.cpp(155): [mirror]fd: 20, sync, seq: 1000028, key: ''
2013-12-17 16:36:21.730 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:36:23.135 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:36:24.741 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:36:26.145 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:36:27.751 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:36:29.155 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:36:30.762 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:36:32.165 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:36:33.772 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:36:35.175 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
[root@rabbitmq-61 ssdb-master]#
[root@rabbitmq-61 ssdb-master]# ls
api build.sh docs Makefile src ssdb-server TODO var version
build_config.mk deps log.txt README.md ssdb.conf ssdb_slave.conf tools var_slave
[root@rabbitmq-61 ssdb-master]# cd var/data
[root@rabbitmq-61 data]# ls
000005.ldb 000006.log CURRENT LOCK LOG LOG.old MANIFEST-000004
[root@rabbitmq-61 data]# ll
total 20
-rw-r--r-- 1 root root 241 Dec 17 16:36 000005.ldb
-rw-r--r-- 1 root root 0 Dec 17 16:36 000006.log
-rw-r--r-- 1 root root 16 Dec 17 16:36 CURRENT
-rw-r--r-- 1 root root 0 Dec 17 09:28 LOCK
-rw-r--r-- 1 root root 309 Dec 17 16:36 LOG
-rw-r--r-- 1 root root 57 Dec 17 09:28 LOG.old
-rw-r--r-- 1 root root 65536 Dec 17 16:36 MANIFEST-000004
[root@rabbitmq-61 data]#
That was just the tail end of the log.
Also, here is the state of the other machine, the one that still has data:
[root@rabbitmq-62 data]# ls
000099.ldb 000115.ldb 000147.ldb 000190.ldb 000244.ldb 000312.ldb 000357.ldb 000388.ldb 000440.ldb 000478.ldb
000100.ldb 000116.ldb 000148.ldb 000199.ldb 000254.ldb 000315.ldb 000358.ldb 000389.ldb 000442.ldb 000485.ldb
000101.ldb 000117.ldb 000149.ldb 000201.ldb 000268.ldb 000317.ldb 000359.ldb 000400.ldb 000443.ldb 000486.ldb
000102.ldb 000118.ldb 000150.ldb 000202.ldb 000270.ldb 000319.ldb 000360.ldb 000409.ldb 000445.ldb 000487.ldb
000103.ldb 000119.ldb 000151.ldb 000204.ldb 000273.ldb 000321.ldb 000361.ldb 000410.ldb 000446.ldb 000488.ldb
000104.ldb 000120.ldb 000161.ldb 000214.ldb 000275.ldb 000323.ldb 000362.ldb 000412.ldb 000447.ldb 000489.ldb
000105.ldb 000121.ldb 000170.ldb 000228.ldb 000277.ldb 000333.ldb 000363.ldb 000413.ldb 000448.ldb 000490.ldb
000106.ldb 000122.ldb 000172.ldb 000231.ldb 000279.ldb 000347.ldb 000373.ldb 000414.ldb 000449.ldb 000491.ldb
000107.ldb 000123.ldb 000174.ldb 000233.ldb 000281.ldb 000349.ldb 000382.ldb 000415.ldb 000459.ldb 000492.ldb
000108.ldb 000133.ldb 000175.ldb 000235.ldb 000283.ldb 000350.ldb 000383.ldb 000416.ldb 000468.ldb CURRENT
000109.ldb 000142.ldb 000176.ldb 000237.ldb 000293.ldb 000352.ldb 000384.ldb 000417.ldb 000470.ldb LOCK
000110.ldb 000144.ldb 000177.ldb 000239.ldb 000307.ldb 000354.ldb 000385.ldb 000418.ldb 000472.ldb LOG
000111.ldb 000145.ldb 000178.ldb 000241.ldb 000308.ldb 000355.ldb 000386.ldb 000429.ldb 000474.ldb LOG.old
000113.ldb 000146.ldb 000179.ldb 000243.ldb 000310.ldb 000356.ldb 000387.ldb 000438.ldb 000476.log MANIFEST-000004
[root@rabbitmq-62 data]# pwd
/home/zhandulin/ssdb-master/var/data
[root@rabbitmq-62 data]#
Question: after restarting, how can the data be synced back to the node whose data was deleted?
1) Last time I took the code from the git master branch.
2) I just killed the process, deleted the data and meta directories, and restarted; it still doesn't work. Log:
2013-12-17 16:42:27.087 [INFO ] ssdb-server.cpp(371): ssdb-server 1.6.6
2013-12-17 16:42:27.087 [INFO ] ssdb-server.cpp(372): conf_file : ssdb.conf
2013-12-17 16:42:27.087 [INFO ] ssdb-server.cpp(373): work_dir : ./var
2013-12-17 16:42:27.087 [INFO ] ssdb-server.cpp(374): log_level : debug
2013-12-17 16:42:27.087 [INFO ] ssdb-server.cpp(375): log_output : log.txt
2013-12-17 16:42:27.087 [INFO ] ssdb-server.cpp(376): log_rotate_size : 1000000000
2013-12-17 16:42:27.087 [INFO ] ssdb.cpp(66): main_db : ./var/data
2013-12-17 16:42:27.088 [INFO ] ssdb.cpp(67): meta_db : ./var/meta
2013-12-17 16:42:27.088 [INFO ] ssdb.cpp(68): cache_size : 500 MB
2013-12-17 16:42:27.088 [INFO ] ssdb.cpp(69): block_size : 32 KB
2013-12-17 16:42:27.088 [INFO ] ssdb.cpp(70): write_buffer : 64 MB
2013-12-17 16:42:27.088 [INFO ] ssdb.cpp(71): compaction_speed : 1000 MB/s
2013-12-17 16:42:27.088 [INFO ] ssdb.cpp(72): compression : yes
2013-12-17 16:42:27.224 [DEBUG] binlog.cpp(130): capacity: 10000000, min: 0, max: 0,
2013-12-17 16:42:27.224 [INFO ] ssdb.cpp(128): slaveof: 10.9.20.62:8888, type: mirror
2013-12-17 16:42:27.224 [DEBUG] slave.cpp(31): last_seq: 0, last_key:
2013-12-17 16:42:27.224 [INFO ] ssdb-server.cpp(399): server listen on: 0.0.0.0:8888
2013-12-17 16:42:27.224 [INFO ] ssdb-server.cpp(439): pidfile: ./var/ssdb.pid, pid: 30671
2013-12-17 16:42:27.225 [INFO ] ssdb-server.cpp(425): ssdb server started.
2013-12-17 16:42:27.225 [INFO ] slave.cpp(99): [0] connecting to master at 10.9.20.62:8888…
2013-12-17 16:42:27.225 [DEBUG] serv.cpp(273): writer 0 init
2013-12-17 16:42:27.225 [DEBUG] serv.cpp(273): reader 0 init
2013-12-17 16:42:27.225 [DEBUG] serv.cpp(273): reader 1 init
2013-12-17 16:42:27.225 [DEBUG] serv.cpp(273): reader 2 init
2013-12-17 16:42:27.225 [DEBUG] serv.cpp(273): reader 3 init
2013-12-17 16:42:27.225 [DEBUG] serv.cpp(273): reader 4 init
2013-12-17 16:42:27.225 [DEBUG] serv.cpp(273): reader 5 init
2013-12-17 16:42:27.225 [DEBUG] serv.cpp(273): reader 6 init
2013-12-17 16:42:27.225 [DEBUG] serv.cpp(273): reader 7 init
2013-12-17 16:42:27.225 [DEBUG] serv.cpp(273): reader 8 init
2013-12-17 16:42:27.225 [DEBUG] serv.cpp(273): reader 9 init
2013-12-17 16:42:27.226 [INFO ] slave.cpp(118): ready to receive binlogs
2013-12-17 16:42:30.152 [DEBUG] ssdb-server.cpp(167): new link from 10.9.20.62:52139, fd: 20, link_count: 1
2013-12-17 16:42:30.152 [INFO ] backend_sync.cpp(35): fd: 20, accept sync client
2013-12-17 16:42:30.153 [INFO ] backend_sync.cpp(155): [mirror]fd: 20, sync, seq: 1000028, key: ''
2013-12-17 16:42:30.245 [DEBUG] slave.cpp(245): noop last_seq: 0, seq: 1000018
2013-12-17 16:42:33.163 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:42:33.259 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:42:36.173 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:42:36.276 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:42:39.183 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:42:39.293 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:42:42.193 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:42:42.305 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:42:45.203 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:42:45.326 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:42:48.213 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:42:48.340 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:42:51.223 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:42:51.357 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:42:54.233 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:42:54.377 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:42:57.243 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:42:57.392 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:43:00.253 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:43:00.410 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:43:03.263 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:43:03.429 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:43:06.273 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:43:06.444 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:43:09.283 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:43:09.462 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
2013-12-17 16:43:12.293 [DEBUG] backend_sync.cpp(182): fd: 20, 1000028 noop none
2013-12-17 16:43:12.480 [DEBUG] slave.cpp(245): noop last_seq: 1000018, seq: 1000018
OK, I'll give that a try.
ssdb 127.0.0.1:8888> hclear service_test
Traceback (most recent call last):
  File "./tools/../deps/cpy/cpy.py", line 65, in <module>
  File "/usr/local/ssdb-stable/ssdb-master/tools/_cpy_/ssdb-cli.py", line 420, in <module>
    num = hclear(link, args[0])
  File "/usr/local/ssdb-stable/ssdb-master/tools/_cpy_/ssdb-cli.py", line 138, in hclear
    if ((ret - last_count)>=batch or verbose!=False and num<batch):
UnboundLocalError: local variable 'last_count' referenced before assignment
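The UnboundLocalError in the traceback above is a generic Python pitfall, independent of ssdb: a local variable that is only assigned on some code path is read on a path where the assignment never ran. A minimal sketch of the bug class and the usual fix (the function names are hypothetical, not the actual ssdb-cli source):

```python
# Broken: last_count is bound only on one branch, so reading it on the
# other branch raises UnboundLocalError.
def hclear_broken(first_pass):
    if not first_pass:
        last_count = 0          # only bound on this branch
    return last_count           # UnboundLocalError when first_pass is True

# Typical fix: bind the variable before any conditional logic touches it.
def hclear_fixed(first_pass):
    last_count = 0              # always bound
    if not first_pass:
        last_count = 10
    return last_count
```

The same pattern explains the report: `last_count` in the `hclear` helper is presumably assigned inside a loop or branch that did not execute for this input before the comparison ran.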
Actual memory usage has already reached 1.5 GB, which exceeds the configured 500 MB. What is causing this?
leveldb:
# in MB
cache_size: 500
# in KB
block_size: 32
# in MB
write_buffer_size: 64
# in MB
compaction_speed: 200
# yes|no
compression: yes
I ran a test with cache_size: 500 and write_buffer_size: 64.
Using sequential keys (1 to 200,000), I wrote 200K records of 5 KB each.
Then I used abtest to read those 200K keys at random with concurrency 100.
Data kept streaming from disk into memory, and after two full read passes memory reached 1.7 GB.
But after waiting a whole day with no reads or writes, memory stayed at 1.7 GB. Under what circumstances does the memory get released?
4325 root 15 0 2231m 1.7g 1.7g S 0.0 22.5 1:06.15 /usr/local/ssdb/ssdb-server -d /usr/local/ssdb/ssdb.conf
Running compact takes far too long; that makes upgrading versions on a live system quite a hassle.
Is there a workaround?
Ugh... with 100 GB of data, startup has already taken almost an hour and the server still isn't up.
There was no output beyond printing the config.
ssdb-cli couldn't connect for nearly an hour before it recovered.
It was probably reorganizing data at startup, but that takes much too long.
You seem to be offline; I can't send you messages.
My account is whl880920@gmail.com
There is a lot of log data every hour, about 5 GB; after processing, the results are stored into ssdb.
Everything is stored as zsets.
The data looks roughly like this:
date id op_count (about 2,000,000 entries).
id sub_id op_count (a few hundred entries).
The config is all defaults, never modified.
leveldb:
# in MB
cache_size: 500
# in KB
block_size: 32
# in MB
write_buffer_size: 64
# in MB
compaction_speed: 200
# yes|no
compression: no
Yes, lots of updates and very few reads.
Logs are processed every hour and the results are stored.
1. After hset-ing a batch of data, the data directory is about 1 GB. Then when I get that data, ssdb's memory usage grows and stays around 1.1 GB for a long time without coming down. Any idea why?
2. Could ssdb add list operations like redis? Sometimes push is more convenient.
3. I deleted nearly 2 GB of data on the master. The slave can no longer fetch that data, but the slave's data directory hasn't shrunk; only a restart frees the space. Is there another way?
Thanks for the answers!
1. If you access data frequently, ssdb caches some of it in memory, which uses RAM. You can lower cache_size.
2. The list feature is under consideration.
3. ssdb does not necessarily purge deleted data immediately; it cleans up later, during subsequent operations, when it judges the time is right. If this matters to you, connect with the ssdb-cli command-line client and run the compact command to purge deleted keys.
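The compact step suggested above is run from the bundled command-line client. A sketch of such a session (the exact invocation may vary by version; check tools/ssdb-cli in your checkout):

```
$ ./tools/ssdb-cli 127.0.0.1 8888
ssdb 127.0.0.1:8888> compact
```

After compact returns, deleted keys have been purged and LevelDB has rewritten its .ldb files, so the on-disk size of the data directory should drop.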
leveldb:
# in MB
cache_size: 500
# in KB
block_size: 32
# in MB
write_buffer_size: 64
# in MB
compaction_speed: 200
# yes|no
compression: yes
This is my config, and it has always been. It really does exceed the 500 MB setting and reaches 1 GB. Is reclamation just not happening in time, so the memory can't be freed?
Using the golang api to query a key whose value is 1, the result that comes back is:
[ok 1
]
[not_found]
Shouldn't the normal result just be [ok 1]?
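For reference, ssdb returns every reply as a list of strings whose first element is a status code ("ok", "not_found", an error, ...) followed by the values, so seeing both an [ok 1] block and a [not_found] block may mean two separate replies were printed, e.g. for two separate requests. A minimal Python sketch of how such a reply list is normally interpreted (parse_reply is a hypothetical helper for illustration, not part of any official client):

```python
def parse_reply(resp):
    """Interpret an SSDB reply: resp[0] is the status, the rest are values."""
    if not resp:
        raise ValueError("empty reply")
    status, values = resp[0], resp[1:]
    if status == "ok":
        return values          # e.g. ["1"] for a get that found the key
    if status == "not_found":
        return None            # key does not exist
    raise RuntimeError("server returned error status: " + status)
```

With this interpretation, a successful get of your key would yield ["1"], and a lookup of a missing key would yield None rather than a value.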
Master-slave replication works; how do I set up multi-master replication?
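The two configs quoted earlier in this thread are in fact such a multi-master pair: each node names the other as its slaveof peer with type: mirror, so writes on either node replicate to the other. A sketch of the replication block for one node (the peer uses the same block with ip pointing back at this node; remember the config MUST be indented with tabs):

```
replication:
	slaveof:
		# sync|mirror, default is sync
		type: mirror
		# address of the other master
		ip: 10.9.20.62
		port: 8888
```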