I currently have a big-data workload that needs SSDB:
50 million k/v pairs
key: 20 bytes
value: >=2K
Few deletes; writes, updates, and reads dominate. Should I use kv or hashmap?
What is a suitable cache_size in the config? Reply
I'm very interested in this project. I have two small feature requests; please evaluate whether they can be added to SSDB.
1) Add namespace support, so data can be stored per database: different business domains would live in different databases (or something similar) within one SSDB instance, like redis's select command.
2) Allow SSDB's configuration to be read and updated via commands, like redis's server-related commands.
Awaiting your reply. Reply
I want to use SSDB as:
1) one big storage pool, with a proxy module between SSDB and the applications, to support dynamic management of SSDB nodes;
2) a store for the data of many modules in our system. More than 30 modules are involved, so starting 30 ports is impractical (hard to manage).
Right now I use redis, and each module's data is distinguished by a key prefix, typically named XXX_key.
This makes the data very hard to organize.
I suggest allowing a "database" to be specified when the connection is established. Reply
Do you use only the KV type (get, set, del)? If so, you can replace it with a map, using XXX as the map's name. Reply
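As a sketch of that mapping (the underscore convention comes from the question above; the helper itself is illustrative, not part of SSDB's API): a prefixed key such as user_1001 splits into a hashmap name and a field, so a plain set becomes an hset on that map, and hsize on the map name then counts that module's entries.

```cpp
#include <cassert>
#include <string>
#include <utility>

// Illustrative helper (not part of the SSDB client): split a prefixed key
// like "user_1001" at the first '_' into (map name, field), so that
// set("user_1001", v) can become hset("user", "1001", v).
std::pair<std::string, std::string> split_key(const std::string &prefixed) {
    size_t pos = prefixed.find('_');
    if (pos == std::string::npos) {
        return {"", prefixed};  // no prefix: fall back to plain KV
    }
    return {prefixed.substr(0, pos), prefixed.substr(pos + 1)};
}
```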
When I ran ssdb-repair to fix it, the repair program crashed too.
#0 0x000000000044aa2c in leveldb::Block::Iter::SeekToFirst() ()
#1 0x0000000000445f58 in leveldb::(anonymous namespace)::TwoLevelIterator::SeekToFirst() ()
#2 0x0000000000445e8a in leveldb::(anonymous namespace)::TwoLevelIterator::SkipEmptyDataBlocksForward() ()
#3 0x0000000000442fe0 in leveldb::(anonymous namespace)::MergingIterator::Next() ()
#4 0x000000000042f242 in leveldb::DBImpl::DoCompactionWork(leveldb::DBImpl::CompactionState*) ()
#5 0x000000000042fa00 in leveldb::DBImpl::BackgroundCompaction() ()
#6 0x00000000004304fb in leveldb::DBImpl::BackgroundCall() ()
#7 0x000000000044c86f in leveldb::(anonymous namespace)::PosixEnv::BGThreadWrapper(void*) ()
#8 0x00007f4cf43e4e9a in start_thread (arg=0x7f4cf37ff700) at pthread_create.c:308
#9 0x00007f4cf4111ccd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#10 0x0000000000000000 in ?? () Reply
This problem is most likely a disk failure corrupting the leveldb data files.
A badblocks scan found bad sectors:
Pass completed, 81 bad blocks found. (81/0/0 errors) Reply
What is causing this? Reply
1. What is your environment?
2. Which SSDB version?
3. How does your program use it?
4. What does log.txt show around the time the connection fails? Reply
What happened:
After loading the data I ran a zscan query; because there was a lot of data it blocked, so I killed ssdb. When I restarted the server, the behavior above appeared.
log.txt shows only the normal output after server startup.
That machine runs a lot of other things and is quite slow.
ssdb is 1.6.3, with compression enabled.
From the symptoms, after startup the server is doing some work that allocates memory, and because the machine is heavily loaded this is slow.
I repeated this several times; clients could connect only after the server's memory had grown to a certain level. Reply
Yes, the later restarts were all much faster. Reply
A core file was generated; the backtraces are:
(1):
(gdb) bt
#0 0x000000000044c6bf in leveldb::ReadBlock(leveldb::RandomAccessFile*, leveldb::ReadOptions const&, leveldb::BlockHandle const&, leveldb::BlockContents*) ()
#1 0x00000000004450fc in leveldb::Table::BlockReader(void*, leveldb::ReadOptions const&, leveldb::Slice const&) ()
#2 0x00000000004460d0 in leveldb::(anonymous namespace)::TwoLevelIterator::InitDataBlock() ()
#3 0x00000000004462fb in leveldb::(anonymous namespace)::TwoLevelIterator::SkipEmptyDataBlocksForward() ()
#4 0x000000000044635e in leveldb::(anonymous namespace)::TwoLevelIterator::Next() ()
#5 0x0000000000443460 in leveldb::(anonymous namespace)::MergingIterator::Next() ()
#6 0x000000000042f6c2 in leveldb::DBImpl::DoCompactionWork(leveldb::DBImpl::CompactionState*) ()
#7 0x000000000042fe80 in leveldb::DBImpl::BackgroundCompaction() ()
#8 0x000000000043097b in leveldb::DBImpl::BackgroundCall() ()
#9 0x000000000044ccef in leveldb::(anonymous namespace)::PosixEnv::BGThreadWrapper(void*) ()
#10 0x00007f8453145e9a in start_thread (arg=0x7f843a645700) at pthread_create.c:308
#11 0x00007f8452e72ccd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#12 0x0000000000000000 in ?? ()
(2):
#0 __memcpy_ssse3_back () at ../sysdeps/x86_64/multiarch/memcpy-ssse3-back.S:2065
#1 0x00007f95e29e0bc8 in std::string::append(char const*, unsigned long) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#2 0x000000000044a0bb in leveldb::BlockBuilder::Add(leveldb::Slice const&, leveldb::Slice const&) ()
#3 0x0000000000444579 in leveldb::TableBuilder::Add(leveldb::Slice const&, leveldb::Slice const&) ()
#4 0x000000000042f6a2 in leveldb::DBImpl::DoCompactionWork(leveldb::DBImpl::CompactionState*) ()
#5 0x000000000042fe80 in leveldb::DBImpl::BackgroundCompaction() ()
#6 0x000000000043097b in leveldb::DBImpl::BackgroundCall() ()
#7 0x000000000044ccef in leveldb::(anonymous namespace)::PosixEnv::BGThreadWrapper(void*) ()
#8 0x00007f95e2219e9a in start_thread (arg=0x7f95c6ac8700) at pthread_create.c:308
#9 0x00007f95e1f46ccd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#10 0x0000000000000000 in ?? ()
(3):
#0 0x000000000045157d in UnalignedCopy64 (dst=0x7f8ebced8304, src=0x7f8e9c3efffd) at snappy-stubs-internal.h:195
#1 TryFastAppend (len=1, available=7147, ip=0x7f8e9c3efff5 <Address 0x7f8e9c3efff5 out of bounds>, this=<optimized out>) at snappy.cc:1000
#2 DecompressAllTags<snappy::SnappyArrayWriter> (writer=<synthetic pointer>, this=0x7f8ed15ce630) at snappy.cc:730
#3 InternalUncompressAllTags<snappy::SnappyArrayWriter> (decompressor=0x7f8ed15ce630, max_len=4294967295, uncompressed_len=<optimized out>, writer=<synthetic pointer>) at snappy.cc:866
#4 InternalUncompress<snappy::SnappyArrayWriter> (writer=<synthetic pointer>, r=<optimized out>, max_len=<optimized out>) at snappy.cc:850
#5 snappy::RawUncompress (compressed=<optimized out>, uncompressed=0x7f8ebced6000 "") at snappy.cc:1042
#6 0x0000000000451702 in snappy::RawUncompress (compressed=<optimized out>, n=<optimized out>, uncompressed=<optimized out>) at snappy.cc:1037
#7 0x000000000044c7ee in leveldb::ReadBlock(leveldb::RandomAccessFile*, leveldb::ReadOptions const&, leveldb::BlockHandle const&, leveldb::BlockContents*) ()
#8 0x00000000004450fc in leveldb::Table::BlockReader(void*, leveldb::ReadOptions const&, leveldb::Slice const&) ()
#9 0x00000000004460d0 in leveldb::(anonymous namespace)::TwoLevelIterator::InitDataBlock() ()
#10 0x00000000004462fb in leveldb::(anonymous namespace)::TwoLevelIterator::SkipEmptyDataBlocksForward() ()
#11 0x000000000044635e in leveldb::(anonymous namespace)::TwoLevelIterator::Next() ()
#12 0x0000000000443460 in leveldb::(anonymous namespace)::MergingIterator::Next() ()
#13 0x000000000042f6c2 in leveldb::DBImpl::DoCompactionWork(leveldb::DBImpl::CompactionState*) ()
#14 0x000000000042fe80 in leveldb::DBImpl::BackgroundCompaction() ()
#15 0x000000000043097b in leveldb::DBImpl::BackgroundCall() ()
#16 0x000000000044ccef in leveldb::(anonymous namespace)::PosixEnv::BGThreadWrapper(void*) ()
#17 0x00007f8fb9c63e9a in start_thread (arg=0x7f8ed15cf700) at pthread_create.c:308
#18 0x00007f8fb9990ccd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#19 0x0000000000000000 in ?? ()
Also: compression is enabled. Reply
This is a log-analysis program: about 30GB of logs, stored into ssdb after analysis.
That comes to about 4 million zsets. Reply
Each K/V is about 5KB.
At that scale, how much memory and disk will it use?
One more question: how do I check how many keys there are? redis has dbsize; what command does SSDB use? Reply
30M x (5K + 20 + 10) ~= 150GB of disk,
assuming each key is 20 bytes and SSDB adds about 10 bytes of overhead per entry. With the default configuration, memory usage will stabilize at around 1GB.
SSDB has no single command that reports how many keys the database holds. If you need that count, store all the KV pairs in one hashmap and use hsize to get the number of KV pairs in that map. Reply
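The estimate works out as follows (a sketch: the 10-byte overhead figure is the one assumed in the reply above, and 5K is read as 5 x 1024 bytes):

```cpp
#include <cassert>
#include <cstdint>

// Disk estimate: n entries, each costing value + key + per-entry overhead.
int64_t disk_bytes(int64_t n, int64_t value, int64_t key, int64_t overhead) {
    return n * (value + key + overhead);
}
```

With n = 30,000,000, value = 5 * 1024, key = 20, overhead = 10, this gives 154,500,000,000 bytes, i.e. roughly the 150GB quoted.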
Running 4 processes in parallel took about 4 minutes. Is there any way to speed this up?
PS: the question below about many send/recv calls versus one send of all the data was an attempt to speed things up, and it ran into trouble. Reply
request("zset", "h", "a", "1");
request("zset", "h", "b", "2");
request("zset", "h", "c", "3");
can be replaced with
std::vector<std::string> req;
req.push_back("multi_zset");
req.push_back("h");
req.push_back("a");
req.push_back("1");
req.push_back("b");
req.push_back("2");
req.push_back("c");
req.push_back("3");
request(req);
Keep each batch within 1000 elements. Reply
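The batching above can be generalized. A sketch (build_batches is a hypothetical helper, not part of the SSDB client) that splits (member, score) pairs into multi_zset argument lists of at most 1000 pairs each, ready to pass to request(req):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Hypothetical helper: build multi_zset argument lists, each carrying at
// most `batch` (member, score) pairs.
std::vector<std::vector<std::string>> build_batches(
        const std::string &zname,
        const std::vector<std::pair<std::string, std::string>> &pairs,
        size_t batch = 1000) {
    std::vector<std::vector<std::string>> reqs;
    for (size_t i = 0; i < pairs.size(); i += batch) {
        std::vector<std::string> req;
        req.push_back("multi_zset");
        req.push_back(zname);
        for (size_t j = i; j < i + batch && j < pairs.size(); j++) {
            req.push_back(pairs[j].first);   // member
            req.push_back(pairs[j].second);  // score
        }
        reqs.push_back(req);
    }
    return reqs;
}
```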
Using SSDB's C++ interface, I send many commands and then receive all the responses at the end, roughly:
link->send();
... // many more sends
link->flush();
link->recv();
The program hangs.
It is blocked in write:
[<ffffffff81537ec9>] sk_stream_wait_memory+0x1b9/0x270
[<ffffffff81583537>] tcp_sendmsg+0x727/0xd80
[<ffffffff815a92b4>] inet_sendmsg+0x64/0xb0
[<ffffffff81529224>] do_sock_write.isra.10+0xd4/0xf0
[<ffffffff815292a3>] sock_aio_write+0x63/0x90
[<ffffffff8117723a>] do_sync_write+0xda/0x120
[<ffffffff81177bfd>] vfs_write+0x16d/0x180
[<ffffffff81177e6a>] sys_write+0x4a/0x90
[<ffffffff81661ec2>] system_call_fastpath+0x16/0x1b
[<ffffffffffffffff>] 0xffffffffffffffff
What is going on? Reply
The reason it hangs is this: by default a TCP socket's send and receive buffers are each around 8k. If the pending responses exceed 16k (server send buffer plus client receive buffer), the server's send buffer and the client's receive buffer both fill up, so the server stops reading new requests. That eventually fills the client's send buffer as well, and send blocks completely. Reply
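One way to keep pipelining while avoiding this deadlock is to cap the number of unread responses: drain every `window` commands instead of only after all sends. A sketch, with a stub link standing in for the real client (the send/flush/recv shape mirrors the snippet in the question; the stub itself is illustrative):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Stub standing in for the real ssdb link, just to exercise the pattern.
struct FakeLink {
    int sent = 0, received = 0;
    void send(const std::string &) { sent++; }
    void flush() {}
    void recv() { received++; }
};

// Pipeline commands but read the responses every `window` sends, so the
// total unread response data stays bounded and the socket buffers on both
// sides can never fill up at the same time.
void pipelined(FakeLink &link, const std::vector<std::string> &cmds,
               int window = 100) {
    int pending = 0;
    for (const std::string &cmd : cmds) {
        link.send(cmd);
        if (++pending == window) {
            link.flush();
            for (; pending > 0; pending--) link.recv();
        }
    }
    link.flush();
    for (; pending > 0; pending--) link.recv();  // drain the remainder
}
```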
Also, the Go client is nearly finished, and batch-mode support has been added.
https://github.com/whl739/ssdb-go