ZFS 12*SATA JBOD vs MSA 2312FC 24*SAS

Introduction:
Today I put two hosts head to head to compare ZFS against SAN storage performance.

ZFS host
Lenovo Reno/Raleigh
8-core Intel(R) Xeon(R) CPU E5-2407 0 @ 2.20GHz
24GB RAM
12 * 2TB SATA disks: 2 in RAID1, the other 10 in a zpool (9-disk raidz + 1 hot spare, plus a partition on the RAID1 volume as the log device)
Notable filesystem settings: atime=off, compression=lz4 (compression ratio about 3.16)
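Based on the zpool status output later in this post (pool zp1: sda through sdi as raidz, sdj as spare, the sdk4 partition as log), the pool could have been built roughly like this; a hedged sketch, not the exact commands that were run:
# zpool create zp1 raidz sda sdb sdc sdd sde sdf sdg sdh sdi spare sdj log /dev/sdk4   # 9-disk raidz + hot spare + log
# zfs set atime=off zp1
# zfs set compression=lz4 zp1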

SAN host
DELL R610
16-core Intel(R) Xeon(R) CPU E5630 @ 2.53GHz
32GB RAM
Storage: two MSA2312FC arrays, each with 12 * 300GB SAS disks: 10 in RAID 10 and 2 as hot spares.

Configuration of one of the arrays:
Controllers
-----------
Controller ID: A
Serial Number: 3CL947R707
Hardware Version: 56
CPLD Version: 8
Disks: 12
Vdisks: 1
Cache Memory Size (MB): 1024
Host Ports: 2
Disk Channels: 2
Disk Bus Type: SAS
Status: Running
Failed Over: No
Fail Over Reason: Not applicable

# show disks
Location Serial Number         Vendor   Rev  How Used   Type   Size      Rate(Gb/s)  SP Status
-----------------------------------------------------------------------------------------------
1.1      3QP2EN7V00009006CJT4  SEAGATE  0004 VDISK VRSC SAS    300.0GB   3.0            OK
1.2      3QP232CM00009952PTMA  SEAGATE  0004 VDISK VRSC SAS    300.0GB   3.0            OK
1.3      3QP2GKLZ00009008V1VA  SEAGATE  0004 VDISK VRSC SAS    300.0GB   3.0            OK
1.4      3QP2G2LL00009008WAYU  SEAGATE  0004 VDISK VRSC SAS    300.0GB   3.0            OK
1.5      3QP2EN0700009007DAPA  SEAGATE  0004 VDISK VRSC SAS    300.0GB   3.0            OK
1.6      3QP2G6AE00009008V39E  SEAGATE  0004 VDISK VRSC SAS    300.0GB   3.0            OK
1.7      6SJ4ZX1S0000N239DF6Q  SEAGATE  0008 GLOBAL SP  SAS    300.0GB   3.0            OK
1.8      6SJ4ZTLT0000N239FM3P  SEAGATE  0008 VDISK VRSC SAS    300.0GB   3.0            OK
1.9      6SJ4ZY6H0000N2407XKP  SEAGATE  0008 VDISK VRSC SAS    300.0GB   3.0            OK
1.10     3QP2FVR100009008Z16H  SEAGATE  0004 VDISK VRSC SAS    300.0GB   3.0            OK
1.11     3QP2DZEX00009008WBN9  SEAGATE  0004 VDISK VRSC SAS    300.0GB   3.0            OK
1.12     3QP2CXWS00009008WBLN  SEAGATE  0004 GLOBAL SP  SAS    300.0GB   3.0            OK
-----------------------------------------------------------------------------------------------

Name Size     Free    Own Pref   RAID   Disks Spr Chk  Status Jobs      Serial Number
---------------------------------------------------------------------------------------------------------
vd01 1498.4GB 100.4GB A   A      RAID10 10    0   16k  FTOL   VRSC 66%  00c0ffda61090000144de15100000000

# show cache
System Cache Parameters
-----------------------
Operation Mode: Active-Active ULP

  Controller A Cache Parameters
  -----------------------------
  Write Back Status: Enabled
  CompactFlash Status: Installed
  Cache Flush: Enabled

  Controller B Cache Parameters
  -----------------------------
  Write Back Status: Enabled
  CompactFlash Status: Installed
  Cache Flush: Enabled

Filesystem on the SAN host
[root@db- ~]# lvs
  LV   VG       Attr   LSize    Origin Snap%  Move Log Copy%  Convert
  lv01 vgdata01 -wi-ao  300.00G                                      
  lv02 vgdata01 -wi-ao  100.00G                                      
  lv03 vgdata01 -wi-ao    1.17T                                      
  lv04 vgdata01 -wi-ao 1001.99G                                      
[root@db- ~]# pvs
  PV                            VG       Fmt  Attr PSize PFree
  /dev/mpath/d09_msa1_vd01vol01 vgdata01 lvm2 a--  1.27T    0 
  /dev/mpath/d09_msa2_vd01vol01 vgdata01 lvm2 a--  1.27T    0 
Mounted as ext4 with noatime and nodiratime.
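For reference, a hedged sketch of the corresponding /etc/fstab entry (the mount point is an assumption; only the LV names and mount options come from the output above):
/dev/vgdata01/lv01   /data01   ext4   noatime,nodiratime   0 0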

The test workload is PostgreSQL 9.2.8.
Only read speed was tested at first, because the ZFS host is a streaming-replication standby (the database configuration is identical on both hosts).
COUNT query on the 18GB table on ZFS:
digoal=> select count(*) from tbl;
  count   
----------
 48391818
(1 row)
Time: 9998.065 ms

The same query on the SAN host:
digoal=> select count(*) from tbl;
  count   
----------
 48391818
(1 row)
Time: 64707.770 ms

On ZFS, the LZ4 compression algorithm shrank this test data by roughly a factor of 2.5.
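The data-file path below was presumably obtained with pg_relation_filepath(); a hedged sketch, reusing the table name from the count query above (the database name is assumed to match the prompt):
$ psql digoal -c "select pg_relation_filepath('tbl');"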
pg_relation_filepath               
-------------------------------------------------
 pg_tblspc/16384/PG_9.2_201204301/70815/10088356

> ll -h pg_tblspc/16384/PG_9.2_201204301/70815/10088356*
-rw------- 1 postgres postgres 1.0G Jun 19 00:59 pg_tblspc/16384/PG_9.2_201204301/70815/10088356
-rw------- 1 postgres postgres 1.0G Jun 19 04:24 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.1
-rw------- 1 postgres postgres 1.0G Jun 19 01:21 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.10
-rw------- 1 postgres postgres 1.0G Jun 19 05:15 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.11
-rw------- 1 postgres postgres 1.0G Jun 19 04:52 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.12
-rw------- 1 postgres postgres 1.0G Jun 19 01:50 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.13
-rw------- 1 postgres postgres 1.0G Jun 19 04:22 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.14
-rw------- 1 postgres postgres 1.0G Jun 19 03:32 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.15
-rw------- 1 postgres postgres 1.0G Jun 19 02:05 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.16
-rw------- 1 postgres postgres 575M Jun 19 04:26 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.17
-rw------- 1 postgres postgres 1.0G Jun 19 04:30 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.2
-rw------- 1 postgres postgres 1.0G Jun 19 01:27 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.3
-rw------- 1 postgres postgres 1.0G Jun 19 03:24 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.4
-rw------- 1 postgres postgres 1.0G Jun 19 00:52 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.5
-rw------- 1 postgres postgres 1.0G Jun 19 03:39 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.6
-rw------- 1 postgres postgres 1.0G Jun 19 04:53 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.7
-rw------- 1 postgres postgres 1.0G Jun 19 00:49 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.8
-rw------- 1 postgres postgres 1.0G Jun 19 05:14 pg_tblspc/16384/PG_9.2_201204301/70815/10088356.9
-rw------- 1 postgres postgres 4.5M Jun 19 03:38 pg_tblspc/16384/PG_9.2_201204301/70815/10088356_fsm
-rw------- 1 postgres postgres 288K Jun 19 05:12 pg_tblspc/16384/PG_9.2_201204301/70815/10088356_vm
du -sh pg_tblspc/16384/PG_9.2_201204301/70815/10088356*
415M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356
405M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.1
427M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.10
428M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.11
425M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.12
425M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.13
427M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.14
427M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.15
428M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.16
237M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.17
403M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.2
413M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.3
427M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.4
432M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.5
423M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.6
425M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.7
433M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.8
428M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356.9
3.5M    pg_tblspc/16384/PG_9.2_201204301/70815/10088356_fsm
36K     pg_tblspc/16384/PG_9.2_201204301/70815/10088356_vm
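The same saving can also be read straight from the ZFS properties; a hedged sketch (whether the tablespace lives on zp1 itself or on a child dataset is an assumption):
# zfs get compression,compressratio,used zp1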

Supplementary write-speed test
Test model:
postgres=# create table test (id int primary key, info text, crt_time timestamp);
CREATE TABLE
postgres=# create or replace function f(v_id int) returns void as 
$$
declare
begin
  update test set info=md5(now()::text),crt_time=now() where id=v_id;
  if not found then
    insert into test values (v_id, md5(now()::text), now());         
  end if;
  return;
  exception when others then
    return;
end;
$$ language plpgsql strict;
CREATE FUNCTION

$ vi test.sql
\setrandom vid 1 5000000
select f(:vid);
Test results
ZFS result:
pgbench -M prepared -n -r -f ./test.sql -c 8 -j 4 -T 30
transaction type: Custom query
scaling factor: 1
query mode: prepared
number of clients: 8
number of threads: 4
duration: 30 s
number of transactions actually processed: 1529642
tps = 50987.733547 (including connections establishing)
tps = 50998.421896 (excluding connections establishing)
statement latencies in milliseconds:
        0.002064        \setrandom vid 1 5000000
        0.153280        select f(:vid);
postgres=# select count(*) from test;
  count  
---------
 1317641
(1 row)
SAN host result:
pgbench -M prepared -n -r -f ./test.sql -c 8 -j 4 -T 30
transaction type: Custom query
scaling factor: 1
query mode: prepared
number of clients: 8
number of threads: 4
duration: 30 s
number of transactions actually processed: 717486
tps = 23915.516813 (including connections establishing)
tps = 23921.744263 (excluding connections establishing)
statement latencies in milliseconds:
        0.003088        \setrandom vid 1 5000000
        0.328250        select f(:vid);
postgres=# select count(*) from test;
 count  
--------
 668395
(1 row)

[Other notes]
1. pg_test_fsync results with and without a slog.
With slog:
pg_test_fsync
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                                n/a*
        fdatasync                         303.897 ops/sec    3291 usecs/op
        fsync                             329.612 ops/sec    3034 usecs/op
        fsync_writethrough                            n/a
        open_sync                                    n/a*
* This file system and its mount options do not support direct
I/O, e.g. ext4 in journaled mode.

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                                n/a*
        fdatasync                         328.331 ops/sec    3046 usecs/op
        fsync                             326.671 ops/sec    3061 usecs/op
        fsync_writethrough                            n/a
        open_sync                                    n/a*
* This file system and its mount options do not support direct
I/O, e.g. ext4 in journaled mode.

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
         1 * 16kB open_sync write                    n/a*
         2 *  8kB open_sync writes                   n/a*
         4 *  4kB open_sync writes                   n/a*
         8 *  2kB open_sync writes                   n/a*
        16 *  1kB open_sync writes                   n/a*

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
        write, fsync, close               324.818 ops/sec    3079 usecs/op
        write, close, fsync               325.872 ops/sec    3069 usecs/op

Non-Sync'ed 8kB writes:
        write                           78023.363 ops/sec      13 usecs/op

Without slog:
pg_test_fsync 
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                                n/a*
        fdatasync                         325.150 ops/sec    3076 usecs/op
        fsync                             320.737 ops/sec    3118 usecs/op
        fsync_writethrough                            n/a
        open_sync                                    n/a*
* This file system and its mount options do not support direct
I/O, e.g. ext4 in journaled mode.

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                                n/a*
        fdatasync                         313.791 ops/sec    3187 usecs/op
        fsync                             313.884 ops/sec    3186 usecs/op
        fsync_writethrough                            n/a
        open_sync                                    n/a*
* This file system and its mount options do not support direct
I/O, e.g. ext4 in journaled mode.

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
         1 * 16kB open_sync write                    n/a*
         2 *  8kB open_sync writes                   n/a*
         4 *  4kB open_sync writes                   n/a*
         8 *  2kB open_sync writes                   n/a*
        16 *  1kB open_sync writes                   n/a*

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
        write, fsync, close               328.620 ops/sec    3043 usecs/op
        write, close, fsync               328.271 ops/sec    3046 usecs/op

Non-Sync'ed 8kB writes:
        write                           71741.498 ops/sec      14 usecs/op
iostat shows that with a slog, all of the pg_test_fsync load lands on the slog block device; without one, the load falls on the vdev block devices (a raidz here, so it is spread across all of the disks).
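A hedged sketch of how to observe this (the 1-second interval and the -x/-k flags are just one common iostat invocation):
pg_test_fsync                # run the fsync test in one session
iostat -x -k 1               # in another session, watch per-device write load: with the slog it hits sdk (the disk holding the sdk4 log partition), without it sda through sdi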
If the slog were an SSD, pg_test_fsync would perform much better. For example, using /dev/shm to simulate an SSD:
# cd /dev/shm
# dd if=/dev/zero of=./test.img bs=1k count=2048000
# zpool add zp1 log /dev/shm/test.img 
# zpool status
  pool: zp1
 state: ONLINE
  scan: none requested
config:

        NAME                 STATE     READ WRITE CKSUM
        zp1                  ONLINE       0     0     0
          raidz1-0           ONLINE       0     0     0
            sda              ONLINE       0     0     0
            sdb              ONLINE       0     0     0
            sdc              ONLINE       0     0     0
            sdd              ONLINE       0     0     0
            sde              ONLINE       0     0     0
            sdf              ONLINE       0     0     0
            sdg              ONLINE       0     0     0
            sdh              ONLINE       0     0     0
            sdi              ONLINE       0     0     0
        logs
          /dev/shm/test.img  ONLINE       0     0     0
        spares
          sdj                AVAIL 
With memory as the slog, fsync throughput is clearly much higher.
pg_test_fsync 
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                                n/a*
        fdatasync                        6695.657 ops/sec     149 usecs/op
        fsync                            8079.750 ops/sec     124 usecs/op
        fsync_writethrough                            n/a
        open_sync                                    n/a*
* This file system and its mount options do not support direct
I/O, e.g. ext4 in journaled mode.

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                                n/a*
        fdatasync                        6247.616 ops/sec     160 usecs/op
        fsync                            3140.959 ops/sec     318 usecs/op
        fsync_writethrough                            n/a
        open_sync                                    n/a*
* This file system and its mount options do not support direct
I/O, e.g. ext4 in journaled mode.

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
         1 * 16kB open_sync write                    n/a*
         2 *  8kB open_sync writes                   n/a*
         4 *  4kB open_sync writes                   n/a*
         8 *  2kB open_sync writes                   n/a*
        16 *  1kB open_sync writes                   n/a*

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
        write, fsync, close              6330.570 ops/sec     158 usecs/op
        write, close, fsync              6989.741 ops/sec     143 usecs/op

Non-Sync'ed 8kB writes:
        write                           77800.273 ops/sec      13 usecs/op

Note, however, that if synchronous_commit is turned off in PostgreSQL, a slog on an ordinary disk is already sufficient; the pgbench write test earlier in this post is good evidence of this.
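A hedged sketch of turning it off; synchronous_commit can be changed per session with SET, or globally in postgresql.conf followed by a reload:
echo "synchronous_commit = off" >> $PGDATA/postgresql.conf   # global setting (or: SET synchronous_commit = off; per session)
pg_ctl -D $PGDATA reload                                     # reloadable parameter, no restart needed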

2. The block devices used to build a zpool should be referenced by-id, because device names can change on Linux: /dev/sda may come back as /dev/sdb after a reboot.
The slog in particular must not be allowed to change names; that can lead to data corruption.
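For a pool that was already created with /dev/sd* names, one common way to switch everything to by-id names is to export and re-import it (a hedged sketch; the pool must not be in use while exported):
# zpool export zp1
# zpool import -d /dev/disk/by-id zp1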
List the by-id names:
# ll /dev/disk/by-id/*
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064b0a6dc -> ../../sdd
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064b0a6dc-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064b0a6dc-part9 -> ../../sdd9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064b563d5 -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064b563d5-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064b563d5-part9 -> ../../sda9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bbc776 -> ../../sde
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bbc776-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bbc776-part9 -> ../../sde9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bbf23b -> ../../sdh
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bbf23b-part1 -> ../../sdh1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bbf23b-part9 -> ../../sdh9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bbfc66 -> ../../sdf
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bbfc66-part1 -> ../../sdf1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bbfc66-part9 -> ../../sdf9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bc046a -> ../../sdj
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bc046a-part1 -> ../../sdj1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bc046a-part9 -> ../../sdj9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bf56da -> ../../sdc
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bf56da-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bf56da-part9 -> ../../sdc9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bf65dd -> ../../sdb
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bf65dd-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064bf65dd-part9 -> ../../sdb9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064c02880 -> ../../sdi
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064c02880-part1 -> ../../sdi1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064c02880-part9 -> ../../sdi9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064c04f5a -> ../../sdg
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064c04f5a-part1 -> ../../sdg1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-35000c50064c04f5a-part9 -> ../../sdg9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/scsi-3600605b0079e70801b0e33ff07ebffa3 -> ../../sdk
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-3600605b0079e70801b0e33ff07ebffa3-part1 -> ../../sdk1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-3600605b0079e70801b0e33ff07ebffa3-part2 -> ../../sdk2
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/scsi-3600605b0079e70801b0e33ff07ebffa3-part3 -> ../../sdk3
lrwxrwxrwx 1 root root 10 Jun 19 12:43 /dev/disk/by-id/scsi-3600605b0079e70801b0e33ff07ebffa3-part4 -> ../../sdk4
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064b0a6dc -> ../../sdd
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064b0a6dc-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064b0a6dc-part9 -> ../../sdd9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064b563d5 -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064b563d5-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064b563d5-part9 -> ../../sda9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bbc776 -> ../../sde
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bbc776-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bbc776-part9 -> ../../sde9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bbf23b -> ../../sdh
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bbf23b-part1 -> ../../sdh1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bbf23b-part9 -> ../../sdh9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bbfc66 -> ../../sdf
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bbfc66-part1 -> ../../sdf1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bbfc66-part9 -> ../../sdf9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bc046a -> ../../sdj
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bc046a-part1 -> ../../sdj1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bc046a-part9 -> ../../sdj9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bf56da -> ../../sdc
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bf56da-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bf56da-part9 -> ../../sdc9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bf65dd -> ../../sdb
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bf65dd-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064bf65dd-part9 -> ../../sdb9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064c02880 -> ../../sdi
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064c02880-part1 -> ../../sdi1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064c02880-part9 -> ../../sdi9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064c04f5a -> ../../sdg
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064c04f5a-part1 -> ../../sdg1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x5000c50064c04f5a-part9 -> ../../sdg9
lrwxrwxrwx 1 root root  9 Jun 19  2014 /dev/disk/by-id/wwn-0x600605b0079e70801b0e33ff07ebffa3 -> ../../sdk
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x600605b0079e70801b0e33ff07ebffa3-part1 -> ../../sdk1
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x600605b0079e70801b0e33ff07ebffa3-part2 -> ../../sdk2
lrwxrwxrwx 1 root root 10 Jun 19  2014 /dev/disk/by-id/wwn-0x600605b0079e70801b0e33ff07ebffa3-part3 -> ../../sdk3
lrwxrwxrwx 1 root root 10 Jun 19 12:43 /dev/disk/by-id/wwn-0x600605b0079e70801b0e33ff07ebffa3-part4 -> ../../sdk4
If a /dev/sd* name was already used, the device can be removed and re-added by-id:
# zpool remove zp1 /dev/sdk4
# zpool add zp1 log /dev/disk/by-id/scsi-3600605b0079e70801b0e33ff07ebffa3-part4
The slog generally does not need to be large; a few GB is plenty. For the L2ARC, on the other hand, the bigger the better.
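A hedged sketch of adding an L2ARC cache device (the SSD's by-id path is hypothetical; cache devices can be added or removed at any time, and losing one does not endanger the pool):
# zpool add zp1 cache /dev/disk/by-id/ata-EXAMPLE_SSD
# zpool iostat -v zp1 1      # watch the cache device fill up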


[Summary]
The test coverage is narrow, but it still shows a few things.
1. Thanks to the SLOG, ZFS write performance exceeded that of a SAN with this configuration, so ZFS is quite suitable for database use.
2. The read test did not exceed memory size, so on its own it is not conclusive. Once the working set exceeds memory, the query on the 18GB table takes about 70 seconds; adding an SSD as L2ARC would improve read performance further.
3. ZFS compression reduces storage usage, but the latency and CPU overhead of compressing and decompressing must also be considered.
4. The slog is important and should ideally be mirrored; if the underlying device is already RAID-protected, mirroring can be skipped. Do not imitate the memory-as-slog example here; it was only used to simulate an SSD.
