Monday, January 30, 2017

Fun with RobinHood 3, Lustre 2.9 and PCI-e SSD: /dev/nvme0n1

------------------- TEST PCI-e SSD XFS ---------------
MySQL data on the NVMe SSD (XFS)

time rbh-du -d -H  -f /etc/robinhood.d/tmpfs/lnew.conf /lnew/*

real    6m32.658s
user    0m18.175s
sys     0m9.272s

We ran it a second time to make sure the numbers are consistent:

real    6m22.549s
user    0m24.533s
sys     0m10.950s
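
The two runs land close together, but the second one benefits from warm caches. A minimal sketch of how the runs could be repeated with cold Linux caches in between (the config path is the one used above, the rest is an assumption; the InnoDB buffer pool would additionally need a mysqld restart to be truly cold):

# repeat the scan with a cold page cache between runs (as root)
for i in 1 2 3; do
    sync                                  # flush dirty pages first
    echo 3 > /proc/sys/vm/drop_caches     # drop page cache, dentries and inodes
    time rbh-du -d -H -f /etc/robinhood.d/tmpfs/lnew.conf /lnew/*
done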

------------------- TEST PCI-e SSD ext4 -------------

real    5m46.431s
user    0m16.900s
sys     0m9.288s

real    5m45.627s
user    0m15.509s
sys     0m8.985s
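
Switching the same NVMe device between XFS and ext4 is just a reformat plus a datadir move. A rough sketch with hypothetical mount point and paths (mkfs wipes the device, so back up the database first):

systemctl stop mysqld                       # or mariadb, depending on the distro
mkfs.ext4 /dev/nvme0n1                      # for the XFS run: mkfs.xfs -f /dev/nvme0n1
mount /dev/nvme0n1 /mnt/nvme                # hypothetical mount point
rsync -a /var/lib/mysql/ /mnt/nvme/mysql/   # copy the Robinhood/InnoDB data over
# then point datadir at the new location in my.cnf:
#   [mysqld]
#   datadir = /mnt/nvme/mysql
systemctl start mysqld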


------------ TEST ZFS 128K recordsize --------------------
MySQL data on a RAIDZ0 zpool of 8x 3TB HGST disks

time rbh-du -d -H  -f /etc/robinhood.d/tmpfs/lnew.conf /lnew/*

real    13m34.484s
user    0m17.150s
sys     0m9.125s
------------------- Same ZFS pool, but with 8K recordsize -------------
real    15m20.003s
user    0m16.427s
sys    0m8.811s

Interestingly, the 8K-recordsize myth does not help performance here.
What about 64K? It performs the same as the 128K recordsize.
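
The recordsize is a per-dataset ZFS property, so both variants can be reproduced on one pool; note it only applies to files written after it is set, so the MySQL data has to be copied or reloaded onto the dataset for an 8K run to actually use 8K records. A sketch with hypothetical pool/dataset names:

zfs create -o recordsize=128K tank/mysql128   # dataset for the 128K test
zfs create -o recordsize=8K   tank/mysql8k    # dataset for the 8K test
zfs get recordsize tank/mysql128 tank/mysql8k # verify the settings
# copy or reload the MySQL datadir onto the dataset under test, then rerun rbh-du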

--------------- Lustre 2.9 MDS -----------------
Running du directly on the Lustre filesystem (no Robinhood database)

time du -bsh /lnew/*

real    36m12.990s
user    0m24.658s
sys     8m57.458s
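
The gap is expected: rbh-du answers from the MySQL database that Robinhood maintains, while a plain du has to stat every entry under /lnew through the MDS. A quick side-by-side sketch; rbh-report ships with Robinhood, but the exact report options may differ between versions:

time du -bsh /lnew/*                                            # walks the live Lustre namespace
time rbh-du -d -H -f /etc/robinhood.d/tmpfs/lnew.conf /lnew/*   # answers from the Robinhood DB
rbh-report -f /etc/robinhood.d/tmpfs/lnew.conf --fs-info        # filesystem summary from the DB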



Interesting results for the InnoDB speed on the different file systems:

ext4 on the PCI-e SSD was the fastest (~5m46s), closely followed by XFS on the same device (6m23s to 6m33s). ZFS on the spinning-disk pool took more than twice as long (13m34s with 128K recordsize, 15m20s with 8K), and all of them beat a plain du on the Lustre filesystem itself (36m13s) by a wide margin.
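
To relate these timings to the amount of InnoDB data actually being scanned, the size of the Robinhood schema can be pulled from information_schema; the schema name for /lnew is not shown here, so the query simply lists all of them:

mysql -e "SELECT table_schema,
                 ROUND(SUM(data_length + index_length)/1024/1024/1024, 1) AS size_gb
          FROM information_schema.TABLES
          GROUP BY table_schema;"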

