source link: https://myvirtualcloud.net/ext4-vs-xfs-vs-asm-vs-asm-oel-which-one-performs-better-taking-it-to-the-next-level/

EXT4 vs. XFS vs. ASM vs. ASM + OEL, which one performs better? Taking it to the next level.

  • 02/17/2019

My previous article, EXT4 vs XFS for Oracle, generated some commentary both here on my blog and on Reddit. For this reason, I took the time to extend the same benchmark to Oracle ASM (Automatic Storage Management) and to Oracle Enterprise Linux (OEL).

As you can imagine, there is no single, simple answer to this question because it will always depend on a number of variables, so I tried to eliminate most of them:

  • The same server
  • The same VM configuration
  • The same Hypervisor
  • The same Oracle configuration
  • The same SLOB configuration (30min with a 70:30 Read/Write ratio)
  • The same LVM configuration
  • The same storage configuration (I am using Datrium DVX, which uses local host SSDs so read I/O is always served locally at bus speeds. More info on the exact config here)
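For reference, a 70:30 read/write mix over a 30-minute run maps onto SLOB's `slob.conf` roughly as sketched below. `UPDATE_PCT` and `RUN_TIME` follow directly from the stated workload; the remaining values are illustrative assumptions, not the author's published settings.

```shell
# Sketch of a slob.conf fragment matching the stated workload.
# UPDATE_PCT=30 means 30% of operations are updates, giving
# approximately a 70:30 read/write ratio.
UPDATE_PCT=30          # 30% writes -> ~70:30 read/write mix
RUN_TIME=1800          # run length in seconds (30 minutes)
WORK_LOOP=0            # run for RUN_TIME rather than a fixed op count
THREADS_PER_SCHEMA=1   # illustrative value, not from the article
```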

The only aspect that changes between tests is the data-disk storage layer: EXT4, XFS, ASM, or ASM + OEL. Please refer to my previous article here for more information on the exact disk configuration, benchmark, and measurement method.

Additionally, considering that file systems can be tuned in countless ways, I accepted all the default configurations for fdisk, LVM, FSTAB, ASM, and the file systems. That means this comparison is only valid for pristine default installations.

To collect the data I followed Chris Buckel's article on the SLOB Sustained Throughput Test, Interpreting SLOB Results, and I graphed the results the same way.

However, looking at every data point in a graph doesn't really give us a solid understanding of what is happening, because the SLOB workload is dynamic and there are peaks and dips during a 30-minute run.

To eliminate outliers, I replaced the time series with a trendline that uses a moving average of 10 data points (there are roughly 605 data points for each SLOB run). This approach smoothed out anomalies while maintaining data fidelity.
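The smoothing described above can be sketched as a simple trailing moving average over the IOPS samples. This is a minimal illustration, not the author's actual plotting code, and the sample values below are made up.

```python
# Minimal sketch of the 10-point moving-average smoothing described above.
# The raw SLOB data is not reproduced here; `iops` holds hypothetical samples.

def moving_average(samples, window=10):
    """Return the trailing moving average of `samples` with the given window.

    Early points use as many samples as are available, so the output has the
    same length as the input.
    """
    smoothed = []
    for i in range(len(samples)):
        start = max(0, i - window + 1)
        chunk = samples[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# Hypothetical read-IOPS samples from one benchmark run.
iops = [59000, 61000, 58000, 60500, 59500, 62000, 57500, 60000, 61500, 58500]
trend = moving_average(iops, window=10)
```

The trailing window keeps the trendline the same length as the original series, which makes it easy to overlay on the raw data in a chart.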

[Figure: latency and IOPS comparison across EXT4, XFS, ASM, and ASM + OEL]

Results

During the tests, XFS on CentOS 7 delivered the highest read IOPS in a single sample (59,386), and ASM + OEL delivered the highest write IOPS in a single sample (15,594). However, the highest average IOPS for both reads and writes across all benchmarks came from ASM + OEL, making ASM with Oracle Enterprise Linux the winner in terms of performance.

Conclusion

The conclusion for this Oracle SLOB test, which uses 8 KB block-size I/O, is that ASM with Oracle Enterprise Linux outperforms all tested options, including ASM on CentOS. Furthermore, ASM + OEL also showed low CPU utilization.

To my surprise, it is clear that Oracle has optimized Oracle Linux for ASM and there are advantages in terms of performance (for reads and writes) and supportability. As always, your mileage may vary.

