Examining Huge Pages or Transparent Huge Pages performance
Published: 2019-06-19


All modern processors use page-based mechanisms to translate a user-space process's virtual addresses into physical addresses for RAM. Pages are commonly 4KB in size, and the processor can hold a limited number of virtual-to-physical address mappings in the Translation Lookaside Buffer (TLB). The number of TLB entries ranges from tens to hundreds of mappings, which limits the processor to a few megabytes of memory it can address without changing the TLB entries. When a virtual-to-physical address mapping is not in the TLB, the processor must do an expensive page-table walk to generate a new virtual-to-physical address mapping.

To increase the amount of memory the processor can address without performing expensive TLB updates, many processors allow larger page sizes to be used. On x86_64 processors huge pages are 2MB, 512 times larger than regular 4KB pages. In ideal situations huge pages can decrease the overhead of TLB updates (misses). However, huge page use can increase memory pressure, add latency for minor page faults, and add overhead when splitting huge pages or coalescing normal-sized pages into huge pages.
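
The base page size and the huge page size on a given system can be confirmed directly from the shell; the commands below are a minimal check, assuming an x86_64 Linux machine:

$ getconf PAGESIZE
$ grep Hugepagesize /proc/meminfo

The first command typically reports 4096 bytes and the second 2048 kB on x86_64, matching the 512:1 ratio described above.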

There are two mechanisms available for huge pages in Linux: the original hugepages mechanism and Transparent Huge Pages (THP). Explicit configuration is required for the original hugepages mechanism. The newer Transparent Huge Pages (THP) mechanism automatically uses larger pages for dynamically allocated memory in Red Hat Enterprise Linux 6.
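
Whether THP is enabled, and in which mode (always, madvise, or never), can be checked through sysfs. The path below is the upstream location; note that Red Hat Enterprise Linux 6 exposes the same tunable under /sys/kernel/mm/redhat_transparent_hugepage/ instead:

$ cat /sys/kernel/mm/transparent_hugepage/enabled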

To determine whether the newer Transparent Huge Pages (THP) or the older hugepages mechanism is being used, look at the output of /proc/meminfo as below:

$ cat /proc/meminfo|grep Huge
AnonHugePages:   3049472 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

The AnonHugePages entry lists the amount of memory that the newer Transparent Huge Page mechanism currently has in use. For this machine that is 3049472 kB, or 1489 huge pages each 2048 kB in size.

In this case there are zero pages in the pool of the older hugepages mechanism, as shown by HugePages_Total of 0. HugePages_Free shows how many pages are still available for allocation, which will be less than or equal to HugePages_Total. The number of huge pages in use can be computed as HugePages_Total - HugePages_Free.
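
As a quick sketch of that arithmetic, the values can be pulled straight out of /proc/meminfo with awk; the field names match the example output above:

$ awk '/HugePages_Total/ {t=$2} /HugePages_Free/ {f=$2} /AnonHugePages/ {a=$2} /Hugepagesize/ {sz=$2} END {print "hugetlb pages in use:", t-f; print "transparent huge pages in use:", a/sz}' /proc/meminfo

For the example output above this reports 0 hugetlb pages in use and 1489 transparent huge pages.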

Determining whether page fault latency is due to huge pages use

Huge page use can reduce the number of TLB updates required to access large regions of memory, reducing the overall cost of TLB updates, but it can increase costs and latency for other operations. When a user-space application is given a range of addresses for a memory allocation, the assignment of a physical page is deferred until the first time the page is accessed. To prevent information leakage from the previous user of the page, the kernel writes zeros over the entire page. For a 4096 byte page this is a relatively short operation and will only take a couple of microseconds. The x86 huge pages are 2MB in size, 512 times larger than the normal page, so the operation may take hundreds of microseconds and impact the operation of latency-sensitive code. Below is a simple SystemTap command line script to show which applications have huge pages zeroed out and how long those operations take. It will run until Ctrl-C is pressed.

stap  -e 'global huge_clear probe kernel.function("clear_huge_page").return {huge_clear [execname(), pid()] <<< (gettimeofday_us() - @entry(gettimeofday_us()))}'

Below is a run of the above SystemTap clear huge page script. The script outputs a list sorted from the executable name and process with the most huge page clears to the least. The @count is the number of times that process encountered a huge page clear operation. Following that are timing statistics displayed in microseconds of wall clock time. The @min and @max are the minimum and maximum times respectively to clear out a page, and @sum is the total wall clock time. In the example below the ld process 17050 took a total of 1924 microseconds to clear out huge pages, and on average those page clears took 128 microseconds.

#  stap  -e 'global huge_clear probe kernel.function("clear_huge_page").return {huge_clear [execname(), pid()] <<< (gettimeofday_us() - @entry(gettimeofday_us()))}'
^C
huge_clear["ld",17050] @count=15 @min=114 @max=148 @sum=1924 @avg=128
huge_clear["ld",27996] @count=13 @min=121 @max=160 @sum=1674 @avg=128
huge_clear["ld",19595] @count=11 @min=86 @max=181 @sum=1251 @avg=113
huge_clear["cc1",22840] @count=6 @min=108 @max=180 @sum=862 @avg=143
huge_clear["ld",15640] @count=5 @min=160 @max=599 @sum=1274 @avg=254
huge_clear["ld",27733] @count=4 @min=95 @max=145 @sum=443 @avg=110
huge_clear["cc1",24455] @count=4 @min=103 @max=159 @sum=535 @avg=133
huge_clear["cc1",20431] @count=3 @min=112 @max=172 @sum=408 @avg=136
huge_clear["cc1",21906] @count=3 @min=125 @max=159 @sum=431 @avg=143

The system may attempt to save memory by using the same physical page for multiple processes. When one of the processes attempts to modify the contents of the page, a new copy of the page needs to be made. The Copy-On-Write (COW) operation for a huge page can be observed with a script very similar to the one watching for huge pages being zeroed out. Below is the script to watch for Copy-On-Write of huge pages; it outputs data in a similar format.

stap  -e 'global huge_cow probe kernel.function("copy_user_huge_page").return {huge_cow [execname(), pid()] <<< (gettimeofday_us() - @entry(gettimeofday_us()))}'

Determining whether huge page split and collapse operations are affecting performance

Because some portions of the kernel code only work with normal-sized pages, the kernel may convert a huge page into a set of normal-sized pages using a split operation. One can identify whether split operations are occurring with the following SystemTap script:

stap -e 'probe kernel.function("split_huge_page") { printf("%s: %s(%d)n", pp(), execname(), pid());}'

Below is an example run of the script showing which processes are performing split huge page operations. In this case a virtualized guest machine (qemu-system-x86_64) has some huge page splits.

# stap -e 'probe kernel.function("split_huge_page") { printf("%s: %s(%d)\n", pp(), execname(), pid());}'
kernel.function("split_huge_page@include/linux/huge_mm.h:103"): qemu-system-x86(9473)
kernel.function("split_huge_page@include/linux/huge_mm.h:103"): qemu-system-x86(9473)
kernel.function("split_huge_page@include/linux/huge_mm.h:103"): plugin-containe(16582)
kernel.function("split_huge_page@include/linux/huge_mm.h:103"): StreamT~ns #697(2942)
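
As a lower-overhead cross-check that needs no kernel debuginfo, the kernel's own THP event counters in /proc/vmstat can be sampled before and after a workload. The exact counter names (for example thp_split or thp_collapse_alloc) vary with the kernel version, so treat the list printed on a given machine as authoritative:

$ grep ^thp_ /proc/vmstat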

The inverse of the huge page split operation is the huge page collapse operation, which converts a set of normal-sized pages into a single huge page. It is desirable to have a range of addresses need fewer TLB entries, but the conversion process is expensive because the system needs to find a candidate set of pages to group together and then copy all the memory from the possibly scattered normal-sized pages into a single huge page. The khugepaged kernel thread searches for candidate pages to collapse into a single huge page. Even if khugepaged is not successful at converting normal-sized pages into huge pages, it may still be taking processor time to search for candidate pages. You can see whether the khugepaged kernel thread is taking a significant amount of processor time with:

top -p `pidof khugepaged`
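
The scanning behavior of khugepaged can also be inspected through its sysfs tunables (scan interval, pages scanned per pass, and counters such as pages_collapsed and full_scans). The path below is the upstream location; as noted above, Red Hat Enterprise Linux 6 places it under /sys/kernel/mm/redhat_transparent_hugepage/ instead:

$ grep . /sys/kernel/mm/transparent_hugepage/khugepaged/*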

If you want to see when the huge page collapse operations occur, the following will note each time khugepaged is able to collapse normal-sized pages into huge pages:

stap -e 'probe kernel.function("collapse_huge_page") {  printf("%-25s: %s (%d) collapse_huge_pagen", tz_ctime(gettimeofday_s()), execname(), pid())}'

The above one line script will generate output like the following:

$ stap -e 'probe kernel.function("collapse_huge_page") {  printf("%-25s: %s (%d) collapse_huge_page\n", ctime(gettimeofday_s()), execname(), pid())}'
Mon Oct 21 15:12:44 2013 : khugepaged (88) collapse_huge_page
Mon Oct 21 15:13:44 2013 : khugepaged (88) collapse_huge_page
Mon Oct 21 15:13:54 2013 : khugepaged (88) collapse_huge_page
Mon Oct 21 15:14:54 2013 : khugepaged (88) collapse_huge_page
Mon Oct 21 15:15:04 2013 : khugepaged (88) collapse_huge_page

TIPS:

If stap fails to run with an error like the following:

# semantic error: missing x86_64 kernel/module debuginfo [man warning::debuginfo] under '/lib/modules/3.10.0-327.ali2000.alios7.x86_64/build'

install the matching kernel debuginfo by running:

# debuginfo-install kernel
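
Alternatively, recent SystemTap packages ship a helper that resolves and installs the debuginfo needed for the currently running kernel; whether it is available depends on the distribution's systemtap packaging:

# stap-prep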

References

  • https://developers.redhat.com/blog/2014/03/10/examining-huge-pages-or-transparent-huge-pages-performance/
