TY - GEN
T1 - SWL: A search-while-load demand paging scheme with NAND flash memory
T2 - LCTES'07: 2007 ACM SIGPLAN-SIGBED Conference on Languages, Compilers, and Tools for Embedded Systems
AU - In, Jihyun
AU - Shin, Ilhoon
AU - Kim, Hyojun
PY - 2007
Y1 - 2007
N2 - As mobile phones become increasingly multifunctional, the number and size of applications installed on them are growing rapidly. Consequently, mobile phones require more hardware resources such as NOR/NAND flash memory and DRAM, and their production cost rises accordingly. One candidate solution for reducing production cost is demand paging using an MMU. However, demand paging causes unpredictably long page fault latency, so mobile phone manufacturers are reluctant to deploy this scheme. In this paper, we present a method that reduces the long latency of page faults by performing page fault handling in a parallelized manner, taking the characteristics of NAND-type flash memory into account. We also discuss how to modify existing page cache replacement policies so that they can exploit the benefits of the parallelized page fault handler. Experimental results show that the parallelized page fault handler improves the worst-case page fault latency by up to roughly 20%, and that the modified page cache replacement policies improve both the average and worst-case instruction fetch times.
KW - Demand paging
KW - NAND flash memory
KW - Page fault handler
KW - Page replacement
KW - Parallelization
UR - https://www.scopus.com/pages/publications/34547983826
U2 - 10.1145/1254766.1254806
DO - 10.1145/1254766.1254806
M3 - Conference contribution
AN - SCOPUS:34547983826
SN - 1595936327
SN - 9781595936325
T3 - Proceedings of the ACM SIGPLAN Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES)
SP - 217
EP - 226
BT - LCTES'07
Y2 - 13 June 2007 through 15 June 2007
ER -