
QEMU Internals


While analyzing QEMU 1.7.0 I was googling function names and came across a very good blog.

Someone has written up, in reasonable detail, the QEMU call path of memory load/store instruction emulation for MIPS, along with several other core areas. I have only skimmed it so far, but it matches what I suspected when I started the analysis: translate.c under the target-i386 directory seems to parse each instruction by opcode, drive the TCG engine and build Translation Blocks, and int cpu_exec(CPUArchState *env) in cpu-exec.c at the root of the tree really does look like the main emulation loop. The details differ per target architecture, but this is where the opcode-mapping functions of translate.c get called, and TCG caches the translated binary so it can be reused later.

QEMU also uses the softmmu to translate a guest VA to a guest PA and then on to a host VA; it is basically organized like a TLB. The catch is that the softmmu code does not exist in the tree as-is: the source only contains template files that decide how the softmmu code is shaped for each architecture, and the code that actually gets compiled is determined at make time.
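For reference, the per-size helpers get stamped out by defining a size macro and re-including softmmu_template.h once per access width. The pattern looks roughly like this (which file does the including differs between QEMU versions, so treat the placement as an assumption):

/* illustrative only: instantiate the softmmu load/store helpers;
 * softmmu_template.h derives DATA_SIZE from SHIFT (DATA_SIZE = 1 << SHIFT)
 * and #undefs SHIFT again at the end, so it can be re-included */
#define MMUSUFFIX _mmu

#define SHIFT 0                      /* 1 byte  -> __ldb_mmu / __stb_mmu */
#include "softmmu_template.h"

#define SHIFT 1                      /* 2 bytes -> __ldw_mmu / __stw_mmu */
#include "softmmu_template.h"

#define SHIFT 2                      /* 4 bytes -> __ldl_mmu / __stl_mmu */
#include "softmmu_template.h"

#define SHIFT 3                      /* 8 bytes -> __ldq_mmu / __stq_mmu */
#include "softmmu_template.h"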

The write-up below is based on QEMU 0.10.x and MIPS, but in 1.7.0 for i386 the function cpu_x86_handle_mmu_fault is where the faulting linear address gets stored into CR2 before the function returns 1. Cross-referencing back and forth from that point should reveal quite a lot...


Incidentally, when compiling the QEMU source, configuring with ./configure --target-list=i386-softmmu and then running make builds only that target architecture and is much faster. All the time I wasted building everything... -_-


Anyway, I planted some simple code into the MMU page fault routine, booted vanilla Linux 3.7.1 on top of it, and confirmed the behavior.
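The probe itself was nothing fancy; roughly something like this (hypothetical sketch -- the exact spot and variable names in target-i386/helper.c should be double-checked against your tree):

/* hypothetical logging probe inside cpu_x86_handle_mmu_fault(),
 * placed just before the fault path writes CR2 and returns 1;
 * 'addr' is assumed to be the faulting linear address parameter */
fprintf(stderr, "mmu fault: linear addr=" TARGET_FMT_lx "\n", addr);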




Source: http://vm-kernel.org/blog/category.html?cat=QEMU



qemu internal part 1: the code path of memory load emulation


In qemu, 'target' has two different meanings. The first is the emulated target machine architecture: for example, when emulating a mips machine on x86, the target is mips and the host is x86. In tcg (tiny code generator), however, target means the architecture of the generated binary. In the example of emulating mips on x86, the tcg target is x86 because tcg generates x86 binary.


This article is based on qemu version 0.10.5 and the emulated target machine is little-endian mips. I will summarize the code path of mips lw instruction emulation in qemu.


Function decode_opc decodes every fetched instruction before tcg generates the target binary.

target-mips/translate.c


7566 static void decode_opc (CPUState *env, DisasContext *ctx)


7960     case OPC_LB ... OPC_LWR: /* Load and stores */

7961     case OPC_SB ... OPC_SW:

7962     case OPC_SWR:

7963     case OPC_LL:

7964     case OPC_SC:

7965          gen_ldst(ctx, op, rt, rs, imm);

7966          break;

It will call function gen_ldst which is also in target-mips/translate.c.

target-mips/translate.c


973 static void gen_ldst (DisasContext *ctx, uint32_t opc, int rt,

974                       int base, int16_t offset)


1046     case OPC_LW:

1047         op_ldst_lw(t0, ctx);

1048         gen_store_gpr(t0, rt);

1049         opn = "lw";

1050         break;

Function op_ldst_lw generates the target binary that fetches the value from the emulated guest memory, and gen_store_gpr stores this value into the emulated cpu's general-purpose register rt. Function op_ldst_lw is generated by the macro OP_LD.


target-mips/translate.c


901 #define OP_LD(insn,fname)                                        \

902 static inline void op_ldst_##insn(TCGv t0, DisasContext *ctx)    \

903 {                                                                \

904     tcg_gen_qemu_##fname(t0, t0, ctx->mem_idx);                  \

905 }


910 OP_LD(lw,ld32s);

So op_ldst_lw is simply a function that calls tcg_gen_qemu_ld32s, which emits the opcode (INDEX_op_qemu_ld32u) to gen_opc_ptr and its arguments to gen_opparam_ptr.
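Expanding OP_LD(lw,ld32s) by hand makes this explicit (this is just the preprocessor expansion of the macro quoted above):

static inline void op_ldst_lw(TCGv t0, DisasContext *ctx)
{
    tcg_gen_qemu_ld32s(t0, t0, ctx->mem_idx);
}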


tcg/tcg-op.h


1793 static inline void tcg_gen_qemu_ld32s(TCGv ret, TCGv addr, int mem_index)

1794 {

1795 #if TARGET_LONG_BITS == 32

1796     tcg_gen_op3i_i32(INDEX_op_qemu_ld32u, ret, addr, mem_index);

1797 #else

1798     tcg_gen_op4i_i32(INDEX_op_qemu_ld32u, TCGV_LOW(ret), TCGV_LOW(addr),

1799                      TCGV_HIGH(addr), mem_index);

1800     tcg_gen_sari_i32(TCGV_HIGH(ret), TCGV_LOW(ret), 31);

1801 #endif

1802 }


99 static inline void tcg_gen_op3i_i32(int opc, TCGv_i32 arg1, TCGv_i32 arg2,

100                                     TCGArg arg3)

101 {

102     *gen_opc_ptr++ = opc;

103     *gen_opparam_ptr++ = GET_TCGV_I32(arg1);

104     *gen_opparam_ptr++ = GET_TCGV_I32(arg2);

105     *gen_opparam_ptr++ = arg3;

106 }

The call path tcg follows to generate the target binary is:

cpu_gen_code->tcg_gen_code->tcg_gen_code_common->tcg_reg_alloc_op->tcg_out_op

tcg/i386/tcg-target.c


856 static inline void tcg_out_op(TCGContext *s, int opc,

857                               const TCGArg *args, const int *const_args)


1041     case INDEX_op_qemu_ld32u:

1042         tcg_out_qemu_ld(s, args, 2);

1043         break;


431 static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args,

432                             int opc)


508 #if TARGET_LONG_BITS == 32

509     tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_EDX, mem_index);

510 #else

511     tcg_out_mov(s, TCG_REG_EDX, addr_reg2);

512     tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_ECX, mem_index);

513 #endif

514     tcg_out8(s, 0xe8);

515     tcg_out32(s, (tcg_target_long)qemu_ld_helpers[s_bits] -

516               (tcg_target_long)s->code_ptr - 4);

In line 514, tcg emits 0xe8, the x86 call opcode, followed by a 32-bit displacement to one of the functions in the qemu_ld_helpers array. The arguments to these functions are passed in the registers EAX, EDX and ECX.
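To make lines 514-516 concrete, here is a small self-contained sketch of emitting a call rel32 the same way; the helper address is made up for the example:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Emit "call rel32": opcode 0xe8 followed by a 32-bit displacement that is
 * relative to the end of the 5-byte instruction, i.e. target - next_insn. */
static uint8_t *emit_call(uint8_t *code_ptr, void *target)
{
    *code_ptr++ = 0xe8;
    int32_t disp = (int32_t)((intptr_t)target - (intptr_t)code_ptr - 4);
    memcpy(code_ptr, &disp, 4);                   /* same role as tcg_out32() above */
    return code_ptr + 4;
}

int main(void)
{
    uint8_t buf[16];
    void *helper = (void *)(uintptr_t)0x08048000; /* pretend a ld helper lives here */
    uint8_t *end = emit_call(buf, helper);
    printf("emitted %d bytes, opcode=0x%02x\n", (int)(end - buf), buf[0]);
    return 0;
}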

tcg/i386/tcg-target.c


413 static void *qemu_ld_helpers[4] = {

414     __ldb_mmu,

415     __ldw_mmu,

416     __ldl_mmu,

417     __ldq_mmu,

418 };

These functions __ldb_mmu/__ldw_mmu are defined in softmmu_template.h.

softmmu_template.h


DATA_TYPE REGPARM glue(glue(__ld, SUFFIX), MMUSUFFIX)(target_ulong addr,

int mmu_idx)

In sum, function gen_ldst emits the opcode (INDEX_op_qemu_ld32u) to gen_opc_ptr, and tcg_out_op generates the target binary according to that opcode. For the lw instruction, the generated x86 binary calls the load helpers defined in softmmu_template.h.




qemu internal part 2: softmmu


Qemu uses softmmu to speed up finding the mapping between guest physical addresses and host virtual addresses, and the mapping between guest I/O regions and qemu's I/O emulation functions. In this article, I assume the guest page size is 4K.


1. the two level guest physical page descriptor table


Qemu uses a two-level guest physical page descriptor table to maintain the guest memory space and MMIO space. The table is pointed to by l1_phys_map. Bits [31:22] of the guest physical address index the first-level table and bits [21:12] index the second-level table. Each second-level entry is a PhysPageDesc.


exec.c


146 typedef struct PhysPageDesc {

147     /* offset in host memory of the page + io_index in the low bits */

148     ram_addr_t phys_offset;

149     ram_addr_t region_offset;

150 } PhysPageDesc;

If the memory region is RAM, bits [31:12] of phys_offset are the offset of this page in the emulated physical memory. If the memory region is memory-mapped I/O, bits [11:3] of phys_offset are an index into the io_mem_write/io_mem_read arrays; when this region is accessed, the functions at that index in io_mem_write/io_mem_read are called.
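A minimal standalone sketch of that lookup arithmetic, with the constants hard-coded for 4K pages (an illustration, not the actual qemu code):

#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12
#define L2_BITS          10                  /* bits [21:12] */
#define L1_BITS          10                  /* bits [31:22] */

int main(void)
{
    uint32_t gpa = 0x1234abcd;               /* example guest physical address */

    /* indices into the two-level guest physical page descriptor table */
    uint32_t l1_index = (gpa >> (TARGET_PAGE_BITS + L2_BITS)) & ((1 << L1_BITS) - 1);
    uint32_t l2_index = (gpa >> TARGET_PAGE_BITS) & ((1 << L2_BITS) - 1);

    /* interpreting a PhysPageDesc.phys_offset value */
    uint32_t phys_offset = 0x00400008;       /* example value */
    uint32_t ram_offset  = phys_offset & ~((1u << TARGET_PAGE_BITS) - 1); /* RAM: bits [31:12] */
    uint32_t io_index    = (phys_offset >> 3) & 0x1ff;                    /* MMIO: bits [11:3] */

    printf("l1=%u l2=%u ram_offset=0x%x io_index=%u\n",
           l1_index, l2_index, ram_offset, io_index);
    return 0;
}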


2. register the guest physical memory


Function cpu_register_physical_memory is used to register a guest memory region. If phys_offset is IO_MEM_RAM, this region is guest RAM space. If phys_offset > IO_MEM_ROM, this memory region is MMIO space.


898 static inline void cpu_register_physical_memory(target_phys_addr_t start_addr,

899                                                 ram_addr_t size,

900                                                 ram_addr_t phys_offset)

901 {

902     cpu_register_physical_memory_offset(start_addr, size, phys_offset, 0);

903 }

Function cpu_register_physical_memory_offset first looks up the PhysPageDesc in the l1_phys_map table using the given guest physical address. If the entry exists, qemu updates it; otherwise qemu allocates a new entry, fills in its value and inserts it into the table.
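A simplified sketch of that find-or-allocate logic over a two-level table (toy types and sizes, not the qemu implementation):

#include <stdint.h>
#include <stdlib.h>

typedef struct PhysPageDesc {
    uint32_t phys_offset;
    uint32_t region_offset;
} PhysPageDesc;

#define L1_SIZE 1024
#define L2_SIZE 1024

/* first level: pointers to second-level arrays, allocated on demand */
static PhysPageDesc *l1_phys_map[L1_SIZE];

/* Return the descriptor for a guest physical page, allocating the
 * second-level table on demand when alloc is non-zero. */
static PhysPageDesc *phys_page_find_alloc(uint32_t gpa, int alloc)
{
    uint32_t l1 = (gpa >> 22) & (L1_SIZE - 1);
    uint32_t l2 = (gpa >> 12) & (L2_SIZE - 1);

    if (!l1_phys_map[l1]) {
        if (!alloc)
            return NULL;
        l1_phys_map[l1] = calloc(L2_SIZE, sizeof(PhysPageDesc));
    }
    return &l1_phys_map[l1][l2];
}

int main(void)
{
    PhysPageDesc *pd = phys_page_find_alloc(0x10001000, 1);
    if (pd)
        pd->phys_offset = 0x00001000;        /* register the page as RAM at this offset */
    return 0;
}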


In the malta board emulation, the following code registers the RAM space.


hw/mips_malta.c


811     cpu_register_physical_memory(0, ram_size, IO_MEM_RAM);

3. register the mmio space


Before registering mmio space with cpu_register_physical_memory, qemu uses the function cpu_register_io_memory to register the I/O emulation functions into the io_mem_write/io_mem_read arrays.


exec.c


2851 int cpu_register_io_memory(int io_index,

2852                            CPUReadMemoryFunc **mem_read,

2853                            CPUWriteMemoryFunc **mem_write,

2854                            void *opaque)

This function returns the index into the io_mem_write/io_mem_read arrays, and this index is then passed to cpu_register_physical_memory via the phys_offset parameter.


hw/mips_malta.c


malta = cpu_register_io_memory(0, malta_fpga_read,

                                   malta_fpga_write, s);


cpu_register_physical_memory(base, 0x900, malta);

4. softmmu


Given a guest virtual address, how does qemu find the corresponding host virtual address? First qemu translates the guest virtual address to a guest physical address. Then qemu finds the PhysPageDesc entry in the l1_phys_map table and gets the phys_offset. Finally qemu adds phys_offset to phys_ram_base to get the host virtual address.


Qemu uses a softmmu model to speed up this process. Its main idea is to cache, in a TLB table, the offset from a guest virtual address to the corresponding host virtual address. When translating a guest virtual address to a host virtual address, qemu searches this TLB table first. If there is a matching entry, qemu simply adds the cached offset to the guest virtual address to get the host virtual address. Otherwise, it has to walk the l1_phys_map table and then fill the corresponding entry into the TLB table. The TLB index is bits [19:12] of the guest virtual address, and there is no asid field in a tlb entry, which means the TLB table has to be flushed on a process switch!
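A standalone sketch of that fast path, with the TLB and its fields reduced to the essentials (toy code, not qemu's):

#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_MASK (~((1u << TARGET_PAGE_BITS) - 1))
#define CPU_TLB_SIZE     256                 /* indexed by bits [19:12] of the GVA */

typedef struct {
    uint32_t  addr_read;                     /* tag: guest virtual page address */
    uintptr_t addend;                        /* host_va - guest_va for this page */
} ToyTLBEntry;

static ToyTLBEntry tlb_table[CPU_TLB_SIZE];

/* Return the host virtual address for a guest virtual address,
 * or 0 on a TLB miss (the real code calls tlb_fill and retries). */
static uintptr_t toy_translate(uint32_t gva)
{
    unsigned index = (gva >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
    ToyTLBEntry *e = &tlb_table[index];

    if ((gva & TARGET_PAGE_MASK) == e->addr_read)
        return gva + e->addend;              /* hit: just add the cached offset */
    return 0;                                /* miss: needs an l1_phys_map walk + TLB refill */
}

int main(void)
{
    static uint8_t fake_guest_ram[2 * 4096]; /* stands in for phys_ram_base */
    uint32_t gva = 0x804cd010;
    unsigned index = (gva >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);

    /* pretend a previous miss already filled this entry */
    tlb_table[index].addr_read = gva & TARGET_PAGE_MASK;
    tlb_table[index].addend = (uintptr_t)fake_guest_ram - (gva & TARGET_PAGE_MASK);

    printf("host va = %p\n", (void *)toy_translate(gva));
    return 0;
}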


This TLB table works just like a traditional hardware TLB. Note, however, that for a MIPS cpu there is a second mmu model in qemu: unlike x86, MIPS does NOT walk a hardware page table; instead it uses a hardware TLB that is NOT transparent to software. That is a topic for another article. What matters here is that the softmmu model described in this article is not the mmu model of the MIPS cpu itself.


Moreover, besides speeding up the translation of guest virtual addresses to host virtual addresses, this softmmu model also speeds up dispatching I/O emulation functions by guest virtual address. In this case, the index of the I/O emulation functions in io_mem_write/io_mem_read is stored in iotlb.


The format of a TLB entry is as follows:


cpu-defs.h


176     CPUTLBEntry tlb_table[NB_MMU_MODES][CPU_TLB_SIZE];                  \

177     target_phys_addr_t iotlb[NB_MMU_MODES][CPU_TLB_SIZE];


108 typedef struct CPUTLBEntry {

109     /* bit TARGET_LONG_BITS to TARGET_PAGE_BITS : virtual address

110        bit TARGET_PAGE_BITS-1..4  : Nonzero for accesses that should not

111                                     go directly to ram.

112        bit 3                      : indicates that the entry is invalid

113        bit 2..0                   : zero

114     */

115     target_ulong addr_read;

116     target_ulong addr_write;

117     target_ulong addr_code;

124     target_phys_addr_t addend;

131 } CPUTLBEntry;

Fields addr_read/addr_write/addr_code store the guest virtual address for the TLB entry; they are the tag of the entry. Field addend is the offset from the guest virtual address to the host virtual address: adding it to the guest virtual address yields the host virtual address.


addend = host_virtual_address - guest_virtual_address

host_virtual_address = phys_ram_base (qemu variable) + guest_physical_address - guest_physical_address_base (0 in MIPS)

The iotlb stores the index of I/O emulation function in io_mem_write/io_mem_read.


Functions __ldb_mmu/__ldw_mmu/__ldl_mmu translate the guest virtual address to a host virtual address, or dispatch the access at that guest virtual address to the I/O emulation functions.


softmmu_template.h


86 DATA_TYPE REGPARM glue(glue(__ld, SUFFIX), MMUSUFFIX)(target_ulong addr,

87                                                       int mmu_idx)

88 {

89     DATA_TYPE res;

90     int index;

91     target_ulong tlb_addr;

92     target_phys_addr_t addend;

93     void *retaddr;

94

95     /* test if there is match for unaligned or IO access */

96     /* XXX: could done more in memory macro in a non portable way */

97     index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);

98  redo:

99     tlb_addr = env->tlb_table[mmu_idx][index].ADDR_READ;

100     if ((addr & TARGET_PAGE_MASK) == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {

101         if (tlb_addr & ~TARGET_PAGE_MASK) {

102             /* IO access */

103             if ((addr & (DATA_SIZE - 1)) != 0)

104                 goto do_unaligned_access;

105             retaddr = GETPC();

106             addend = env->iotlb[mmu_idx][index];

107             res = glue(io_read, SUFFIX)(addend, addr, retaddr);

108         } else if (((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1) >= TARGET_PAGE_SIZE) {

109             /* slow unaligned access (it spans two pages or IO) */

110         do_unaligned_access:

111             retaddr = GETPC();

112 #ifdef ALIGNED_ONLY

113             do_unaligned_access(addr, READ_ACCESS_TYPE, mmu_idx, retaddr);

114 #endif

115             res = glue(glue(slow_ld, SUFFIX), MMUSUFFIX)(addr,

116                                                          mmu_idx, retaddr);

117         } else {

118             /* unaligned/aligned access in the same page */

119 #ifdef ALIGNED_ONLY

120             if ((addr & (DATA_SIZE - 1)) != 0) {

121                 retaddr = GETPC();

122                 do_unaligned_access(addr, READ_ACCESS_TYPE, mmu_idx, retaddr);

123             }

124 #endif

125             addend = env->tlb_table[mmu_idx][index].addend;

126             res = glue(glue(ld, USUFFIX), _raw)((uint8_t *)(long)(addr+addend));

127         }

128     } else {

129         /* the page is not in the TLB : fill it */

130         retaddr = GETPC();

131 #ifdef ALIGNED_ONLY

132         if ((addr & (DATA_SIZE - 1)) != 0)

133             do_unaligned_access(addr, READ_ACCESS_TYPE, mmu_idx, retaddr);

134 #endif

135         tlb_fill(addr, READ_ACCESS_TYPE, mmu_idx, retaddr);

136         goto redo;

137     }

138     return res;

139 }

This function computes the index into the TLB table and compares the guest virtual address with the address stored in that tlb entry (lines 97-100). If the two addresses match, the guest virtual address hits the tlb entry, and qemu then determines whether this virtual address is an MMIO address or a RAM address. If it is an MMIO address, qemu gets the index of the I/O emulation functions from env->iotlb and calls them (lines 103-117). If it is RAM, qemu adds addend to the guest virtual address to get the host virtual address (lines 118-128). If no tlb entry matches, qemu fetches the entry from the l1_phys_map table and inserts it into the tlb table (line 135).


5. an example


When fetching code from guest memory, the whole code path is as follows:


cpu_exec->tb_find_fast->tb_find_slow->get_phys_addr_code

->(if tlb not match)ldub_code(softmmu_header.h)->__ldl_mmu(softmmu_template.h)

->tlb_fill->cpu_mips_handle_mmu_fault->tlb_set_page->tlb_set_page_exec




qemu internal part 3: memory watchpoint


In qemu there is an amazing feature: the memory watchpoint. It can watch memory accesses, whether reads, writes or both. When the guest os/application touches a memory region watched by qemu, a registered function is called, and you can do whatever you want in this function. The gdb stub in qemu uses it to implement the memory watch command.


The implementation of the memory watchpoint is tricky in qemu. From the last qemu internals article, we know that when emulating a memory access, qemu needs to distinguish normal RAM reads/writes from memory-mapped I/O reads/writes, and that an access to a memory-mapped I/O address is dispatched to the registered I/O emulation functions. Qemu reuses this mechanism to implement the memory watchpoint: when an address watched by qemu is accessed, qemu dispatches the access to the registered memory watch functions, whether the address is a normal guest RAM address or a memory-mapped I/O address! Qemu does all the magic in these memory watch functions.


In the following, I will use an example to explain the whole memory watch process in qemu.


80103c60 :

80103c60:       00801021        move    v0,a0

80103c64 <__copy_user>:

80103c64:       2cca0004        sltiu   t2,a2,4

80103c68:       30890003        andi    t1,a0,0x3

80103c6c:       15400068        bnez    t2,80103e10 <__copy_user+0x1ac>

80103c70:       30a80003        andi    t0,a1,0x3

80103c74:       1520003d        bnez    t1,80103d6c <__copy_user+0x108>

80103c78:       00000000        nop

80103c7c:       15000046        bnez    t0,80103d98 <__copy_user+0x134>

80103c80:       00064142        srl     t0,a2,0x5

80103c84:       11000017        beqz    t0,80103ce4 <__copy_user+0x80>

80103c88:       30d8001f        andi    t8,a2,0x1f

80103c8c:       00000000        nop

80103c90:       8ca80000        lw      t0,0(a1)

These asm lines are objdumped from a linux 2.6.30 kernel built for mips malta. Assume I want to watch memory accesses to virtual address 0x804cd000 (swapper_pg_dir in the linux kernel).


First I insert the watchpoint into the cpu.


cpu_watchpoint_insert(env, 0x804cd000, 4, BP_GDB | BP_MEM_ACCESS,

                        NULL);

Then I register a vm state change callback function.


qemu_add_vm_change_state_handler(spy_vm_state_change, NULL);
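A minimal sketch of what such a handler might look like; the body is hypothetical and the 0.10-era signature of void (*)(void *opaque, int running, int reason) is an assumption, so check sysemu.h/vl.c in your tree:

/* hypothetical vm state change hook registered above */
static void spy_vm_state_change(void *opaque, int running, int reason)
{
    if (!running) {
        /* vm_stop(EXCP_DEBUG) landed here after the watchpoint fired:
         * inspect guest state, log, or resume with vm_start() */
        printf("vm stopped, reason %d\n", reason);
    }
}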

If register a1 = 0x804cd000, the guest linux kernel touches the watched memory region when pc is 0x80103c90, and qemu dispatches this access to the registered memory watch function even though it is a normal guest RAM access. The memory watch functions in qemu live in the arrays watch_mem_read/watch_mem_write.


exec.c


2649 static CPUReadMemoryFunc *watch_mem_read[3] = {

2650     watch_mem_readb,

2651     watch_mem_readw,

2652     watch_mem_readl,

2653 };

2654

2655 static CPUWriteMemoryFunc *watch_mem_write[3] = {

2656     watch_mem_writeb,

2657     watch_mem_writew,

2658     watch_mem_writel,

2659 };

Function watch_mem_readl calls check_watchpoint first.


exec.c


2622 static uint32_t watch_mem_readl(void *opaque, target_phys_addr_t addr)

2623 {

2624     check_watchpoint(addr & ~TARGET_PAGE_MASK, ~0x3, BP_MEM_READ);

2625     return ldl_phys(addr);

2626 }


2563 static void check_watchpoint(int offset, int len_mask, int flags)

2564 {

2565     CPUState *env = cpu_single_env;

2566     target_ulong pc, cs_base;

2567     TranslationBlock *tb;

2568     target_ulong vaddr;

2569     CPUWatchpoint *wp;

2570     int cpu_flags;

2571

2572     if (env->watchpoint_hit) {

2573         /* We re-entered the check after replacing the TB. Now raise

2574          * the debug interrupt so that is will trigger after the

2575          * current instruction. */

2576         cpu_interrupt(env, CPU_INTERRUPT_DEBUG);

2577         return;

2578     }

2579     vaddr = (env->mem_io_vaddr & TARGET_PAGE_MASK) + offset;

2580     TAILQ_FOREACH(wp, &env->watchpoints, entry) {

2581         if ((vaddr == (wp->vaddr & len_mask) ||

2582              (vaddr & wp->len_mask) == wp->vaddr) && (wp->flags & flags)) {

2583             wp->flags |= BP_WATCHPOINT_HIT;

2584             if (!env->watchpoint_hit) {

2585                 env->watchpoint_hit = wp;

2586                 tb = tb_find_pc(env->mem_io_pc);

2587                 if (!tb) {

2588                     cpu_abort(env, "check_watchpoint: could not find TB for "

2589                               "pc=%p", (void *)env->mem_io_pc);

2590                 }

2591                 cpu_restore_state(tb, env, env->mem_io_pc, NULL);

2592                 tb_phys_invalidate(tb, -1);

2593                 if (wp->flags & BP_STOP_BEFORE_ACCESS) {

2594                     env->exception_index = EXCP_DEBUG;

2595                 } else {

2596                     cpu_get_tb_cpu_state(env, &pc, &cs_base, &cpu_flags);

2597                     tb_gen_code(env, pc, cs_base, cpu_flags, 1);

2598                 }

2599                 cpu_resume_from_signal(env, NULL);

2600             }

2601         } else {

2602             wp->flags &= ~BP_WATCHPOINT_HIT;

2603         }

2604     }

2605 }

When check_watchpoint is executed for the first time, env->watchpoint_hit is null. It checks whether the address is a watched address; if so, it sets the BP_WATCHPOINT_HIT flag in wp->flags (line 2583) and sets env->watchpoint_hit to wp. It then finds and invalidates the current translation block (lines 2586-2592). If the BP_STOP_BEFORE_ACCESS flag is not set in wp, qemu retranslates the code starting from the current pc (lines 2596-2597) and resumes guest instruction emulation (line 2599). Function cpu_resume_from_signal jumps back to line 256 in cpu-exec.c and reruns the emulation from the lw instruction (pc=0x80103c90).


cpu-exec.c


255     for(;;) {

256         if (setjmp(env->jmp_env) == 0) {

257             env->current_tb = NULL;

258             /* if an exception is pending, we execute it here */

259             if (env->exception_index >= 0) {

260                 if (env->exception_index >= EXCP_INTERRUPT) {

261                     /* exit request from the cpu execution loop */

262                     ret = env->exception_index;

263                     if (ret == EXCP_DEBUG)

264                         cpu_handle_debug_exception(env);

265                     break;

266                 } else {

Why does qemu need to invalidate the current translation block and regenerate the code? Because this memory access (pc=0x80103c90) is in the middle of a translation block: to rerun this instruction, qemu must regenerate code starting from it (pc=0x80103c90). Moreover, before invalidating the translation block, qemu needs to sync the cpu state back to the guest cpu (cpu_restore_state), because the cpu state in the middle of a translation block differs from the architectural cpu state. Understanding this requires some knowledge of binary translation; if you find it hard to follow, just ignore it.


Now qemu reruns the guest os from pc=0x80103c90. Because the address is a watched memory address, qemu calls watch_mem_readl->check_watchpoint again. This time env->watchpoint_hit is not null (qemu set it in the last call), so check_watchpoint calls cpu_interrupt and returns. watch_mem_readl then calls ldl_phys to fetch the value from guest RAM. The cpu_interrupt call in check_watchpoint sets the CPU_INTERRUPT_DEBUG flag in env->interrupt_request.


Then qemu runs on as if nothing had happened. Because CPU_INTERRUPT_DEBUG has been set in env->interrupt_request, the main cpu emulation loop will return.


cpu-exec.c


355                     if (interrupt_request & CPU_INTERRUPT_DEBUG) {

356                         env->interrupt_request &= ~CPU_INTERRUPT_DEBUG;

357                         env->exception_index = EXCP_DEBUG;

358                         cpu_loop_exit();

359                     }


54 void cpu_loop_exit(void)

55 {

56     /* NOTE: the register at this point must be saved by hand because

57        longjmp restore them */

58     regs_to_env();

59     longjmp(env->jmp_env, 1);

60 }

Function cpu_loop_exit longjmps back to line 256 in cpu-exec.c. Because env->exception_index is EXCP_DEBUG, the loop in cpu_exec breaks, and cpu_exec returns to main_loop in vl.c.


vl.c


3800                 ret = cpu_exec(env);


3850             if (unlikely(ret == EXCP_DEBUG)) {

3851                 gdb_set_stop_cpu(cur_cpu);

3852                 vm_stop(EXCP_DEBUG);

3853             }

It calls gdb_set_stop_cpu and then vm_stop to stop qemu. When the vm state changes, qemu calls the callback functions registered via qemu_add_vm_change_state_handler, so the function spy_vm_state_change will be called.


In sum, when the watched memory address is accessed, the memory watch functions are called, and they call check_watchpoint. The first time, check_watchpoint sets env->watchpoint_hit to the current watchpoint and reruns the guest os/application from the current pc. The memory watch functions are then called again and call check_watchpoint once more; this time check_watchpoint just sets the flag in env->interrupt_request, telling the cpu to interrupt the emulation process. qemu then returns to main_loop and stops the vm, and finally calls the registered vm change state callback functions.
