path: root/entryother.S
author     Frans Kaashoek <[email protected]>   2018-09-23 08:24:42 -0400
committer  Frans Kaashoek <[email protected]>   2018-09-23 08:35:30 -0400
commit     ab0db651af6f1ffa8fe96909ce16ae314d65c3fb (patch)
tree       c429f8ee36fa7da1e25f564a160b031613ca05e9 /entryother.S
parent     b818915f793cd20c5d1e24f668534a9d690f3cc8 (diff)
Checkpoint port of xv6 to x86-64. Passed usertests on 2 processors a few times.
The x86-64 doesn't just add two levels to the page tables to support 64-bit addresses; it is a different processor. For example, calling conventions, system calls, and segmentation are different from 32-bit x86. Segmentation is basically gone, but gs/fs in combination with MSRs can be used to hold a per-core pointer. In general, x86-64 is more straightforward than 32-bit x86. The port uses code from sv6 and the xv6 "rsc-amd64" branch.

A summary of the changes is as follows:

- Booting: switch to grub instead of xv6's bootloader (pass -kernel to qemu), because xv6's boot loader doesn't understand 64-bit ELF files. And, we don't care anymore about booting.
- Makefile: use the -m64 flag instead of -m32 for gcc; delete the boot loader, xv6.img, bochs, and memfs. For now, don't use -O2, since usertests built with -O2 is bigger than MAXFILE!
- Update gdb.tmpl to be for i386 or x86-64.
- Console/printf: use stdarg.h and treat 64-bit addresses differently from ints (32-bit).
- Update elfhdr to be 64-bit.
- entry.S/entryother.S: add code to switch to 64-bit mode: build a simple page table in 32-bit mode before switching to 64-bit mode, share code for entering the boot processor and the APs, and tweak the boot gdt. The boot gdt is the gdt that the kernel proper also uses. (In 64-bit mode, the gdt/segmentation and the task state mostly disappear.)
- exec.c: fix passing argv (64-bit now instead of 32-bit).
- initcode.c: use syscall instead of int.
- kernel.ld: load the kernel very high, in the top terabyte. 64 bits is a lot of address space!
- proc.c: the initial return is through the new syscall path instead of trapret.
- proc.h: update struct cpu to have some scratch space, since syscall saves less state than int; update struct context to reflect the x86-64 calling conventions.
- swtch: simplify for the x86-64 calling conventions.
- syscall: add fetcharg to handle the x86-64 calling conventions (6 arguments are passed through registers) and fetchaddr to read a 64-bit value from user space; see the first sketch after this list.
- sysfile: update to handle pointers from user space (e.g., sys_exec), which are 64 bits.
- trap.c: no special trap vector for system calls, because x86-64 has a different plan for system calls.
- trapasm: one plan for syscalls and one plan for traps (interrupts and exceptions). On x86-64, the kernel is responsible for switching user/kernel stacks. To do so, xv6 keeps some scratch space in the cpu structure, and uses the MSR GS_KERN_BASE to point to the core's cpu structure (using swapgs); see the second sketch after this list.
- types.h: add uint64, and change pde_t to uint64.
- usertests: exit() when fork fails, which helped in tracking down one of the bugs in the switch from 32-bit to 64-bit.
- vectors: update to make them 64 bits.
- vm.c: use bootgdt in the kernel too, program MSRs for syscalls and core-local state (for swapgs), walk 4 levels in walkpgdir (see the third sketch after this list), add DEVSPACETOP, use the task segment to set the kernel stack for interrupts (but simpler than in 32-bit mode), and add an extra argument to freevm (the size of the user part of the address space) to avoid checking all entries up to KERNBASE (there are MANY TB before the top 1TB).
- x86: update the trapframe to have 64-bit entries, which is what the processor pushes on syscalls and traps. Simplify lgdt and lidt, using struct desctr, which needs the gcc directives packed and aligned; see the fourth sketch after this list.

TODO:
- use int32 instead of int?
- simplify curproc(). xv6 has per-cpu state again, but this time it must have it.
- avoid repetition in walkpgdir.
- fix validateint() in usertests.c.
- fix bugs (e.g., observed a case of entering the kernel with an invalid gs or proc).
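First sketch (syscall): a minimal illustration of what fetcharg and fetchaddr could look like, not the commit's actual code; the trapframe field names, myproc(), and the p->sz bounds check are assumptions following xv6 conventions.

    // Sketch: fetch the n-th syscall argument from the registers the
    // x86-64 convention uses for the first six arguments; the syscall
    // instruction clobbers %rcx, so the fourth argument travels in %r10.
    static uint64
    fetcharg(int n)
    {
      struct trapframe *tf = myproc()->tf;
      switch(n){
      case 0: return tf->rdi;
      case 1: return tf->rsi;
      case 2: return tf->rdx;
      case 3: return tf->r10;
      case 4: return tf->r8;
      case 5: return tf->r9;
      }
      return -1;  // invalid argument index
    }

    // Sketch: read a 64-bit value at user virtual address addr after
    // checking that it lies within the process's address space.
    int
    fetchaddr(uint64 addr, uint64 *ip)
    {
      struct proc *p = myproc();
      if(addr >= p->sz || addr + sizeof(uint64) > p->sz)
        return -1;
      *ip = *(uint64*)addr;  // user memory is mapped while in the kernel
      return 0;
    }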
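Second sketch (trapasm/vm.c): how the per-core pointer could be wired up. The MSR numbers are the architectural IA32_GS_BASE (0xc0000101) and IA32_KERNEL_GS_BASE (0xc0000102); the helper names here (wrmsr, gsinit) are illustrative, and the commit's own names, such as GS_KERN_BASE, may differ.

    #define MSR_GS_BASE        0xc0000101
    #define MSR_KERNEL_GS_BASE 0xc0000102

    // Write a 64-bit value to a model-specific register: the value is
    // split across %eax (low) and %edx (high), with the MSR number in %ecx.
    static inline void
    wrmsr(uint msr, uint64 val)
    {
      asm volatile("wrmsr" : :
                   "c" (msr), "a" ((uint)val), "d" ((uint)(val >> 32)));
    }

    // Per-core setup: point both GS bases at this core's struct cpu.
    // swapgs exchanges the two on kernel entry/exit, so %gs-relative
    // loads always reach the right core's state.
    static void
    gsinit(struct cpu *c)
    {
      wrmsr(MSR_GS_BASE, (uint64)c);
      wrmsr(MSR_KERNEL_GS_BASE, (uint64)c);
    }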
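Third sketch (vm.c): the general shape of a four-level walkpgdir (PML4 -> PDPT -> page directory -> page table). PX, PTE_ADDR, P2V/V2P, kalloc, and the PTE flag names follow xv6 conventions; the commit's actual code may differ.

    // Page-table index of va at a given level: each level resolves 9
    // bits of the virtual address, starting 12 bits up (the page offset).
    #define PX(level, va)  (((uint64)(va) >> (12 + 9*(level))) & 0x1ff)

    static pte_t*
    walkpgdir(pde_t *pml4, const void *va, int alloc)
    {
      pde_t *table = pml4;
      int level;

      for(level = 3; level > 0; level--){
        pde_t *pde = &table[PX(level, va)];
        if(*pde & PTE_P){
          table = (pde_t*)P2V(PTE_ADDR(*pde));
        } else {
          if(!alloc || (table = (pde_t*)kalloc()) == 0)
            return 0;
          memset(table, 0, PGSIZE);
          *pde = V2P(table) | PTE_P | PTE_W | PTE_U;
        }
      }
      return &table[PX(0, va)];  // level 0: the PTE itself
    }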
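Fourth sketch (x86): the struct desctr idea. The pseudo-descriptor that lgdt/lidt load is a 2-byte limit immediately followed by an 8-byte base, so the struct must be packed; the field names here are guesses, not the commit's.

    // 10-byte pseudo-descriptor for lgdt/lidt in long mode. Without
    // packed, gcc would pad 6 bytes after limit and the processor
    // would read the wrong base address.
    struct desctr {
      ushort limit;
      uint64 base;
    } __attribute__((packed, aligned(16)));

    static inline void
    lgdt(void *p, int size)
    {
      struct desctr dtr;

      dtr.limit = size - 1;
      dtr.base = (uint64)p;
      asm volatile("lgdt (%0)" : : "r" (&dtr) : "memory");
    }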
Diffstat (limited to 'entryother.S')
-rw-r--r--   entryother.S   57
1 file changed, 12 insertions, 45 deletions
diff --git a/entryother.S b/entryother.S
index a3b6dc2..3e502f3 100644
--- a/entryother.S
+++ b/entryother.S
@@ -13,11 +13,9 @@
#
# Startothers (in main.c) sends the STARTUPs one at a time.
# It copies this code (start) at 0x7000. It puts the address of
-# a newly allocated per-core stack in start-4,the address of the
-# place to jump to (mpenter) in start-8, and the physical address
+# a newly allocated per-core stack in start-12,the address of the
+# place to jump to (apstart32) in start-4, and the physical address
# of entrypgdir in start-12.
-#
-# This code combines elements of bootasm.S and entry.S.
.code16
.globl start
@@ -41,53 +39,22 @@ start:
# Complete the transition to 32-bit protected mode by using a long jmp
# to reload %cs and %eip. The segment descriptors are set up with no
# translation, so that the mapping is still the identity mapping.
-  ljmpl    $(SEG_KCODE<<3), $(start32)
+  ljmpl    $(KCSEG32), $start32
-//PAGEBREAK!
-.code32 # Tell assembler to generate 32-bit code now.
+.code32
start32:
-  # Set up the protected-mode data segment registers
-  movw    $(SEG_KDATA<<3), %ax    # Our data segment selector
-  movw    %ax, %ds                # -> DS: Data Segment
-  movw    %ax, %es                # -> ES: Extra Segment
-  movw    %ax, %ss                # -> SS: Stack Segment
-  movw    $0, %ax                 # Zero segments not ready for use
-  movw    %ax, %fs                # -> FS
-  movw    %ax, %gs                # -> GS
-
-  # Turn on page size extension for 4Mbyte pages
-  movl    %cr4, %eax
-  orl     $(CR4_PSE), %eax
-  movl    %eax, %cr4
-  # Use entrypgdir as our initial page table
-  movl    (start-12), %eax
-  movl    %eax, %cr3
-  # Turn on paging.
-  movl    %cr0, %eax
-  orl     $(CR0_PE|CR0_PG|CR0_WP), %eax
-  movl    %eax, %cr0
+  movl    $start-12, %esp
+  movl    start-4, %ecx
+  jmp     *%ecx
-  # Switch to the stack allocated by startothers()
-  movl    (start-4), %esp
-  # Call mpenter()
-  call    *(start-8)
-
-  movw    $0x8a00, %ax
-  movw    %ax, %dx
-  outw    %ax, %dx
-  movw    $0x8ae0, %ax
-  outw    %ax, %dx
-spin:
-  jmp     spin
-
-.p2align 2
+.align 4
gdt:
  SEG_NULLASM
-  SEG_ASM(STA_X|STA_R, 0, 0xffffffff)
-  SEG_ASM(STA_W, 0, 0xffffffff)
-
+  SEG_ASM(0xa, 0, 0xffffffff)
+  SEG_ASM(0x2, 0, 0xffffffff)
+.align 16
gdtdesc:
-  .word   (gdtdesc - gdt - 1)
+  .word   0x17                    # sizeof(gdt)-1
  .long   gdt
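A note on the new gdtdesc: the hard-coded limit 0x17 is sizeof(gdt) - 1, since the table above is three 8-byte descriptors (null, code, data): 3 * 8 - 1 = 23 = 0x17, the same value the deleted (gdtdesc - gdt - 1) expression computed. The raw flag bytes in the SEG_ASM lines also match the old symbolic names from xv6's mmu.h (STA_X = 0x8, STA_R = 0x2, STA_W = 0x2): 0xa is STA_X|STA_R (readable code) and 0x2 is STA_W (writable data), so the descriptors themselves are unchanged.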