From 7241838b4cecefb32bad4698e748fc31d008d94d Mon Sep 17 00:00:00 2001
From: Frans Kaashoek
Date: Tue, 20 Aug 2019 20:23:18 -0400
Subject: Move labs into 6.828 repo.

The lab text isn't dependent on specific xv6 code. Lab submission
instructions etc. are likely going to be more MIT 6.828 specific.
---
 labs/lock.html | 148 --------------------------------------------------
 1 file changed, 148 deletions(-)
 delete mode 100644 labs/lock.html

diff --git a/labs/lock.html b/labs/lock.html
deleted file mode 100644
index 707d6c4..0000000
--- a/labs/lock.html
+++ /dev/null
@@ -1,148 +0,0 @@

Lab: locks


In this lab you will try to avoid lock contention for certain
workloads.

Lock contention


The program user/kalloctest stresses xv6's memory allocator: three
processes grow and shrink their address spaces, which results in
many calls to kalloc and kfree,
respectively. kalloc and kfree
obtain kmem.lock. To see if there is lock contention for
kmem.lock, replace the call to acquire
in kalloc with the following code:

    while(!tryacquire(&kmem.lock)) {
      printf("!");
    }

tryacquire tries to acquire kmem.lock: if the
lock is taken, it returns false (0); otherwise, it returns true (1)
with the lock acquired. Your first job is to
implement tryacquire in kernel/spinlock.c.

A few hints:


Run usertests to see if you didn't break anything. Note that
usertests never prints "!"; there is never contention
for kmem.lock. The caller is always able to immediately
acquire the lock and never has to wait while some other process
holds the lock.

Now run kalloctest. You should see quite a number of "!"s on the
console. kalloctest causes many processes to contend on
kmem.lock. This lock contention is a bit artificial,
because qemu simulates only 3 processors, but it is likely that
on real hardware there would be contention too.

Removing lock contention


The root cause of lock contention in kalloctest is that there is a
single free list, protected by a single lock. To remove lock
contention, you will have to redesign the memory allocator to avoid
a single lock and list. The basic idea is to maintain a free list
per CPU, each list with its own lock. Allocations and frees on each
CPU can run in parallel, because each CPU will operate on a
different list.

The main challenge will be to deal with the case in which one CPU
runs out of memory while another CPU still has free memory; in that
case, the first CPU must "steal" part of the other CPU's free list.
Stealing may introduce lock contention, but that may be acceptable
because it should happen infrequently.

Your job is to implement per-CPU freelists and stealing when one
CPU is out of memory. Run kalloctest to see if your
implementation has removed lock contention.

Some hints:


Run usertests to see if you didn't break anything.

More scalable bcache lookup


Several processes reading different files repeatedly will
bottleneck in the buffer cache, bcache, in bio.c. Replace the
acquire in bget with

    while(!tryacquire(&bcache.lock)) {
      printf("!");
    }

and run test0 from bcachetest; you will see "!"s.

Modify bget so that a lookup for a buffer that is in the
bcache doesn't need to acquire bcache.lock. This is
trickier than the kalloc assignment, because bcache buffers are truly
shared among processes. You must maintain the invariant that a
buffer is cached at most once in memory.

There are several races that bcache.lock protects
against, including:


A challenge is testing whether your code is still correct. One way
to do so is to artificially delay certain operations
using sleepticks. test1 trashes the buffer cache
and exercises more code paths.

Here are some hints:


Check that your implementation has less contention
on test0.

Make sure your implementation passes bcachetest and usertests.

Optional:
