Interrupts in ide.c
Explain in a few sentences why the kernel panicked. You may find it useful to look up the stack trace (the sequence of %eip values printed by panic) in the kernel.asm listing.
After modifying the iderw() function in ide.c, it took four or five runs before the kernel finally panicked:
❯ make qemu
qemu-system-i386 -serial mon:stdio -drive file=fs.img,index=1,media=disk,format=raw -drive file=xv6.img,index=0,media=disk,format=raw -smp 2 -m 512
xv6...
cpu1: starting 1
cpu0: starting 0
lapicid 1: panic: sched locks
80103ca1 80103e12 80105a87 8010575c 801022b7 80100191 801014e5 8010155f 801037c4 8010575f
Execution order, recovered by looking the %eip values up in kernel.asm: trapasm.S: trapret
-> proc.c: forkret
-> fs.c: iinit
-> fs.c: readsb
-> bio.c: bread
-> ide.c: iderw
-> trapasm.S: alltraps
-> trap.c: trap
-> proc.c: yield
-> proc.c: sched
So while the first user process was starting up, a timer interrupt fired during iderw() (presumably after the sti() and before the cli()), which caused a reschedule; since ncli was not 1, sched() panicked with "sched locks".
Interrupts in file.c
Explain in a few sentences why the kernel didn't panic. Why do file_table_lock and ide_lock have different behavior in this respect?
The kernel probably doesn't panic because the window between acquire() and release() of file_table_lock is only a few instructions long, so a timer interrupt almost never lands inside it. ide_lock behaves differently because iderw() holds it across an entire disk request, spinning or sleeping until the disk finishes, which gives a timer interrupt plenty of time to arrive while the lock is held.
xv6 lock implementation
Why does release() clear lk->pcs[0] and lk->cpu before clearing lk->locked? Why not wait until after?
If release() cleared them afterwards, the following could happen: the moment the thread on cpu0 clears lk->locked, cpu1, which has been spinning in acquire(), immediately grabs the lock and starts writing lk->cpu and lk->pcs[0]; meanwhile cpu0 is still writing lk->cpu and lk->pcs[0]. That is a data race on the debug fields. Clearing them before lk->locked keeps those writes inside the critical section.
// Release the lock.
void
release(struct spinlock *lk)
{
  if(!holding(lk))
    panic("release");

  lk->pcs[0] = 0;
  lk->cpu = 0;

  // __sync_synchronize() is a full memory barrier: it keeps the stores
  // above (and any accesses to the data the lock protects) from being
  // reordered after the store below, so no access to the critical
  // section can happen after the lock is released.
  __sync_synchronize();

  // A C assignment (lk->locked = 0) might not compile to a single
  // atomic store, so an explicit movl instruction is used instead.
  asm volatile("movl $0, %0" : "+m" (lk->locked) : );

  popcli();
}