On Sun, May 02, 2004 at 03:07:52PM -0700, Matthew Dillon wrote:
> Well, I guess I can't convince people to change the counting scheme.
> Oh well, I tried :-)  I guess you can go ahead and put it on a commit
> track.  You've certainly done the work!

Matt, consider the following lock situation:

--rrx-xrr-rrr-

You want to acquire:
-----r--------  ==> one range needed
-----x--------  ==> no range needed
-----rr-------  ==> one range can be freed (not absolutely sure I got this case)
---xxxxx------  ==> two ranges can be freed
-----------w--  ==> two additional locks needed

Especially the fourth case is difficult to handle if you want to merge
allocation and refcounting and still avoid a second run over the list.

> I noticed one issue with the code, and that is the fact that since the
> malloc() can block and the locking operations appear to be called
> without a locked vnode (which is how we want to call them), there is
> a race where the lock list can get corrupted if malloc() blocks.

Thanks, yes.  I'll fix that in the second iteration; the patch is complex
enough for now and the problem isn't new.  How do you want to protect this
in the long run?  IMO locking either the inode or the lockf is necessary,
but this can of course be a simple token.

> I think the solution is to pre-allocate two lock structures at the
> beginning and then free whichever ones we wind up not using at
> the end.  If you are worried about performance, you can implement a
> simple linked list in the per-cpu globaldata to cache free structures.
> For a first implementation, though, I'd simply use malloc().

I thought about adding a reference count to lockf and incrementing it on
all changes, but your solution is cleaner :)

Joerg

> -Matt
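
A minimal userland sketch of the pre-allocation pattern Matt suggests,
assuming a simplified lockf-like range list; the names (lockf_range,
lf_setlock_sketch) are hypothetical stand-ins, not the actual kernel code:

    /*
     * Pre-allocate the worst-case number of range structures (two) before
     * touching the lock list, so no allocation can block in the middle of
     * the list manipulation.  Illustrative only.
     */
    #include <sys/types.h>
    #include <stdlib.h>

    struct lockf_range {
            off_t                   lf_start;   /* first byte of the range */
            off_t                   lf_end;     /* last byte of the range */
            int                     lf_type;    /* read or write lock */
            struct lockf_range      *lf_next;   /* singly linked lock list */
    };

    static int
    lf_setlock_sketch(struct lockf_range **head, off_t start, off_t end,
                      int type)
    {
            /* Worst case: splitting an existing range needs two new entries. */
            struct lockf_range *r1 = malloc(sizeof(*r1));
            struct lockf_range *r2 = malloc(sizeof(*r2));

            if (r1 == NULL || r2 == NULL) {
                    free(r1);
                    free(r2);
                    return (-1);
            }

            /*
             * ... walk *head here and insert/split/merge ranges, consuming
             * r1 and/or r2 as needed and setting the pointer to NULL once a
             * structure has been linked into the list ...
             */

            /* Free whichever pre-allocated structures were not consumed. */
            free(r1);
            free(r2);
            return (0);
    }

Because both allocations happen before the list walk, the walk itself never
blocks, which is the property needed to close the race Matt points out; the
per-cpu globaldata free list he mentions would simply replace the malloc()
and free() calls in this sketch.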