> But, consider where some of these problems reside. From ufs/fs.h:
>
> /*
>  * Super block for an FFS file system.
>  */
> struct fs {
> ...
>         int32_t  fs_size;       /* number of blocks in fs */
>         int32_t  fs_dsize;      /* number of data blocks in fs */
>         int32_t  fs_ncg;        /* number of cylinder groups */
>         int32_t  fs_bsize;      /* size of basic blocks in fs */
>         int32_t  fs_fsize;      /* size of frag blocks in fs */
>         int32_t  fs_frag;       /* number of frags in a block in fs */
> ...
>
> None of these can ever be below zero, but they all must be signed for
> the sake of compatibility.

Without having looked at this stuff beyond newfs, it sounds to me like
the fundamental problem is the lack of a layer of abstraction between
the on-disk format and the format used for internal processing in the
tools/kernel.

In my (not so informed, given my lack of experience with kernel/OS
programming) opinion, the proper fix would be to abstract this whole
thing away and deal with data types and structures that make logical
and practical sense (e.g., uint64_t[1] to remain future-proof). One
would then have properly defined and *explicit* bounds checking, where
it can be easily verified what is allowed and what isn't. The limits
of the underlying structure (on-disk, for example) then become an
implementation detail rather than a design issue that prevents proper
writing of utilities.

[1] Or some generic size typedef; but then one runs into a whole set
of problems with printf() and such not being independent of the actual
types.

-- 
/ Peter Schuller, InfiDyne Technologies HB

PGP userID: 0xE9758B7D or 'Peter Schuller <peter.schuller@infidyne.com>'
Key retrieval: Send an E-Mail to getpgpkey@scode.org
E-Mail: peter.schuller@infidyne.com Web: http://www.scode.org