From: Venkatesh Srinivas <me@endeavour.zapto.org>
Board: DFBSD_kernel
Subject: Re: "Benchmarking BSD and Linux"
Posted: (null) (Thu Mar 10 06:35:22 2011)
Relayed via: ptt!crater_reader.dragonflybsd.org!crater.dragonflybsd.org!127.0.0.1.M
On Wed, Mar 9, 2011 at 7:02 PM, Ezequiel R. Aguerre <ezeaguerre@gmail.com>wrote:
> Hi! First time writing here :-)
>
> I would love to see the curve. What do you mean by "much flatter"? Do
> you mean that it looks like O(1) instead of O(n), or something like
> that?
>
> And I can't understand why caching all those data structures is not a
> good idea in general.
>
> Free up memory for other purposes: I think it should be fairly easy to
> free up that memory.
>  * A lot of those structures probably don't have sensitive
> information, so there's no need to zero-fill them (??)
>  * And if you do need to zero-fill pages you could (maybe) keep a
> "cache" of zero-filled pages, so you could reclaim other pages and
> zero-fill them in a background process (see the sketch after this
> list). Yes... it's yet another cache... but other operating systems
> have used this technique before, and I think it worked pretty well.
> * By using something like a SLAB allocator you could probably just
> free entire pages of memory in one simple and quick operation when you
> do need to reclaim memory from the caches.
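A minimal userspace sketch of the pre-zeroed page cache idea above, with a
background thread doing the zeroing. The pool size, the names (zpool_get,
zeroer), and the pthread plumbing are all illustrative, not any kernel's
actual API:

/*
 * Toy model of a "cache of zero-filled pages": a background thread
 * zeroes pages ahead of time so the allocation path rarely pays the
 * memset cost inline.  Userspace sketch only; malloc() stands in for
 * reclaiming a physical page.
 */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096
#define POOL_MAX  64

static void *pool[POOL_MAX];
static int   pool_n;
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  pool_need = PTHREAD_COND_INITIALIZER;

/* Background thread: keep the pool topped up with zeroed pages. */
static void *zeroer(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&pool_lock);
        while (pool_n == POOL_MAX)
            pthread_cond_wait(&pool_need, &pool_lock);
        pthread_mutex_unlock(&pool_lock);

        void *p = malloc(PAGE_SIZE);   /* "reclaim" a page          */
        memset(p, 0, PAGE_SIZE);       /* zero it off the hot path  */

        pthread_mutex_lock(&pool_lock);
        pool[pool_n++] = p;
        pthread_mutex_unlock(&pool_lock);
    }
    return NULL;
}

/* Fast path: hand out a pre-zeroed page if one is ready, otherwise
 * fall back to zeroing inline. */
static void *zpool_get(void)
{
    void *p = NULL;
    pthread_mutex_lock(&pool_lock);
    if (pool_n > 0) {
        p = pool[--pool_n];
        pthread_cond_signal(&pool_need);
    }
    pthread_mutex_unlock(&pool_lock);
    if (p == NULL) {
        p = malloc(PAGE_SIZE);
        memset(p, 0, PAGE_SIZE);
    }
    return p;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, zeroer, NULL);
    char *page = zpool_get();          /* already zero-filled */
    page[0] = 1;
    free(page);
    return 0;
}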
>
> KVM fragmentation: Well, how much? Is it really important? And if it
> is... is it worse than the lack of the caches?
>
> Aren't memory management subsystems based on a SLAB allocator
> basically a bunch of caches? For example, Linux, FreeBSD, Solaris...
> all of them use a SLAB allocator, and all of them have good
> performance. I don't really know MUCH about their internal workings,
> but I can say that Linux even uses the SLAB for the kernel's
> general-purpose allocator (kmalloc). Regarding fragmentation... I
> think that using the SLAB extensively, in fact, reduces fragmentation.
> And this reminds me that FreeBSD introduced the slab allocator in
> FreeBSD 5.0 (the version benchmarked here is 5.1).
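For reference, a toy, userspace-only illustration of the slab idea being
discussed: objects of one type are carved out of whole pages, so a page can
be handed back in a single step once its last object is freed. The struct
layout and names here are made up for the example, not DragonFly's or
Linux's actual slab code:

/*
 * Toy slab: one page holds a header plus fixed-size objects.  A real
 * slab allocator keeps per-object free lists and per-CPU caches; this
 * sketch only shows the "free a whole page at once" property.
 */
#include <stdlib.h>

#define PAGE_SIZE 4096

struct slab {
    size_t obj_size;
    int    in_use;          /* live objects carved from this page */
    int    capacity;
    char  *bump;            /* simple bump pointer into the page  */
};

static struct slab *slab_create(size_t obj_size)
{
    struct slab *s = malloc(PAGE_SIZE);         /* the whole page   */
    s->obj_size = obj_size;
    s->in_use = 0;
    s->capacity = (PAGE_SIZE - sizeof(*s)) / obj_size;
    s->bump = (char *)(s + 1);                  /* objects follow   */
    return s;
}

static void *slab_alloc(struct slab *s)
{
    if (s->in_use == s->capacity)
        return NULL;        /* caller would move on to a new slab  */
    void *obj = s->bump;
    s->bump += s->obj_size;
    s->in_use++;
    return obj;
}

/* When the last object goes away, the page (header and all of its
 * objects) is released in one cheap operation. */
static void slab_free(struct slab *s, void *obj)
{
    (void)obj;              /* a real slab would recycle the slot  */
    if (--s->in_use == 0)
        free(s);
}

int main(void)
{
    struct slab *s = slab_create(64);
    void *a = slab_alloc(s);
    void *b = slab_alloc(s);
    slab_free(s, a);
    slab_free(s, b);        /* last free returns the whole page    */
    return 0;
}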
>
> So, I can't understand why caching those data structures is a bad
> idea for a production system.
>
> Have a nice day! :-)
>
> --
> Ezequiel R. Aguerre
>
Hi!
There's a lot to reply to here, so I'll save it for tomorrow when I'm more
awake :); the questions are very good, though.
But graphs:
http://leaf.dragonflybsd.org/~vsrinivas/forkpy.png : 2/10 kernel on test29
(Dillon's Phenom II, 64-bit)
http://leaf.dragonflybsd.org/~vsrinivas/forkp.png : yesterday, with the
thread and thread caches at 3000.
Scaled the same way.
-- vs
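
The exact benchmark behind the graphs isn't included in this thread; for
anyone wanting to reproduce the general shape, a fork microbenchmark along
these lines (the iteration count and timing method are assumptions, not the
script actually used) would do:

/*
 * Time N fork/exit/wait cycles and report forks per second.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int n = argc > 1 ? atoi(argv[1]) : 10000;
    struct timeval start, end;

    gettimeofday(&start, NULL);
    for (int i = 0; i < n; i++) {
        pid_t pid = fork();
        if (pid == 0)
            _exit(0);               /* child exits immediately      */
        if (pid > 0)
            waitpid(pid, NULL, 0);  /* reap before the next fork    */
    }
    gettimeofday(&end, NULL);

    double secs = (end.tv_sec - start.tv_sec) +
                  (end.tv_usec - start.tv_usec) / 1e6;
    printf("%d forks in %.3f s (%.0f forks/s)\n", n, secs, n / secs);
    return 0;
}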