Re: [E-devel] cvs, servers and stuff.



On Tuesday, 15 August 2006, at 22:12:47 (+0300),
Eugen Minciu wrote:

> CVS: 
> - Average checkout time: 41.843s
> - CPU used: Constantly around 70% (something like 60-80%)
> - Mem used: 2-3%
> 
> SVN (svnserve):
> - Average checkout time: 27.921s
> - CPU used: 50-90%
> - Mem used: 2%
> 
> SVN (http):
> - Average checkout time: 61.239s
> - CPU used: 70-90%
> - Mem used: 10-12% *
> 
> Git/HTTP:
> - Average checkout time: 98.962s
> - CPU used: < 4%
> - Mem used: 10-12% *

Finally!  Some useful data.

> So this looks very weird. And at the end of the day it doesn't
> really prove too much. SVN and Git (particularly Git) are really
> gentle on the server's resources at the expense of higher download
> times.

I agree regarding Git, but not SVN.  60-80% averages out to around
70%, and so does 50-90%, so svnserve is not really any easier on the
server than CVS, and SVN+HTTP is significantly worse (as I predicted).

Was this with the BDB or FSFS backend?
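
If you are not sure, a quick server-side check along these lines
should tell you.  The repository path is a placeholder, and it assumes
the repository has a db/fs-type file (normal for anything created with
Subversion 1.1 or later):

    # Rough check: a Subversion repository records its backend in
    # db/fs-type ("fsfs" or "bdb").  REPO_PATH is a placeholder for
    # the server-side repository path.
    REPO_PATH = "/var/svn/e"
    print(open(REPO_PATH + "/db/fs-type").read().strip())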

Can you try multiple invocations of the script from the same host?
Like say 20 or 30 simultaneous checkouts?
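
Something along these lines would do it -- a rough Python sketch with
a placeholder repository URL; swap in whichever checkout command and
server/protocol you're actually measuring:

    #!/usr/bin/env python
    # Rough sketch: fire off N simultaneous checkouts and time the batch.
    # The URL below is a placeholder, not the real repository.
    import os, subprocess, tempfile, time

    N = 20                                         # simultaneous clients
    CMD = ["svn", "checkout", "svn://example.org/e/trunk"]

    devnull = open(os.devnull, "w")
    start = time.time()
    procs = []
    for i in range(N):
        workdir = tempfile.mkdtemp(prefix="co%d-" % i)
        procs.append(subprocess.Popen(CMD + [workdir],
                                      stdout=devnull, stderr=devnull))
    for p in procs:
        p.wait()
    print("%d checkouts finished in %.1fs" % (N, time.time() - start))

Watching top or vmstat on the server while that runs would show how
each system holds up under concurrent load rather than a single
client.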

I'd also be interested in comparisons of incremental updates.  As I
understand it, Git clones are complete repositories, not just
checkouts, so a fresh-checkout-to-fresh-checkout comparison is unfair:
Git is downloading a crapload more data (the full history and such).
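
For the incremental case, a hypothetical sketch like this would do.
It assumes you already have a CVS working copy, an SVN working copy,
and a Git clone sitting in the placeholder directories, and it just
times an update in each:

    #!/usr/bin/env python
    # Hypothetical sketch: time an incremental update in existing
    # working copies/clones instead of a fresh checkout.  The directory
    # names are placeholders for wherever the initial checkouts live.
    import os, subprocess, time

    UPDATES = [
        ("CVS", "e-cvs-wc",    ["cvs", "-q", "update", "-d"]),
        ("SVN", "e-svn-wc",    ["svn", "update"]),
        ("Git", "e-git-clone", ["git", "pull"]),
    ]

    devnull = open(os.devnull, "w")
    for name, wc, cmd in UPDATES:
        start = time.time()
        subprocess.call(cmd, cwd=wc, stdout=devnull, stderr=devnull)
        print("%s incremental update: %.1fs" % (name, time.time() - start))

That would separate the one-time cost of Git's full-history clone from
the day-to-day cost of keeping a tree current.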

Thanks!
Michael

-- 
Michael Jennings (a.k.a. KainX)  http://www.kainx.org/  <mej@kainx.org>
n + 1, Inc., http://www.nplus1.net/       Author, Eterm (www.eterm.org)
-----------------------------------------------------------------------
 "Nerds make the best lovers.  That's why I'm in Speed School." 
                                                       -- Angela Smith