Re: [E-devel] cvs, servers and stuff.
On Wed, 16 Aug 2006 08:07:16 +0900
Carsten Haitzler (The Rasterman) <firstname.lastname@example.org> wrote:
> actually - i think we need to know how this works WITH data on disk - why? some
> scm's may invoke much more disk IO than others and thus bottleneck at the disk
> earlier than others. we need to know. unless you mean save on the client side -
> then thats fine. but its server-side we really care about here - remember :)
Well ... it was client side, but it didn't work as I couldn't find a way to send the files to the bit bucket with either svn or git.
> word seems to have it that git is "da shitsnizzle" when it comes to performance
> - but i am going to want to see the numbers. how many clients can connect and
> checkout and/or update and how long does it take vs. the load on the server etc.
Well, I officially declare my numbers crap. I hold my laptop responsible for this mess, and I don't really trust a single number here; I'll explain why in a minute.
Here are the numbers, but as I said, I really don't think they should be considered useful. Consider this to be FUD.
- CPU Load: Max
- Mem Load: 45%
- Checkout Time: 296.374s
- CPU Load: 20% (it was 20% in the last test as well, I believe; all processes seem to share the same CPU usage.)
- Mem Load: 40%
- Checkout Time: 658.576s
- CPU Load: 70-90%
- Mem Load: 60%
- Checkout Time: 874.618s
GIT (git protocol):
- CPU Load: Max
- Mem Load: 10-15%
- Checkout Time: 2345.243s
GIT (http):
- CPU Load: < 5%
- Mem Load: 10-12%
- Checkout Time: A lot, from what I could see, it was cloning at a very low rate.
I used actual disk access on both the client and the server, Apache2 (mpm-worker), and SVN with an FSFS backend; that's about it. Listing the machines' specs would miss the point: they simply aren't any good for this kind of "benchmark".
It looks really suspicious to me that the times seem to increase linearly across the tested SCMs, even though this wasn't the case in the first test.
I also often saw network traffic drop to 0 because the laptop's HDD couldn't take the heat anymore.
In addition, the version of the script I tested with did not pause after one SCM finished; it moved straight on to the next one, which I don't think was a good idea.
Now, Git over http was excellent in terms of CPU and memory usage, but I'm not really sure why it would never go past 1.5 MB/s. The clone would probably have taken me an hour or so, and that's on a LAN, which is pretty much unacceptable.
But here's the script. Feel free to change the number of threads at the top of the script (it's currently 20), but please note that you need about 180 MB of free disk space per thread.
Change the lines near the top to specify how the checkout commands should be run in your environment. If you want to add another test but lack the Ruby know-how, just tell me what you need and I'll patch it in for you.
Finally, make a new directory, copy the script there, and run 'ruby scm_benchmark.rb'. That's it.
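For readers who just want the shape of the thing, the core of such a script looks roughly like this. This is a hedged sketch, not the actual scm_benchmark.rb; the thread count, command strings, repository URLs, and directory names are all assumptions you would edit for your own setup:

```ruby
require 'benchmark'

# Number of concurrent clients to simulate (assumed default: 20).
THREADS = 20

# Checkout/clone commands per SCM -- the URLs here are placeholders.
COMMANDS = {
  'svn' => 'svn checkout http://server/repo',
  'git' => 'git clone git://server/repo.git',
}

# Run `threads` concurrent checkouts of one SCM, each into its own
# per-thread directory, and return the wall-clock time for the batch.
def run_benchmark(name, command, threads)
  elapsed = Benchmark.realtime do
    (1..threads).map do |i|
      Thread.new do
        dir = "#{name}_client_#{i}"
        Dir.mkdir(dir) unless File.exist?(dir)
        # Silence the SCM's own output so only the timings are printed.
        system("cd #{dir} && #{command} > /dev/null 2>&1")
      end
    end.each(&:join)
  end
  puts "#{name}: #{threads} clients, %.3fs" % elapsed
  elapsed
end
```

A driver would then call run_benchmark once per entry in COMMANDS; since all threads hit the same server at once, the batch wall-clock time is what approximates the server-side load described above.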
So ultimately I urge someone with more patience and two solid machines to give my little test a spin. Please don't base any decision on my data. Pretty please! And Carsten, if this doesn't turn you on, I'm sorry ;)