[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [E-devel] cvs, servers and stuff.

On Tue, 15 Aug 2006 16:55:50 +0100
Shish <shish@shish.is-a-geek.net> wrote:

> In real life disks will be involved, it'd probably be good to take
> them into account. (I'd recommend doing both, to see what effect the
> > I'll grab it 100 times with my laptop
> I was thinking a more appropriate benchmark would be to see how many
> parallel checkouts can be done at once before performance starts
> dropping
>     -- Shish

The problem is that I can't really limit the connection speed on a per-process basis. 

Basically, when I try to run several checkouts simultaneously, the HDD pulls the speed of the whole run down and you have no way of telling which SCM is faster. 

They would probably all finish at once, or close to that, because they're limited by the speed of the device. 
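To make the point concrete, here is a minimal sketch of the kind of parallel run being described: N copies of the same command launched at once and timed as a batch. When the command is disk-bound, the whole batch finishes at roughly the same moment, so the total wall time tells you about the disk, not about the SCM. The function name and structure are my own illustration, not the author's actual script.

```shell
# parallel_bench CMD N: run CMD N times concurrently, print total wall-clock seconds.
# A hypothetical sketch -- substitute the real checkout command for CMD.
parallel_bench() {
    cmd="$1"
    n="$2"
    start=$(date +%s)
    i=1
    while [ "$i" -le "$n" ]; do
        sh -c "$cmd" >/dev/null 2>&1 &   # launch in the background
        i=$((i + 1))
    done
    wait                                  # all jobs share the same disk, so they
                                          # tend to finish together
    end=$(date +%s)
    echo $((end - start))
}
```

Comparing the output for, say, N=1 versus N=8 shows whether throughput is limited by the SCM or by the device underneath it.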

Now I'd love to be able to run multiple threads simultaneously against tmpfs, but this is a big repository and I haven't got the RAM to duplicate it that many times.

This benchmark is, well, a benchmark, so take it with a big grain of salt.

But I tend to believe that, at least in terms of CPU, the load on the server is the same whether one client is churning away at 4MB/s or a lot of smaller ones are doing the same thing in aggregate.

It's really memory that is affected, but I understood that wasn't really the issue.

The script is almost done; all that's left is to set up the git repository.

I have a problem because I didn't get all of the cvs (just e17), and I no longer have the history information (you can't keep it unless you use a mechanism like rsync). 

I hope this isn't an issue. I will post the script along with the results, and I encourage anyone with enough time on their hands to run the test on their own machines/network.
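For anyone who wants to try something similar before the script is posted, a rough sketch of the shape such a harness might take is below. The helper name is hypothetical and the commented-out repository URLs are placeholders, not the project's real servers.

```shell
# time_checkout LABEL CMD: run CMD once and report wall-clock seconds.
# A hypothetical helper, not the author's actual script.
time_checkout() {
    label="$1"
    cmd="$2"
    start=$(date +%s)
    sh -c "$cmd" >/dev/null 2>&1
    end=$(date +%s)
    echo "$label: $((end - start))s"
}

# Usage with placeholder URLs -- point these at a real mirror before running:
# time_checkout cvs "cvs -d :pserver:anonymous@cvs.example.org:/cvsroot co e17"
# time_checkout git "git clone git://git.example.org/e17.git"
```

Running each checkout several times and discarding the first (cold-cache) run gives a fairer comparison between the SCMs.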