Re: [E-devel] cvs, servers and stuff.
On Tue, 15 Aug 2006 19:11:57 +0300 Eugen Minciu <email@example.com> babbled:
> On Tue, 15 Aug 2006 16:55:50 +0100
> Shish <firstname.lastname@example.org> wrote:
> > In real life disks will be involved, it'd probably be good to take
> > them into account. (I'd recommend doing both, to see what effect the
> > disks have.)
> > > I'll grab it 100 times with my laptop
> > I was thinking a more appropriate benchmark would be to see how many
> > parallel checkouts can be done at once before performance starts
> > dropping.
> > -- Shish
> The problem comes from the fact that I can't really limit the connection
> speed in a process-based way.
> Basically, when I try to run X threads simultaneously, the hdd will pull the
> speed of the entire process down, and you have basically no way of telling
> which SCM is faster.
in a real-life setup this will happen too. some scms will hit the disk harder
than others - this is a crucial measurement :)
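a minimal sketch of the parallel-checkout measurement shish suggested - note
that `co_one` is a hypothetical placeholder, not a command from this thread;
it just sleeps so the skeleton runs anywhere, and you'd swap in the real
checkout command for the scm under test:

```shell
#!/bin/sh
# Run N checkouts in parallel and time the whole batch.
# "co_one" is a placeholder for the real checkout command
# (cvs co, git clone, ...); substitute your own.
N=${1:-4}

co_one() {
    # replace with e.g.: cvs -d "$CVSROOT" co e17
    sleep 1
}

start=$(date +%s)
i=0
while [ "$i" -lt "$N" ]; do
    co_one &
    i=$((i + 1))
done
wait
end=$(date +%s)
elapsed=$((end - start))
echo "$N parallel checkouts took ${elapsed}s"
```

ramp N up (4, 8, 16, ...) and watch where the total time stops scaling
roughly linearly - that knee is where the server starts to saturate.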
> They would probably all finish at once, or close to that, because they're
> limited by the speed of the device.
> Now I'd love to be able to run multiple threads simultaneously with tmpfs but
> this is a big repository and I haven't got the RAM to duplicate it that much.
> This benchmark is, well, a benchmark, so a big ball of salt should be carried
> But I tend to believe that at least in terms of CPU, the load on the server
> is equivalent if you have one client churning away at 4MB/s or a lot of
> smaller ones doing the same thing.
nb - when i said load - i meant the whole kit & kaboodle. ie cpu, memory
footprint, context switch overhead, IO overhead, etc.
an scm that accesses only 100mb of disk data will only need 100mb of ram cache
to remain "fast". an scm that accesses 1000mb of "unique" disk data will eat up
much more cache space and thus degrade performance faster as repositories get
bigger. if it needs more malloc()ed ram to run - then that is less ram for
cache - etc. etc.
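a quick way to see the cache effect described above is to time a cold run
against a warm one. `CO` here is a hypothetical stand-in for the real
checkout command (the echo default just keeps the skeleton runnable):

```shell
#!/bin/sh
# Cold-vs-warm timing sketch. CO is a stand-in for the real checkout
# command (e.g. "cvs co e17"); override it in the environment.
CO=${CO:-"echo simulated checkout"}

sync
# Dropping the page cache makes the next run genuinely cold.
# Needs root, so it is left commented out here:
# echo 3 > /proc/sys/vm/drop_caches

t0=$(date +%s)
sh -c "$CO" > /dev/null    # cold run: pays the full disk-read cost
t1=$(date +%s)
sh -c "$CO" > /dev/null    # warm run: served from whatever fits in ram cache
t2=$(date +%s)

echo "cold: $((t1 - t0))s  warm: $((t2 - t1))s"
```

the gap between the two numbers is roughly the disk cost the scm's
working-set size is imposing; the bigger its "unique" data, the slower the
warm run converges back toward the cold one as the repository grows.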
we need to know the overall performance as it would be on a long-running live
system (ie data is already loaded and cached after some initial
checkouts/updates) and it is humming along and now may get N clients wanting to
checkout and/or update. yes your network will be a limiting factor if you can
push 100mbit of traffic without a problem.
now, if you can, try "slowing down" the server: use speedstepping or cpu
throttling to artificially make it a slower system, use hdparm to slow the
disk down, and boot with a mem=X option to limit ram (and try it with
different levels). then, relatively speaking, the network will seem to get
faster. this of course is if the server can handle pushing data at 100mbit to
many clients without problems and the network is our limiter... :)
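as concrete commands, the knobs above look roughly like this - all privileged
and hardware-specific, so this is a fragment to adapt, not a script to run
as-is (device names and values are examples, not from this thread):

```shell
# slow the disk: drop the drive to a slower DMA mode (example IDE device)
hdparm -X udma2 /dev/hda

# cpu throttling via cpufreq: pin cpu0 to its power-saving governor
echo powersave > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# limit visible ram: pass mem=X on the kernel command line at boot,
# e.g. in the bootloader config:
#   kernel /vmlinuz root=/dev/hda1 mem=256M
```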
> It's really the memory that is affected but I understood this wasn't really
> the issue.
> The script is almost done, all I have to setup now is the git repository and
> I'm done.
> I have a problem because I didn't get the whole cvs repository (just e17) and
> I don't have the history information anymore (you can't keep it unless you
> use a mechanism like rsync).
> I hope this isn't an issue but I will post the script along with the results
> and I encourage anyone with enough time on their hands to run the test on
> their own machines/network.
> enlightenment-devel mailing list
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler) email@example.com
Tokyo, Japan (東京 日本)