
Benchmarker


2006-01-05 10:11:29 AM
delphi75
Hi,
I am writing a benchmarking GUI type of thing to help
people decide what to use where.
A screenshot of what it looks like can be seen at
nntp://forums.borland.com/borland.public.attachments/9850
(obviously with nonsensical dummy data)
The idea is to have something that can handle as many categories
as needed, as well as any number of contesting routines
(procedures and functions), without needing to alter the
GUI code, so that the GUI part will be reusable.
I intend to let it export its data in various formats, including
those suitable for TDataset usage.
My question is: Is this a worthwhile effort --I mean, are you
guys likely to use it?
What other stuff would you want to see in it?
IOW, I need comments.
Cheers,
Adem
 
 

Re:Benchmarker

Hi Adem
Quote
My question is: Is this a worthwhile effort --I mean, are you
guys likely to use it?
No. We have it - sorry.
Regards
Dennis
 

Re:Benchmarker

Hi
The IntToStr B&V is based on Avatar's generic B&V. Robert Lee made a
generic B&V three years ago. My B&V template, or any other B&V, is
very easy to adapt for a new B&V: just correct the function pointer
and do a search-replace on the function name, e.g. Pos -> PosEx. Such
a conversion takes less than five minutes. All validations and
benchmarks have to be rewritten or modified anyway, and that work
takes nearly all the time. I think the value of having a generic B&V
is overrated.
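To give an idea of what such an adaptation involves, here is a
minimal sketch (the type names are illustrative, not from my actual
template): the structural change is the function-pointer type; a
search-replace on the name does the rest.

type
  // Before: the template benchmarks Pos-compatible functions.
  TPosFunction = function(const SubStr, S: string): Integer;

  // After: correct the pointer type to match PosEx (the exact
  // signature of PosEx varies slightly between Delphi versions),
  // then search-replace Pos -> PosEx throughout the unit.
  TPosExFunction = function(const SubStr, S: string;
    Offset: Integer): Integer;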
Avatar's B&V has a very nice enhancement over all of mine but one: it
is easier to register functions. I have done something similar in the
CharPosIEx B&V. I believe the two B&V types are close in quality and
ease of use.
Doesn't a generic B&V have to include function-pointer casts, which
remove the compiler's type checking?
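To illustrate the concern, a minimal sketch, assuming the generic
B&V stores routines as untyped pointers (all names invented):

program CastDemo;
{$APPTYPE CONSOLE}

type
  TPosFunction = function(const SubStr, S: string): Integer;

// Deliberately wrong signature: only one parameter.
function BrokenPos(const SubStr: string): Integer;
begin
  Result := -1;
end;

var
  Routine: Pointer;
begin
  Routine := @BrokenPos;  // untyped pointer, nothing checked
  // The hard cast compiles without complaint; the mismatch only
  // shows up at run time as garbage results or an access violation.
  Writeln(TPosFunction(Routine)('needle', 'haystack'));
end.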
Never trade quality of the final B&V for a few minutes of saved time in the
creation of it.
Best regards
Dennis Kjaer Christensen
 

Re:Benchmarker

Maybe what should be considered is enhancing the current B&V tools to
automate uploading statistics to a server.
That way the whole process of updating the result spreadsheets could
be automated quite easily.
It would not be that difficult to get working, at least in a basic
form.
You would just need to register the B&V type (like CompareMem) and
give it the current version. That way all the processors would be
covered very quickly, because everyone could run the tool and submit
their results to a database on the net. Compatibility with even the
most obscure processor models would be covered. And then there could
be a tool that automatically updates the best function on each
processor platform (an administrator-type task, of course).
Not that we will need more speed on the most curious processors
(Cyrix, Via, Transmeta, etc.); most commonly it will be old Pentiums.
Just to share my point of view...
-Tommi-
"Adem" <XXXX@XXXXX.COM>writes
Quote

Hi,

I am writing a benchmarking GUI type of thing to help
people decide what to use where.

A screenshot of what it looks like can be seen at
nntp://forums.borland.com/borland.public.attachments/9850
(obviously with nonsensical dummy data)

The idea is to have something that can handle as many categories
as needed, as well as any number of contesting routines
(procedures and functions), without needing to alter the
GUI code, so that the GUI part will be reusable.

I intend to let it export its data in various formats, including
those suitable for TDataset usage.

My question is: Is this a worthwhile effort --I mean, are you
guys likely to use it?

What other stuff would you want to see in it?

IOW, I need comments.

Cheers,
Adem
 

Re:Benchmarker

Quote
Maybe what should be considered is enhancing the current B&V tools
to automate uploading statistics to a server.

That way the whole process of updating the result spreadsheets could
be automated quite easily.

It would not be that difficult to get working, at least in a basic
form.
I have been thinking about this, too. Even the benchmark/validation
results that Dennis has been posting these last few days could have
been posted using Indy; this would have saved him quite some work.
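As a rough sketch of what that could look like with Indy (the URL
and field names here are invented, not an existing server):

uses
  SysUtils, Classes, IdHTTP;

// Hypothetical helper: submit one benchmark result via HTTP POST.
procedure SubmitResult(const Challenge, CPU: string; Ticks: Int64);
var
  HTTP: TIdHTTP;
  Params: TStringList;
begin
  HTTP := TIdHTTP.Create(nil);
  Params := TStringList.Create;
  try
    Params.Add('challenge=' + Challenge);
    Params.Add('cpu=' + CPU);
    Params.Add('ticks=' + IntToStr(Ticks));
    HTTP.Post('http://example.org/fastcode/submit', Params);
  finally
    Params.Free;
    HTTP.Free;
  end;
end;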
Quote
You would just need to register the B&V type (like CompareMem) and
give it the current version. That way all the processors would be
covered very quickly, because everyone could run the tool and submit
their results to a database on the net. Compatibility with even the
most obscure processor models would be covered. And then there could
be a tool that automatically updates the best function on each
processor platform (an administrator-type task, of course).

Not that we will need more speed on the most curious processors
(Cyrix, Via, Transmeta, etc.); most commonly it will be old Pentiums.
This would be possible to build into my B&V tool, since it includes
system information anyway.
I think, however, that the main goal for old CPUs should be
validation, since we don't optimize for them. They cannot be included
in the benchmarking, since that should be standardized for each year.
By running each benchmark on different, more or less random, CPUs,
the results would not be stable and reliable enough, and they would
be aimed more at the past than at the future.
--
The Fastcode Project: www.fastcodeproject.org/
 

Re:Benchmarker

Avatar Zondertau writes:
Quote
>Not that we will need more speed on the most curious processors
>(Cyrix, Via, Transmeta, etc.); most commonly it will be old Pentiums.

This would be possible to build into my B&V tool, since it includes
system information anyway.
What I was thinking of is to have each contesting routine
register itself with the following information:
Category (what it is: Pos(), IntToStr(), etc.)
CPUKind
RoutineID
so that the platform the B&V runs on decides which routines
to include.
I have also included information about the author (name, email,
web) as well as license information for the routine.
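Something like this record could carry that information (a sketch;
the field names are just illustrative):

type
  TCPUKind = (cpuGeneric, cpuPentium, cpuP4, cpuAthlonXP,
    cpuAthlon64);

  // Per-routine registration data, as described above.
  TRoutineInfo = record
    Category: string;     // e.g. 'Pos', 'IntToStr'
    CPUKind: TCPUKind;    // the target the routine is tuned for
    RoutineID: Integer;   // unique within its category
    Author: string;
    AuthorEMail: string;
    AuthorWeb: string;
    License: string;
    Code: Pointer;        // the routine itself
  end;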
Once this stuff is in there, the user (say, me) could run the
benchmark on a target machine, pick the best one, and send the
results to a central database somewhere.
Cheers,
Adem
 

Re:Benchmarker

Quote
>>Not that we will need more speed on the most curious processors
>>(Cyrix, Via, Transmeta, etc.); most commonly it will be old
>>Pentiums.
>
>This would be possible to build into my B&V tool, since it includes
>system information anyway.

What I was thinking of is to have each contesting routine
register itself with the following information:

Category (what it is: Pos(), IntToStr(), etc.)
CPUKind
RoutineID

so that the platform the B&V runs on decides which routines
to include.

I have also included information about the author (name, email,
web) as well as license information for the routine.
It is not desirable for the B&V tool to include multiple challenges.
Each challenge (possibly consisting of multiple subchallenges) should
be in one executable.
That said, my B&V does support adding (sub)challenges by simply
adding a unit. A challenge is a class which extends a class defined
in the B&V. It can be registered with one line in the initialization
section.
Each function has a name, a specification of the CPU type, and its
author registered with it. Functions can also be added with a line in
the initialization section which registers them with the appropriate
(sub)challenge.
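In outline, that pattern looks something like this (the framework
unit, class names and registration procedures are illustrative, not
the actual B&V API):

unit PosExChallenge;

interface

uses
  BVFramework;  // hypothetical base unit of the B&V tool

type
  // A (sub)challenge extends a base class defined in the B&V.
  TPosExChallenge = class(TBVChallenge)
  public
    procedure Validate; override;   // correctness checks
    procedure Benchmark; override;  // timing runs
  end;

implementation

function PosExPascal(const SubStr, S: string;
  Offset: Integer): Integer;
begin
  Result := 0;  // stand-in for a real contestant function
end;

procedure TPosExChallenge.Validate;
begin
  // compare each registered function against the reference
end;

procedure TPosExChallenge.Benchmark;
begin
  // time each registered function on the standard data set
end;

initialization
  // One line registers the (sub)challenge...
  RegisterChallenge(TPosExChallenge);
  // ...and one line per function registers it with name,
  // CPU type and author.
  RegisterFunction(TPosExChallenge, @PosExPascal, 'PosExPascal',
    cpuGeneric, 'A. N. Author');
end.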
--
The Fastcode Project: www.fastcodeproject.org/