
Summary of (dis)agreements on latest FastCode developments


2005-06-15 01:50:26 PM
delphi183
A list of points to discuss for the next release. Some more may be added later; there are still at least 14 posts unread :(
Original points have been marked with '>' for reading ease, though they may not always be literal quotes.
I'm afraid that the stuff I mark as "to do" here cannot be done by me soon. I can do it later, or any
volunteer can change it before then; the B&V source code shouldn't be hard to modify.
Quote
Version numbering without letters
I can live with this
Quote
Caption is inconsistent with our naming conventions.
Please be more specific. Is it the caption of the form or the tab? What do you think it
should have been?
Quote
One B&V for one challenge is our norm.
Where is the "Benchmark Validate" functionality?
I thought (and still think) that IntToStr and IntToStr64 relate to one-another like
the single/double/extended overloads in some other benchmarks. Perhaps some global
"benchmark all", "validate all" and "report all" features would be nicer though.
Quote
How are benchmark rerun results shown? Older B&V's list all results and have
a button to clean up by deleting all but the fastest runs of each function.
Currently old results are replaced by new. I can change this. How do you think it
should be? Minimum in the list on the main form and all runs in the report?
Quote
I would like to see all subbench results in main report as normal.
They are there in the report. You can display them on the main form by right-clicking
a column header and selecting "subbenchmarks".
Quote
How do I save results to a .csv?
There is no such feature yet, because apparently i didn't notice it in your
original B&V. I can add it.
Quote
I miss a close button on main form.
I don't miss it; i always use Alt-F4 or the upper-right-corner cross. It is trivial to
add one though if you really like it.
Keep in mind that i originally released this new B&V tool more as a suggestion
than as a final solution; of course I am open for modification suggestions, like
i've also changed stuff others requested.
Quote
Include 4 versions of functions with different alignment
Like you said yourself somewhere: this should be done in a stable release, not
now.
Quote
NO Delphi 7 is baseline. We voted for this not long ago. I have repeated it in
here 27 times today. Please respect all the Fastcode rules and conventions.
Is currently being voted on; it wouldn't have much effect on the B&V tool anyway.
I'll try to maintain Delphi 5 compatibility wherever it isn't harmful to other
versions.
Quote
We normally sort the results so that the winner appears at the top.
On the main form results can be sorted on any column, in both directions. ISTM
this is nicer than just sorting on total benchmark score always.
I agree that the report should also be sorted, and consider it part of the
"to do" list now.
Quote
>(post about adding algorithm to description)

We have function naming conventions.
I will not help others to spy on my super function :-)
The naming conventions are still obeyed; the algorithm is simply appended to the
names displayed in the B&V. I don't think looking at the algorithms used is spying
at all; it is certainly different from copying. In fact i try explicitly to avoid
using algorithms used by others, but i think looking at them can teach you how to
fastcode better and to think of new, better algorithms.
Quote
All functions released by Fastcode must pass all validations. We do not want
bug reports on known issues.
Running the large validations is not feasible for now, while stuff still changes.
It simply takes too much time. IMHO it should be done when a stable B&V is released.
Quote
There are no problems in supporting D5 ! ;-)
Forget about D5. Go download a D2005 trial and use that for Fastcode work.
I still think we should allow people to use their own Delphi version whenever it's
not too much trouble. In this case it isn't and i think supporting Delphi 5 in
this case will make FastCode accessible for more people.
Quote
No problems with having D5 support!

Should we support D4 also? Then we only need to run validation on D4, D5, D6, D7 and
D2005; on 6 PC's this gives you 30 complete validation runs.
Making an effort to support Delphi 5 is not the same thing as officially supporting
it. Even without official support many D5 users will be happy if the FastCode stuff
compiles and runs.
Quote
>Validation is aborted after 256 failures, to prevent validation from
>filling up the log, taking forever

Exit after 1 error as always.
I think results are more useful if you see multiple log lines reporting failures.
This may allow the writer of the failing function to see what values are failing and
is therefore more useful than getting only one result.
An example of this: many IntToStr64 functions originally fail on $8000000000000000,
which is the first number tested. It is good to know that this is actually the only
failure, because otherwise something may be more fundamentally wrong. Thus displaying
multiple validation results may make debugging easier.
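For reference, $8000000000000000 is Low(Int64), the one Int64 whose negation cannot be represented, so any routine that starts with "negate if negative" breaks on exactly this input. A hypothetical sketch of the pitfall (not taken from any submitted function):

```pascal
function NaiveIntToStr64(Value: Int64): string;
var
  Neg: Boolean;
begin
  Neg := Value < 0;
  if Neg then
    Value := -Value; // overflows for Low(Int64): its absolute value is not
                     // representable, so Value silently stays negative
  Result := '';
  repeat
    Result := Chr(Ord('0') + Value mod 10) + Result; // garbage digits when
    Value := Value div 10;                           // Value is negative
  until Value = 0;
  if Neg then
    Result := '-' + Result;
end;
```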
Quote
>I'm considering perhaps copying the source code for the RTL IntToStr
>into the project. This would solve the problem of the overloading and
>it would also make sure that the RTL rating is the same for all
>compilers.

Yes. I sometimes do that also to get all 4 kinds of alignment for RTL code.
(But we are not allowed to due to license issues?)
I was hoping someone would tell me, because I am not sure. Replies on this point are
very welcome, especially if they would come from Borland.
Quote
Normally I use the fastest function to set the subbenchmark weights. I think
we should continue doing that. Also I normally do it on a P4 which is just
as incorrect as doing it on a Dothan. We should use blended results as is
done in Move.
I wasn't sure of the way you did it and didn't have a P4 to test them, so i just
did what seemed right. I guess this is another thing that should be done once the
stable release is released.
Quote
I have made an errortrap in some of my B&V's that can be turned on, but it
is a good idea to have it in all B&V's. Perhaps the new way is the best one.
I have not looked at it yet.
You should, it is really neat ;-)
I'm using a memo for logging all B&V actions, including validation failures with
information on what went wrong (result or exception) and when it went wrong (inputs).
Quote
>Even the solution used by Dennis, copy each function fourfold, is not
>completely satisfactory, because you can not be sure you're getting all
>alignments.

You can. Look at the benchmark results where I list alignment. And look at
the B&V's to learn the technique.
Ok, i hadn't looked into it closely, so i hadn't seen the NOPs. Another stable
release thing i guess.
Quote
>>(suggestion from JOH to include dummy function and
>
>That sounds like a very good plan. It will definitely result in fairer
>comparison.

No please do not.
I think you should mostly work this out with JOH, i can live with it either way.
Once you've reached agreement or (if you can't) a poll has been done it can be
changed back if needed.
Quote
Include newest modified JOH function
(just a reminder for myself)
 
 

Re:Summary of (dis)agreements on latest FastCode developments

It turned out the messages left unread were all about call overhead measurement.
I'd like to make a suggestion for a compromise here which i hope satisfies both
parties, unlike the result of a poll, which is likely to disappoint one of the
parties involved.
I suggest we include a conditional define in the application which can enable
dummy function checking; the released B&V tools would leave it undefined by
default.
This way the application will measure the numbers Dennis wants to show on
FastCode webpages and spreadsheets by default, but JOH and others don't have
to go through any extra trouble to measure results without call overhead. The
result report can display the fact that this define was enabled, so that we
won't confuse the different measurements.
Something like this:
{$UNDEF CompensateCallOverhead}
Default; makes B&V behave as others, just showing the direct measurements.
{$DEFINE CompensateCallOverhead}
Makes B&V subtract call overhead from results, allowing optimizers to see
speed differences more accurately. Adds warning line to result reports
generated.
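A sketch of how the measurement code could honour the define (identifier names here are hypothetical, not the actual B&V code):

```pascal
function MeasureCycles(Bench: TBenchProc): Int64;
begin
  // Raw cycle count for the benchmarked function, call overhead included.
  Result := RunTimedLoop(Bench);
{$IFDEF CompensateCallOverhead}
  // Subtract the cost of calling an empty dummy function with the same
  // signature, and flag the report so the two kinds of numbers are
  // never confused with each other.
  Dec(Result, RunTimedLoop(DummyBench));
  Report.AddWarning('Call overhead compensated');
{$ENDIF}
end;
```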
 

Re:Summary of (dis)agreements on latest FastCode developments

Hi Avatar
Quote
It turned out the messages left unread were all about call overhead
measurement.
I'd like to make a suggestion for a compromise here which i hope satisfies
both parties, unlike the result of a poll, which is likely to disappoint
one of the parties involved.
A poll can include the compromise too ;-)
Quote
I suggest we include a conditional define in the application which can
enable dummy function checking, while the default setting in the released
B&V tools has this undefined by default.
Not bad.
Quote
This way the application will measure the numbers Dennis wants to show on
FastCode webpages and spreadsheets by default, but JOH and others don't
have to go through any extra trouble to measure results without call
overhead. The result report can display the fact that this define was
enabled, so that we won't confuse the different measurements.
Reasonable
Quote
Something like this:

{$UNDEF CompensateCallOverhead}
Default; makes B&V behave as others, just showing the direct measurements.

{$DEFINE CompensateCallOverhead}
Makes B&V subtract call overhead from results, allowing optimizers to see
speed differences more accurately. Adds warning line to result reports
generated.
I think this is a reasonable solution.
The first time anyone claims that he improved the speed of a function with
x% based on a measurement without function calling overhead included, I will
restart the discussion and ask for removal of the feature ;-)
"See speed differences more accurately"
False claim! It will not do this! Remember my calculations. It will just
make the improvements look bigger.
This is important to John because he will abandon optimizations that are
"too" small eg 1% relative to speed of function body.
It is not important to me because I keep all optimizations that give a
measurable speed improvement. Observe that 8 out of 9 targets do not have a
penalty for size and the fastest function wins. Therefore it does not make
sense to rule out "small" optimizations. In the last target you compare the
speed increase to the code size increase. Does the feature help you do this?
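A quick numeric illustration of the effect Dennis describes, with hypothetical cycle counts (call overhead 10 cycles, function body 100 cycles, optimization saving 1 cycle):

```
raw measurement:      110 -> 109 cycles  =  0.9% improvement
overhead subtracted:  100 ->  99 cycles  =  1.0% improvement
```

The absolute saving is identical in both views; subtracting the overhead only makes the relative improvement look larger.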
Regards
Dennis
 

Re:Summary of (dis)agreements on latest FastCode developments

Hi Avatar
I only comment if I disagree and you will see that I mostly agree ;-)
Quote
I'm afraid that stuff which i say here is "to do" cannot be done by me
soon. I can do it later or any volunteer can change it before; the B&V
source code shouldn't be hard to modify.
It is ok that it takes some time.
Quote
>Caption is inconsistent with our naming conventions.

Please be more specific. Is it the caption of the form or the tab? What do
you think it should have been?
Mainform caption = IntToStr Benchmark & Validation Tool for Fastcode Version y.x
Quote
I thought (and still think) that IntToStr and IntToStr64 relate to
one-another like the single/double/extended overloads in some other
benchmarks.
OK.
Quote
Perhaps some global "benchmark all", "validate all" and "report all"
features would be nicer though.

>How are benchmark rerun results shown? Older B&V's list all results and
>have a button to clean up by deleting all but the fastest runs of each
>function.

Currently old results are replaced by new. I can change this. How do you
think it should be? Minimum in the list on the main form and all runs in
the report?
Or all runs everywhere?
Quote
>I would like to see all subbench results in main report as normal.

They are there in the report. You can display them on the main form by
right-clicking a column header and selecting "subbenchmarks".
I mean in the main report window on the mainform as in the old days.
Quote
>How do I save results to a .csv?

There is no such feature yet, because apparently i didn't notice it in
your original B&V. I can add it.
Very important feature.
Quote
>I miss a close button on main form.

I don't miss it; i always use Alt-F4 or the upper-right-corner cross. It's
trivial to add one though if you really like it.
Please do - it will help me and hurt nobody.
Quote
Keep in mind that i originally released this new B&V tool more as a
suggestion than as a final solution; of course I am open for modification
suggestions, like i've also changed stuff others requested.
Very good.
Quote
>Include 4 versions of functions with different alignment

Like you said yourself somewhere: this should be done in a stable release,
not now.
Do it in the last release before 1.0.
Quote
>NO Delphi 7 is baseline. We voted for this not long ago. I have repeated it in
>here 27 times today. Please respect all the Fastcode rules and conventions.
Written before we started voting and therefore invalid.
Quote
Is currently being voted on; it wouldn't have much effect on the B&V tool
anyway. I'll try to maintain Delphi 5 compatibility wherever it isn't
harmful to other versions.
Are all the changes completely invisible when compiling/running under
D6-D2005?
Quote
>We normally sort the results so that the winner appears at the top.

On the main form results can be sorted on any column, in both directions.
ISTM this is nicer than just sorting on total benchmark score always.
Yes, but sort after benchmark with the winner at the top as default.
Quote
I agree that the report should also be sorted, and consider it part of the
"to do" list now.
OK
Quote
>>(post about adding algorithm to description)
>
>We have function naming conventions.
>I will not help others to spy on my super function :-)

The naming conventions are still obeyed; the algorithm is simply appended
to the names displayed in the B&V. I don't think looking at the algorithms
used is spying at all; it is certainly different from copying. In fact i
try explicitly to avoid using algorithms used by others, but i think
looking at them can teach you how to fastcode better and to think of new,
better algorithms.

>All functions released by Fastcode must pass all validations. We do not
>want bug reports on known issues.

Running the large validations is not feasible for now, while stuff still
changes. It simply takes too much time. IMHO it should be done when a
stable B&V is released.
But remove functions that fail.
Quote
Making an effort to support Delphi 5 is not the same thing as officially
supporting it. Even without official support many D5 users will be happy
if the FastCode stuff compiles and runs.
OK
Quote
>>Validation is aborted after 256 failures, to prevent validation from
>>filling up the log, taking forever
>
>Exit after 1 error as always.

I think results are more useful if you see multiple log lines reporting
failures. This may allow the writer of the failing function to see what
values are failing and is therefore more useful than getting only one
result.

An example of this: many IntToStr64 functions originally fail on
$8000000000000000, which is the first number tested. It is good to know
that this is actually the only failure, because otherwise something may be
more fundamentally wrong. Thus displaying multiple validation results may
make debugging easier.
Ok, a matter of taste. I just fix bugs in the order they are caught by the
validation. They all have to be fixed anyway. My errortrap gives the
possibility of putting a breakpoint in it and stepping into the failing
function with the parameters that made it fail.
Quote
>>I'm considering perhaps copying the source code for the RTL IntToStr
>>into the project. This would solve the problem of the overloading and
>>it would also make sure that the RTL rating is the same for all
>>compilers.
>
>Yes. I sometimes do that also to get all 4 kinds of alignment for RTL
>code.
>(But we are not allowed to due to license issues?)

I was hoping someone would tell me, because I am not sure. Replies on this
point are very welcome, especially if they would come from Borland.
Just do it but tell nobody ;-)
Quote
>Normally I use the fastest function to set the subbenchmark weights. I
>think we should continue doing that. Also I normally do it on a P4 which
>is just as incorrect as doing it on a Dothan. We should use blended
>results as is done in Move.

I wasn't sure of the way you did it and didn't have a P4 to test them, so
i just did what seemed right. I guess this is another thing that should be
done once the stable release is released.
They must be correct in the 1.0 release. We must do a complete set of
benchmark runs on all targets to get it right.
Regards
Dennis
 

Re:Summary of (dis)agreements on latest FastCode developments

This seems to be an effective way of settling differences; the list of
disagreements has been greatly reduced.
Quote
>>Caption is inconsistent with our naming conventions.
>
>Please be more specific. Is it the caption of the form or the tab?
>What do you think it should have been?

Mainform caption = IntToStr Benchmark & Validation Tool for Fastcode Version y.x
Currently the form caption says "FastCode Benchmark and Validation
Utility" and the tabs say "<(sub)challenge name> (v<version>)". ISTM
all the information is in there.
Quote
>Currently old results are replaced by new. I can change this. How do
>you think it should be? Minimum in the list on the main form and all
>runs in the report?

Or all runs everywhere?
Ok, fine with me too.
Quote
>>I would like to see all subbench results in main report as normal.
>
>They are there in the report. You can display them on the main form
>by right-clicking a column header and selecting "subbenchmarks".

I mean in the main report window on the mainform as in the old days.
The listview on the main form is currently the main report window,
replacing the richedit that was used before. It can display the
subbench results by enabling their columns, though by default they are
hidden. IMHO this view provides the most clarity, while it is still
possible to get the complete view.
Do you really think the subbenchmark columns ought to be shown by default?
Quote
>>How do I save results to a .csv?
>
>There is no such feature yet, because apparently i didn't notice
>it in your original B&V. I can add it.

Very important feature.
Ok, i will look into it (when i have time) and try to make a CSV export
function compatible with the old one.
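A rough sketch of what such an export could look like (the record fields, column set, and type names here are hypothetical; the old B&V's exact CSV layout would have to be matched):

```pascal
procedure SaveResultsToCSV(const FileName: string; Results: TResultList);
var
  F: TextFile;
  I: Integer;
begin
  AssignFile(F, FileName);
  Rewrite(F);
  try
    // Header row; the actual columns should mirror the old B&V's export.
    WriteLn(F, 'Function,Author,Total');
    for I := 0 to Results.Count - 1 do
      with Results[I] do
        WriteLn(F, Format('%s,%s,%d', [Name, Author, TotalScore]));
  finally
    CloseFile(F);
  end;
end;
```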
Quote
>>I miss a close button on main form.
>
>I don't miss it; i always use Alt-F4 or the upper-right-corner
>cross. it is trivial to add one though if you really like it.

Please do - it will help me and hurt nobody.
Ok.
Quote
>>Include 4 versions of functions with different alignment
>
>Like you said yourself somewhere: this should be done in a stable
>release, not now yet.

Do it in the last release before 1.0.
Ok
Quote
>I'll try to maintain Delphi 5 compatibility wherever it isn't
>harmful to other versions.

Are all the changes completely invisible when compiling/running under
D6-D2005?
Depends on how invisible you want them to be. From the compiler POV the
only difference is that CPUID and RDTSC have now been coded using DB
statements (but with comments indicating what the opcode is).
From the programmer POV there are also some conditional compilation
directives, but they are few and IMHO do not harm readability.
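For anyone curious what the DB encoding looks like: RDTSC is opcode 0F 31 and CPUID is 0F A2, neither of which Delphi 5's built-in assembler knows by name. The actual B&V unit may differ in detail, but the idea is roughly:

```pascal
function ReadTimeStampCounter: Int64;
asm
  db $0F, $31   // RDTSC: cycle count returned in EDX:EAX, which is
                // exactly how Delphi returns an Int64 function result
end;

procedure SerializeCPU;
asm
  push ebx      // CPUID clobbers EBX, which Delphi requires preserved
  db $0F, $A2   // CPUID: serializing instruction, used as a barrier
  pop ebx       // around RDTSC so timings aren't reordered
end;
```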
Quote
>>We normally sort the results so that the winner appears at the
>>top.
>
>On the main form results can be sorted on any column, in both
>directions. ISTM this is nicer than just sorting on total benchmark
>score always.

Yes, but sort after benchmark with the winner at the top as default.
It can be made default.
Quote
>>All functions released by Fastcode must pass all validations. We
>>do not want bug reports on known issues.
>
>Running the large validations is not feasible for now, while stuff
>still changes. It simply takes too much time. IMHO it should be done
>when a stable B&V is released.

But remove functions that fail.
AFAIK the B&V currently has no functions that fail simple validation.
Functions that fail the thorough validation will be removed from the list
(and their authors notified) once that test is done.
Quote
>I think results are more useful if you see multiple log lines
>reporting failures. This may allow the writer of the failing
>function to see what values are failing and is therefore more
>useful than getting only one result.

Ok, a matter of taste. I just fix bugs in the order they are caught by
the validation. They all have to be fixed anyway. My errortrap gives
the possibility of putting a breakpoint in it and stepping into the
failing function with the parameters that made it fail.
Ok, i haven't seen it yet. It might be a nice addition.
Quote
>>Yes. I sometimes do that also to get all 4 kinds of alignment for
>>RTL code.
>>(But we are not allowed to due to license issues?)
>
>I was hoping someone would tell me, because I am not sure. Replies
>on this point are very welcome, especially if they would come from
>Borland.

Just do it but tell nobody ;-)
LOL.
That's no solution however. It seems reasonable that Borland would allow
us to do that, especially because they benefit from the FastCode project
themselves too. Perhaps you could send an e-mail to the guys at Borland
you know? They may be cooperative, and then at least we'll know for sure
what's permitted and what's not.
Quote
>>Normally I use the fastest function to set the subbenchmark
>>weights. I think we should continue doing that. Also I normally do
>>it on a P4 which is just as incorrect as doing it on a Dothan. We
>>should use blended results as is done in Move.
>
>I wasn't sure of the way you did it and didn't have a P4 to test
>them, so i just did what seemed right. I guess this is another thing
>that should be done once the stable release is released.

They must be correct in the 1.0 release. We must do a complete set of
benchmark runs on all targets to get it right.
Yes.
Do you think we should keep the current scales until then, or is it
necessary to already calibrate the scales so that they are based on
the fastest function (be it on only one CPU) instead of the RTL
function?
 

Re:Summary of (dis)agreements on latest FastCode developments

Hi Avatar
Quote
This seems to be an effective way of settling differences; the list of
disagreements has been greatly reduced.
Yes ;-)
Quote
>>>Caption is inconsistent with our naming conventions.
>>
>>Please be more specific. Is it the caption of the form or the tab?
>>What do
>you think it
>>should have been?
>
>Mainform caption = IntToStr Benchmark & Validation Tool for Fastcode
>Version y.x

Currently the form caption says "FastCode Benchmark and Validation
Utility" and the tabs say "<(sub)challenge name> (v<version>)". ISTM
all the information is in there.
Yes but the mainform caption is wrong anyway.
Regards
Dennis
 

Re:Summary of (dis)agreements on latest FastCode developments

Hi Again
Quote
You really think subbenchmark column ought to be shown by default?
Perhaps not - is it easy to turn on/off? I should play a little with it.
What do others think?
Quote
Ok, i will look into it (when i have time) and try to make a CSV export
function compatible with the old one.
Fine - it is really needed
Quote
>>I'll try to maintain Delphi 5 compatibility wherever it isn't
>>harmful to other versions.
>
>Are all the changes completely invisible when compiling/running under
>D6-D2005?

Depends on how invisible you want them to be. From the compiler POV the
only difference is that CPUID and RDTSC have now been coded using DB
statements (but with comments indicating what the opcode is).

From the programmer POV there are also some conditional compilation
directives, but they are few and IMHO do not harm readability.
OK
Quote
AFAIK the B&V currently has no functions that fail simple validation.
Failers on the thorough validation will be removed from the list (and
their authors notified) once that test is done.
Fine.
Quote
Do you think we should keep the current scales until then, or is it
necessary to already calibrate the scales so that they are based on
the fastest function (be it on only one CPU) instead of the RTL
function?
Make a few adjustments from time to time such that they converge to the
real solution, but do not spend too much time on it. Just use your Dothan
for a start.
Regards
Dennis