"Jonathan Benedicto" <
XXXX@XXXXX.COM >writes:
> I seem to recall reading somewhere that using while instead of for,
> and ++i instead of i++ made a faster loop because it allowed the
> compiler to optimize better.
Most compilers parse the program into an internal representation with
some sort of tree-like structure. At that level, individual language
constructs like "while" and "for" generally get represented
identically, and when this tree is traversed during the code-generation
phase, the same assembly is generated for both.
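
As a minimal sketch (hypothetical functions, nothing Borland-specific),
these two loops express the same computation, and a typical optimizing
compiler reduces both to the same internal form and emits identical
assembly:

    int sum_for(const int *a, int n)
    {
        int total = 0;
        for (int i = 0; i < n; ++i)
            total += a[i];
        return total;
    }

    int sum_while(const int *a, int n)
    {
        int total = 0;
        int i = 0;
        while (i < n) {
            total += a[i];
            ++i;
        }
        return total;
    }
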
I'd be quite surprised to find any compiler that had measurable
differences between while and for loops. (Though I have not
benchmarked Borland specifically and don't know this for certain.)
There is some truth to preferring the prefix over the postfix
increment/decrement when you don't need the return value; however, on
a built-in numeric type, the difference is usually zero. The habit is
good to have for when you actually are dealing with user-defined
types, such as some iterators, because those must internally implement
the postfix operators less efficiently than the prefix versions (since
copies and temporaries are required). For an integer, it doesn't
matter.
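
To illustrate with a sketch (a hypothetical Iter class, not any
particular library's iterator), here is the canonical shape of the two
operators; note the temporary that the postfix form cannot avoid:

    class Iter {
        int *p;
    public:
        explicit Iter(int *q) : p(q) {}
        Iter &operator++()       // prefix: advance, return *this
        {
            ++p;
            return *this;
        }
        Iter operator++(int)     // postfix: copy, advance, return the copy
        {
            Iter old = *this;    // the extra copy the prefix form avoids
            ++p;
            return old;
        }
    };
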
That said, the speedup of ++i over i++ is only a constant factor,
while the overall complexity of the problem is O(n*m), which, since
Big-O notation deals with worst cases, works out to O(n^2).
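
To put rough, illustrative numbers on that: with n = m = 10,000 the
inner loop body runs 10^8 times, so shaving even a whole cycle off each
iteration saves only a fixed fraction of the work, while doubling n and
m quadruples it.
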
In an O(n^2) problem, a minuscule constant speedup tends to be
insignificant once n is large enough. Plotted as running time against
n, the graph of n^2 starts out nearly horizontal but eventually bends
until it is approaching vertical. A constant speedup merely shifts
this entire curve, but an algorithmic improvement can flatten it. For
large n, an unoptimized implementation of a fast algorithm will be
light-years faster than a highly-optimized implementation of the
inefficient algorithm.
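
As a sketch of what an algorithmic improvement looks like (assuming,
purely for illustration, that the nested O(n*m) loop is doing a
membership test; common_slow and common_fast are hypothetical names),
building a std::set once drops the total cost to O((n+m) log n):

    #include <set>
    #include <vector>

    // O(n*m): for each element of b, scan all of a for a match.
    int common_slow(const std::vector<int> &a, const std::vector<int> &b)
    {
        int count = 0;
        for (std::vector<int>::const_iterator bi = b.begin(); bi != b.end(); ++bi)
            for (std::vector<int>::const_iterator ai = a.begin(); ai != a.end(); ++ai)
                if (*bi == *ai) {
                    ++count;
                    break;
                }
        return count;
    }

    // O((n+m) log n): build a set from a once, then probe it for each
    // element of b.
    int common_fast(const std::vector<int> &a, const std::vector<int> &b)
    {
        std::set<int> seen(a.begin(), a.end());
        int count = 0;
        for (std::vector<int>::const_iterator bi = b.begin(); bi != b.end(); ++bi)
            if (seen.count(*bi))
                ++count;
        return count;
    }

Both versions use the prefix ++bi out of the habit discussed above, but
it is the change of data structure, not the increment style, that makes
the measurable difference.
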
Thus, focusing on the algorithm is where you tend to get the most
"bang for the buck" in terms of optimizations.
--
Chris (TeamB);