
Re: Standard C++ programming will be difficult under Longhorn


2004-12-09 10:00:31 AM
cppbuilder50
Randall Parker < XXXX@XXXXX.COM >wrote:
Quote
Nicola Musatti wrote:
>Randall Parker wrote:
>>Why is future growth of variable sizes something we even want our VMs
>>and compilers to aggressively embrace? I see the larger sizes
>>increasingly as things to avoid.
>
>I bet people said the same thing when they started to move from 8 to 16
>bits ;-)

But there are many things that do not have 2^128 possible combinations.

Look, sometimes I need to count the items in some list and I know the list isn't
going to be real big because it is a list that exists to be viewed by humans. Getting
the count in a 128 bit variable adds no value to my program, none at all.
Well, for most lists meant to be read by humans,
even 8 bits is far too many. So what?
Quote
[...]
Schobi
--
XXXX@XXXXX.COM is never read
I'm Schobi at suespammers dot org
"The presence of those seeking the truth is infinitely
to be prefered to those thinking they've found it."
Terry Pratchett
 
 

Re:Re: Standard C++ programming will be difficult under Longhorn

Tamas Demjen < XXXX@XXXXX.COM >writes:
Quote
Oscar Fuentes wrote:
>I'm not so sure we can apply the rule of the ever-lasting,
>continuous-rate progress. Have you read that joke about "if the car
>industry progressed at the same rate the computer industry does..."

LOL, yeah. But seriously, transportation actually does progress at an
exponential rate, at least seemingly:
end of the 18th century: 6 mph (10 km/h)
end of the 19th century: 60 mph (100 km/h)
end of the 20th century: 600 mph (1000 km/h)
end of the 21st century: 6000 mph ???
(SpaceShipOne has already reached 2500 mph, just to show the progress)
That's a factor of 10x every 100 years. To match the computer industry,
transport speed would have to grow by a factor of 2^66 ~= 73*10^18 every
100 years.
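(A minimal back-of-the-envelope check of that figure, assuming the usual
"doubling every 18 months" rule of thumb:)

#include <cmath>
#include <cstdio>

int main()
{
    // 100 years at one doubling every 18 months (1.5 years).
    const double doublings = 100.0 / 1.5;                  // ~66.7
    std::printf("doublings: %.1f\n", doublings);
    std::printf("growth factor: %.2e\n", std::pow(2.0, doublings));
    // Rounding down to 2^66 gives ~7.4e19, i.e. the ~73*10^18 above.
}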
But even accepting your numbers, the vehicles involved would have to be
the space shuttle doing 6 mph in the 18th century and a donkey doing
6000 mph in the 20th century.
--
Oscar
 

Re:Re: Standard C++ programming will be difficult under Longhorn

"Hendrik Schober" < XXXX@XXXXX.COM >writes:
Quote
>>In my experience portability was very much determined by the
>>standard conformance of the c++ compiler and its optimization
>>capabilities. With this in mind, Windows with VC7.1 has very high
>>portability rank - good conformance, excellent optimization;
>>Intel compilers are also good in both respects.
>
>Now, this is strange. So if you are a MS shop using VC++ 6.0 and
>then switch to VC++ .NET 2003, your software, suddenly, becomes more
>portable?

It was that way for us. Suddenly we could throw out all the
work-arounds that existed only for VC6 (and broke the builds on
other platforms), we could extend our C++ vocabulary immensely (and
use some std C++ libs across all platforms instead of different
proprietary ones), we were able to discover bugs in other compilers
(by having VC find erroneous code that used to be accepted by
compilers on other platforms), and we could change the code so that it is
more std conforming (and thus more future-compatible). BTW, GCC
3.X was another such breakthrough in std conformance.
Schobi,
I think you don't qualify as a MS-only shop, but I see what you mean
and I agree for the most part. However, the OP says "Windows with
VC7.1 has very high portability rank - good conformance, excellent
optimization" which smells very different of what you say. The mention
of optimization is very odd, too.
--
Oscar
 


Re:Re: Standard C++ programming will be difficult under Longhorn

"Ed Mulroy [TeamB]" < XXXX@XXXXX.COM >writes:
Quote
>I know that. That's why we need the filters on the
>other phone jacks, since the data is now outside
>the voice spectrum.

The provincialism of the phone companies is what limited
modem speed. Even 14.4 exceeds what they demanded
the FCC limit modem communications to. 56K uses what
the phone line is, not what they were willing to admit
it was.
Here in Spain, the telecoms are required to support a minimum of 9200
bps, which speaks volumes about how they can influence the
legislative power to their benefit.
[snip]
Quote
Times have changed. Back when AT&T detected that we
were sending 1200 baud modem signals through the phone
lines without using an AT&T modem (then called a Western
Union "Data Set") they called the FBI who came calling on
us. This was in spite of the fact that they were well aware of
the Carterphone decision (a court case they lost) which
required them to allow such use. Things are better these
days.
One thing the telecoms cannot easily do here in Spain is send the
police to knock on your door. In the USA it seems different. It seems
that the FBI is some kind of private police force at the service of
large corporations, no questions asked. I know first hand a case where
the poor victim was put through a truly Kafkaesque experience. At least
the case was simple enough that he was able to demonstrate his total
innocence, and the consequences were "only" the loss of a bunch of
personal stuff, serious damage to the company where he worked, and
several months living in a state of extreme anxiety. Some hackers in
the early nineties were not so fortunate.
--
Oscar
 

Re:Re: Standard C++ programming will be difficult under Longhorn

Chris Uzdavinis (TeamB) wrote:
Quote
On today's machines, yes. But progress marches on. Eventually those
values will be the natural and most efficient types to store and
address.
But if internal registers are made very large and the additional size doesn't buy any
additional throughput, then what is the point? If the additional size increases the silicon
acreage dedicated to more transistors, then that increases distance, and increased
distance means increased latency.
Quote
Memory bandwith requirements go up as the resolution and
quality of multimedia, video games, and other data-intensive
applications become more commonplace.
Latency is the problem, not bandwidth.
Quote
It wasn't long ago when computer memories were measured in kilobytes,
then megabytes. Now gigabytes of memory are common. It won't be that
long before we're talking terabytes, petabytes, and larger. And we'll
need larger pointers to access that memory, larger buses to transfer
that data, larger registers to work with those values, etc.
Wait states keep going up. Processor speed has been going up faster than memory
access speed for years. The Apple II's 6502 processor was so slow compared to memory
that every other main memory cycle could be dedicated to video refresh. Now we have
processor clocks that run many times faster than main memory access speeds.
Quote
Or do you have reason to believe that computers are going to stop
improving at the monstrous rates they have been? So far I don't.
Nanotechnology is opening up whole new possibilities; though years
away, it shouldn't be ignored.
Yes, in fact the rate of advance has slowed greatly in the last couple of years. Heat
is the biggest problem. Intel has been abandoning design efforts on new chips left
and right. They can't get past the heat problem.
Quote
For small numbers, I agree. But if the hardware natively supports
large(er) numbers, there is no penalty to use more bits than are
necessary to store small numbers. You only need 4 bits to store
numbers from 0 to 10... but do you complain about wasting all those
extra bits in a short or in an int? Maybe, but I would doubt it.
But making registers bigger itself exacts a penalty. Transistors used for larger
registers are transistors that could have been used for something else, or, had they
been removed, everything else could have been brought closer together.
Quote
>Increasing the number of bits uses for addresses also is experiencing
>decreasing returns.

Perhaps, but only when the hardware doesn't natively support such big
numbers.
No, transistors spent on larger registers are a cost, period.
Quote
>You can only access one address at a time.
Today. If this is a bottleneck, then it'll be addressed. Multi-core
CPUs and SMP motherboards are evidence of the future of parallelization
in computers.
That is being addressed with multiple busses out to memory. But all the architectural
tricks can only do so much. SMP on a chip does not solve that problem. Rather, it
creates more CPUs competing to access the same main memory.
Quote
>Having more addresses makes sense only up to some point.

Given today's applications, yes. Tomorrow's applications are
inconceivable due to the limitations of the equipment available.
No, wider address busses have a declining marginal return.
Quote
>Then you need some other mechanism involving messages sent between
>processors to divvy up work.
Yep. Though this could be done at any of several layers... from user
application to the compiler to the CPU itself, depending on the situation.
But those other methods avoid the need for large address busses.
Quote
>Also, the larger the pointers the poorer the locality of access when
>looping thru and grabbing a succession of pointers.
I don't think this is a valid claim. Memory page sizes will grow as
necessary to accommodate the other growths in hardware sizes.
The more acreage the caches require for additional transistors, the farther apart the
transistors are from each other. That increases latency.
Size can't be scaled up without scaling up latency.
Quote
Change
won't happen in a vacuum, it'll happen to all areas of the machine
pretty much simultaneously. Otherwise the feature will have to be
simulated, which is expensive (as you claim). It won't be long before
some participant in the market sees a chance to get one up on the
competition and solves that problem by natively implementing what is
simulated everywhere else. (Look at the 64-bit numbers
today... simulated on 32-bit platforms frequently, and not
coincidentally, 64-bit machines are a growing market.)
Most applications do not need 64 bit address spaces. Microsoft delayed support for 64
bit for years because the demand is not that great.
Quote
I think you're viewing the future through the lens of today's
limitations and extrapolating.
I am viewing the future from the perspective of someone who has helped decide custom
CPU chip register sizes and instruction sets. I've had many discussions with CPU
designers about lots of trade-offs.
Quote
Rather, try looking beyond those
problems. In order for large-bit machines to reach critical mass, by
definition the problems holding them back must be solved. So by the
time they're actually here, the problems you describe will have
already been solved.
The trend toward making a single CPU faster with wider address and data busses is
hitting up against latency problems and heat problems. Fancier cache schemes are
running out of steam as ways to deal with the latency problem. Fancier cooling has
its limits and people do not want their computers using ever larger numbers of watts
for cost and portability reasons anyway.
I even see limits to the current trend toward SMP on a chip because of the
competition this creates for the address and memory busses. Better to move each CPU
closer to its own dedicated memory and have more of an ASMP architecture (asymmetric
rather than symmetric memory access). Though that makes coding harder.
 

Re:Re: Standard C++ programming will be difficult under Longhorn

Hendrik Schober wrote:
Quote
>Look, sometimes I need to count the items in some list and I know the list isn't
>going to be real big because it is a list that exists to be viewed by humans. Getting
>the count in a 128 bit variable adds no value to my program, none at all.

Well, for most lists meant to be read by humans,
even 8 bits is far too many. So what?
Schobi,
Imagine you have ten million records that have, say, 20 fields in them. Suppose each
of the fields is an integer and that you know each value doesn't even need 8 bits.
Well, you can declare them all as byte fields or, say, as 128 bit native integer
fields on some future 128 bit processor.
The difference between the choices is 200 megs versus 16 * 200 megs (a 128 bit field
is 16 bytes instead of one). Well, why make your data 16 times larger? That slows down
disk reads and CPU processing. The fact
that the variables load into native 128 bit registers doesn't help performance any.
It hurts performance.
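(To put rough numbers on that, a minimal sketch; std::uint64_t stands in for the
hypothetical native field, since standard C++ has no portable 128 bit integer type,
so true 128 bit fields would double the "wide" figure again:)

#include <cstddef>
#include <cstdint>
#include <cstdio>

// A record with 20 small fields, each known to fit in 8 bits.
struct PackedRecord {
    std::uint8_t fields[20];    // 20 bytes per record
};

// The same record with every field widened to 64 bits.
struct WideRecord {
    std::uint64_t fields[20];   // 160 bytes per record
};

int main()
{
    const std::size_t records = 10000000;   // ten million
    std::printf("packed: %zu MB\n", records * sizeof(PackedRecord) / 1000000);
    std::printf("wide:   %zu MB\n", records * sizeof(WideRecord) / 1000000);
    // Prints 200 MB versus 1600 MB; 128 bit fields would make it 3200 MB.
}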
 

Re:Re: Standard C++ programming will be difficult under Longhorn

Oscar Fuentes wrote:
Quote

Please note that I was not arguing against 64 bit address spaces,
quite the contrary, I think 32 bit is too small. What I say is that 64
bit will be enough for a *long* time.
For applications that do not need to address gigabytes of memory (and there are huge
numbers of applications for which this is the case) 32 bits is not too small.
If you have a 64 bit processor and compile a 32 bit app on it then for many classes
of applications it may well be faster than if you compile a 64 bit app (compiling the
same source code). The 32 bit app is going to have smaller variables. So more stuff
will fit in L1, L2, L3 cache. Each memory cycle out to main memory will bring in more
variables. More of each data structure will fit into a cache page.
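(A minimal illustration of the cache-footprint point; the exact sizes depend on
the data model and ABI, but the 32/64 bit contrast below is typical:)

#include <cstdio>

// A pointer-heavy node: on a typical 32 bit build the pointers are 4 bytes
// each, on a 64 bit build 8 bytes each, so the same structure roughly
// doubles in size and half as many nodes fit per cache line.
struct Node {
    Node* next;
    Node* prev;
    int   value;
};

int main()
{
    // Typically prints 12 on a 32 bit target and 24 on a 64 bit target.
    std::printf("sizeof(Node) = %zu\n", sizeof(Node));
}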
 

Re:Re: Standard C++ programming will be difficult under Longhorn

Randall Parker < XXXX@XXXXX.COM >wrote:
Quote
[...]

Imagine you have ten million records that have, say, 20 fields in them. Suppose each
of the fields is an integer and that you know each value doesn't even need 8 bits.
Well, you can declare them all as byte fields or, say, as 128 bit native integer
fields on some future 128 bit processor.

The difference between the choices is 200 megs versus 16 * 200 megs (a 128 bit field
is 16 bytes instead of one). Well, why make your data 16 times larger? [...]
Why even use 32bit nowadays (or 64bit next
year, FTM) for this task? If you're not
arguing against 32bit if the data doesn't
even take 8bit, what's the point in arguing
against 128bit? And if you pack this to
8bit today, why not do the same on a 128bit
platform?
Where you use 8bit for a task on a 32bit
platform, you might just as well use 8bit
for the same task on a 128bit platform.
Where you use 32bit today because that's
exactly what you need, you might just as
well use 32bit on a 128bit platform tomorrow.
And where you use 32bit today just because
'int' just seemed easier or because it is
good enough or because it isn't slower
/despite/ the fact that you actually would
only need 23bit for the task at hand, you
might just as well use 128bit on a 128bit
platform.
Again, what is your point?
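(That's essentially what the fixed-width typedefs buy you -- <cstdint>, or
boost/cstdint.hpp on older compilers. A minimal sketch, with field widths
chosen purely for illustration:)

#include <cstdint>

// The declaration describes the data, not the machine word, so it stays
// the same whether the native register is 32, 64 or 128 bits wide.
struct ListCount {
    std::uint8_t  shown;    // a human-readable list: 8 bits is plenty
    std::uint32_t total;    // exactly 32 bits on any platform
};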
Schobi
--
XXXX@XXXXX.COM is never read
I'm Schobi at suespammers dot org
"The presence of those seeking the truth is infinitely
to be prefered to those thinking they've found it."
Terry Pratchett
 

Re:Re: Standard C++ programming will be difficult under Longhorn

Randall Parker < XXXX@XXXXX.COM >wrote:
Quote
Oscar Fuentes wrote:
>
>Please note that I was not arguing against 64 bit address spaces,
>quite the contrary, I think 32 bit is too small. What I say is that 64
>bit will be enough for a *long* time.

For applications that do not need to address gigabytes of memory (and there are huge
numbers of applications for which this is the case) 32 bits is not too small.
There are huge numbers of applications that
would need a lot less than 2GB of space --
still they are done in 32bit. So what?
Quote
If you have a 64 bit processor and compile a 32 bit app on it then for many classes
of applications it may well be faster than if you compile a 64 bit app (compiling the
same source code). The 32 bit app is going to have smaller variables. So more stuff
will fit in L1, L2, L3 cache. Each memory cycle out to main memory will bring in more
variables. More of each data structure will fit into a cache page.
I'd rather defer optimization issues until
exact measurements have been done. Then if
some app that only needs 32bit indeed turns
out to run faster as a 32bit app even on a
64bit platform, then it should of course be
compiled as a 32bit app.
I don't think anyone here would dispute that.
Schobi
--
XXXX@XXXXX.COM is never read
I'm Schobi at suespammers dot org
"The presence of those seeking the truth is infinitely
to be prefered to those thinking they've found it."
Terry Pratchett
 

Re:Re: Standard C++ programming will be difficult under Longhorn

Oscar Fuentes < XXXX@XXXXX.COM >wrote:
Quote
[...]

But even accepting your numbers, the vehicles involved should be the
space shuttle for 6 mph on the 18th century and a donkey for 6000 mph
on the 20th century.
OTOH, a computer from 50 years ago
compares to what I have on my desk
as a donkey to a space shuttle. :o>
Schobi
--
XXXX@XXXXX.COM is never read
I'm Schobi at suespammers dot org
"The presence of those seeking the truth is infinitely
to be prefered to those thinking they've found it."
Terry Pratchett
 

Re:Re: Standard C++ programming will be difficult under Longhorn

Oscar Fuentes < XXXX@XXXXX.COM >wrote:
Quote
[...]

I think you don't qualify as a MS-only shop [...]
Not really. In fact, not even the shop
I work for does. :o>
(You weren't nitpicking in any way, were
you? <g>)
Schobi
--
XXXX@XXXXX.COM is never read
I'm Schobi at suespammers dot org
"The presence of those seeking the truth is infinitely
to be prefered to those thinking they've found it."
Terry Pratchett
 

Re:Re: Standard C++ programming will be difficult under Longhorn

Randall Parker < XXXX@XXXXX.COM >writes:
Quote
Chris Uzdavinis (TeamB) wrote:
>On today's machines, yes. But progress marches on. Eventually those
>values will be the natural and most efficient types to store and
>address.

But if internal registers are made very large and the additional
size doesn't buy any additional thru-put then what is the point? If
the additional size increases silicon acreage dedicated to more
transistors then that increases distance and increased distance
means increased latency.
While I like a life of simplicity, and find beauty in small, efficient
programs, it's still undeniable that the trend of software is getting
bigger. If not the applications themselves, then the data sets that
they work on.
I wouldn't be surprised in the least if in 20 years a single movie
takes terabytes of data. The sound encoding gets more complicated for
the different channels, and possibly for multiple languages, alternate
storylines, viewer-selectable customizations for movie elements, etc.
That's just the trend, and it probably won't slow down.
Maybe we are reaching the upper bounds of what silicon can handle.
I've read some reports about home-grown diamonds which are basically
impervious to heat, and once they become economical to produce they would
be a vast improvement over silicon. Other technologies may also work
around the limitations of what we have today.
Be careful about being too skeptical... think of all the (now)
humorous quotes by "visionaries" from the past who estimated what our
needs were based on what was currently available.
But ultimately, the push to progress to the machines I am envisioning
won't happen until there is a perceived need.
--
Chris (TeamB);
 

Re:Re: Standard C++ programming will be difficult under Longhorn

"Hendrik Schober" < XXXX@XXXXX.COM >wrote in message news: XXXX@XXXXX.COM ...
Quote
Randall Parker < XXXX@XXXXX.COM >wrote:

[....]

>If you have a 64 bit processor and compile a 32 bit app on it then for many classes
>of applications it may well be faster than if you compile a 64 bit app (compiling the
>same source code). The 32 bit app is going to have smaller variables. So more stuff
>will fit in L1, L2, L3 cache. Each memory cycle out to main memory will bring in more
>variables. More of each data structure will fit into a cache page.


I'd rather defer optimization issues until
exact measurements have been done. Then if
some app that only needs 32bit indeed turns
out to run faster as a 32bit app even on a
64bit platform, then it should of course be
compiled as a 32bit app.
I don't think anyone here would dispute that.

Schobi
I agree with you, Schobi.
Regarding performance, I doubt that 64 bit applications will always be slower.
E.g. a 64 bit processor has more registers, so it can pass more function parameters in
registers rather than pushing most of them on the stack.
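(Purely illustrative, and the details depend on the calling convention: on
x86-64, for example, the first few integer arguments typically travel in
registers, so a call to a function like this needs no parameter pushes,
whereas a classic 32 bit cdecl build would push all six:)

// Six integer parameters - register candidates on a 64 bit ABI.
int sum6(int a, int b, int c, int d, int e, int f)
{
    return a + b + c + d + e + f;
}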
Andre
 

Re:Re: Standard C++ programming will be difficult under Longhorn

Hendrik Schober wrote:
Quote
Why even use 32bit nowadays (or 64bit next
year, FTM) for this task? If you're not
arguing against 32bit if the data doesn't
even take 8bit, what's the point in arguing
against 128bit?
Well, first of all, 128 bit makes the problem 4 times worse than 32 bit, and the latency
problem will grow in the meantime.
However, I already do declare smaller fields in structs or classes when I know I'm
going to have a lot of something.
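(For instance, something like this when the struct will exist in the millions;
the field names are made up:)

// Millions of these are kept in memory, so the fields are packed with
// bit-fields rather than plain ints.
struct Sample {
    unsigned done     : 1;
    unsigned priority : 3;     // 0..7
    unsigned channel  : 4;     // 0..15
    unsigned reading  : 16;    // 0..65535
};  // typically 4 bytes, versus 16 for four plain ints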
 

Re:Re: Standard C++ programming will be difficult under Longhorn

Chris Uzdavinis (TeamB) wrote:
Quote
While I like a life of simplicity, and find beauty in small, efficient
programs, it's still undeniable that the trend of software is getting
bigger. If not the applications themselves, then the data sets that
they work on.
Those big datasets especially need data types no larger than they need to be.
Quote
Maybe we are reaching the upper bounds of what silicon can handle.
I've read some reports about home-grown diamonds which are basically
impervious to heat, and once they become economical to produce they would
be a vast improvement over silicon. Other technologies may also work
around the limitations of what we have today.
I read a quote from a guy at Intel who said, a year or so ago, that if processor
power needs and heat levels kept on rising at the historical rate, then in 10 or 20
years (I forget which) the processors would become as hot as the surface of the sun.
Obviously, there is a practical limit to how hot things can get.