Chris Uzdavinis (TeamB) wrote:
Quote
On today's machines, yes. But progress marches on. Eventually those
values will be the natural and most efficient types to store and
address.
But if internal registers are made very large and the additional size doesn't buy any
additional throughput, then what is the point? If the additional size increases the
silicon acreage dedicated to more transistors, that increases distance, and increased
distance means increased latency.
Quote
Memory bandwidth requirements go up as the resolution and
quality of multimedia, video games, and other data-intensive
applications become more commonplace.
Latency is the problem, not bandwidth.
Quote
It wasn't long ago when computer memories were measured in kilobytes,
then megabytes. Now gigabytes of memory are common. It won't be that
long before we're talking terabytes, petabytes, and larger. And we'll
need larger pointers to access that memory, larger busses to transfer
that data, larger registers to work with those values, etc.
Wait states keep going up. Processor speed has been rising faster than memory access
speed for years. Apple II 6502 processors were so slow compared to memory that every
other main-memory cycle could be dedicated to video refresh. Now processor cycle
times are many times faster than main-memory access times.
Quote
Or do you have reason to believe that computers are going to stop
improving at the monstrous rates they have been? So far I don't.
Nanotechnology is opening up whole new possibilities; though years
away, it shouldn't be ignored.
Yes, in fact the rate of advance has slowed greatly in the last couple of years. Heat
is the biggest problem. Intel has been abandoning design efforts on new chips left
and right. They can't get past the heat problem.
Quote
For small numbers, I agree. But if the hardware natively supports
large(er) numbers, there is no penalty to use more bits than are
necessary to store small numbers. You only need 4 bits to store
numbers from 0 to 10... but do you complain about wasting all those
extra bits in a short or in an int? Maybe, but I would doubt it.
But making registers bigger itself exacts a penalty. Transistors used for larger
registers are transistors that could have been used for something else, or whose
removal would have let the rest be packed closer together.
Quote
>Increasing the number of bits used for addresses also is experiencing
>decreasing returns.
Perhaps, but only when the hardware doesn't natively support such big
numbers.
No, transistors spent on larger registers are a cost, period.
Quote
>You can only access one address at a time.
Today. If this is a bottleneck, then it'll be addressed. Multi-core
CPUs and SMP motherboards are evidence of the future of parallelization
in computers.
That is being addressed with multiple busses out to memory. But all the architectural
tricks can only do so much. SMP on a chip does not solve that problem. Rather, it
creates more CPUs competing to access the same main memory.
Quote
>Having more addresses makes sense only up to some point.
Given today's applications, yes. Tomorrow's applications are
inconceivable due to the limitations of the equipment available.
No, wider address busses have a declining marginal return.
Quote
>Then you need some other mechanism involving messages sent between
>processors to divvy up work.
Yep. Though this could be done at any of several layers... from user
application to the compiler to the CPU itself, depending on the situation.
But those other methods avoid the need for large address busses.
Quote
>Also, the larger the pointers the poorer the locality of access when
>looping thru and grabbing a succession of pointers.
I don't think this is a valid claim. Memory page sizes will grow as
necessary to accommodate the other growths in hardware sizes.
The more acreage the caches require for more transistors, the farther all the
transistors are from each other, and that increases latency.
Size can't be scaled up without scaling up latency.
Quote
Change
won't happen in a vacuum; it'll happen to all areas of the machine
pretty much simultaneously. Otherwise the feature will have to be
simulated, which is expensive (as you claim). It won't be long before
some participant in the market sees a chance to get one-up on the
competition, and solve that problem by natively implementing what is
simulated everywhere else. (Look at 64-bit numbers
today... frequently simulated on 32-bit platforms, and not
coincidentally, 64-bit machines are a growing market.)
Most applications do not need 64-bit address spaces. Microsoft delayed support for
64-bit for years because the demand is not that great.
Quote
I think you're viewing the future through the lens of today's
limitations and extrapolating.
I am viewing the future from the perspective of someone who has helped decide custom
CPU chip register sizes and instruction sets. I've had many discussions with CPU
designers about lots of trade-offs.
Quote
Rather, try looking beyond those
problems. In order for large-bit machines to reach critical mass, by
definition the problems holding them back must be solved. So by the
time they're actually here, the problems you describe will have
already been solved.
The trend toward making a single CPU faster with wider address and data busses is
hitting up against latency problems and heat problems. Fancier cache schemes are
running out of steam as ways to deal with the latency problem. Fancier cooling has
its limits and people do not want their computers using ever larger numbers of watts
for cost and portability reasons anyway.
I even see limits to the current trend toward SMP on a chip, because of the
competition it creates for the address and memory busses. Better to move each CPU
closer to its own dedicated memory and have more of an ASMP architecture (asymmetric
rather than symmetric memory access), though that makes coding harder.