
Re: Unwelcome advice


2008-07-08 11:06:56 PM
delphi230
On 2008-07-08, Eric Grange <XXXX@XXXXX.COM>writes:
Quote
Of course CodeGear & Delphi will be busy enough catching up with Unicode
or 64bit support for the next couple of releases... but any comments on
what could be in store for Delphi to go beyond multi-threading?
Let's start with the basics. How do you imagine such support, beyond a bare "multi-core support" one-liner?
What granularity would it use? Is it meant to multithread a GUI/database system, do a bit of SETI-style number crunching, or handle complex intertwined consumer-producer relations?
What do you currently do that one core cannot handle, that could be divided onto multiple cores? And can you see a generic system doing that division?
 
 

Re: Unwelcome advice

It could start with providing a multi-core approach that is at least as easy to use as pointers!
I have been developing for 10+ years now and can handle pointers easily... but never thread development.
If you want an example of this, look at Erlang (maybe not the ultimate solution, but it is incredible that in less than 10 lines of code you can have a multi-core solution distributed across the globe).
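For comparison, here is roughly what that brevity looks like elsewhere; this is a hypothetical sketch in Python (not Erlang, and without the across-the-globe distribution), using only the standard library to spread work across local cores:

```python
# Hypothetical sketch: spread a CPU-bound function across all local cores
# in a handful of lines. multiprocessing.Pool hides the thread/process
# plumbing; Erlang-style distribution across machines is not shown.
from multiprocessing import Pool

def work(n):
    # Stand-in for a CPU-bound task.
    return n * n

if __name__ == "__main__":
    with Pool() as pool:          # one worker process per core by default
        results = pool.map(work, range(10))
    print(results)
```

The point stands either way: the hard part is not spawning the workers, it is structuring the program so the work divides cleanly.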
 

Re: Unwelcome advice

Quote
Let's start with the basics. How do you imagine such support, beyond a bare "multi-core support" one-liner?
There are many possible directions: integrating CUDA-like ideas, providing granularity specifiers in the language, going for Erlang-like constructs, introducing and extending parallelization constructs, libraries and concepts from the HPC world, etc... or something entirely new/different.
Quote
What granularity would it use? Is it meant to multithread a GUI/database system, do a bit of SETI-style number crunching, or handle complex intertwined consumer-producer relations?
If you aim for hundreds of cores, current "multithreaded" strategies become somewhat irrelevant, as they will only take you up to a few cores... with a lot of work.
Ideally, granularity should be compiler-decided, with the help of language constructs that would make coarse granularity harder to express than fine granularity (currently, it is the opposite).
Typically this would mean introducing non-predictability: the current "for each" construct, for instance, is linear (there is a predictable iterator order); a parallelizable "for each" would have to relax that constraint (and move away from the underlying iterator pattern).
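As an illustration of what such a relaxed construct might look like (my assumption, not anything CodeGear has proposed), here is a sketch in Python where items are observed in completion order rather than iterator order:

```python
# Hypothetical sketch of an order-relaxed "for each": the body runs
# concurrently and results arrive in whatever order tasks finish,
# so correct code must not depend on a predictable iteration order.
from concurrent.futures import ThreadPoolExecutor, as_completed

def process(item):
    return item * 2

items = [1, 2, 3, 4, 5]
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(process, i) for i in items]
    # as_completed yields futures in completion order, not submission order.
    results = {f.result() for f in as_completed(futures)}

# Only order-independent aggregations (sets, sums, reductions) are safe here.
print(sorted(results))
```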
Quote
What do you currently do that one core cannot handle, that could be divided onto multiple cores? And can you see a generic system doing that division?
If I knew how to build such a generic system, I wouldn't tell you; I would sell it to the highest bidder and become filthy rich ;)
The whole point is that we'll be increasingly faced with CPUs comprised of zillions of cores, with a high probability that each individual core will offer lower performance than today's cores (as Intel's latest Atom processor does).
Eric
 

Re: Unwelcome advice

Eric Grange writes:
Quote
"developers should start thinking about tens, hundreds, and thousands of
cores now in their algorithmic development and deployment pipeline"

blogs.intel.com/research/2008/06/unwelcome_advice.php

Massive multicore already exists today in the form of stream processors
(GPUs), with budding languages to exploit them like CUDA

www.nvidia.com/object/cuda_home.html#

Of course CodeGear & Delphi will be busy enough catching up with Unicode
or 64bit support for the next couple of releases... but any comments on
what could be in store for Delphi to go beyond multi-threading?

Eric
I guess that some companies are working on multi-core support -- with good reason.
synopsys.mediaroom.com/index.php
FWIW to you...
Hopefully we will see 64-bit and multi-core capability before too long.
--
Will R
PMC Consulting
 

Re: Unwelcome advice

Quote
What do you currently do that one core cannot handle, that could be divided onto multiple cores?
Any programming other than that for the desktop. Which is why if you're
doing this type of programming currently, you're not using Delphi or FPC.
Quote
And can you see a generic system doing that division?
Of course not, but with the capability there, you will write your code
differently anyway.
James
 

Re: Unwelcome advice

James K Smith writes:
Quote
Any programming other than that for the desktop. Which is why if
you're doing this type of programming currently, you're not using
Delphi or FPC.
Delphi (and presumably FPC) works just fine for server applications.
--
Regards,
Bruce McGee
Glooscap Software
 

Re: Unwelcome advice

Eric,
Massively parallel operations are the key to decent use of
massive multicore machines.
IMHO, you start with the problem and determine how to solve
it. SOME problems will benefit significantly from massively
parallel operation, others not at all.
So, is there a problem you're trying to solve with Delphi
that requires utilization of massively parallel operations?
Google/Yahoo/etc servers come to mind.
Intel needs to worry about it. Google and Yahoo need to
worry about it. I hope Codegear keeps their ear to the
ground with respect to what customers are doing but doesn't
get too distracted with it.
Dan
"Eric Grange" <XXXX@XXXXX.COM>writes
Quote
"developers should start thinking about tens, hundreds,
and thousands of cores now in their algorithmic
development and deployment pipeline"

blogs.intel.com/research/2008/06/unwelcome_advice.php

Massive multicore already exists today in the form of
stream processors (GPUs), with budding languages to
exploit them like CUDA

www.nvidia.com/object/cuda_home.html#

Of course CodeGear & Delphi will be busy enough catching
up with Unicode or 64bit support for the next couple of
releases... but any comments on what could be in store for
Delphi to go beyond multi-threading?

Eric
 

Re: Unwelcome advice

Eric Grange writes:
Quote
"developers should start thinking about tens, hundreds, and thousands of
cores now in their algorithmic development and deployment pipeline"

blogs.intel.com/research/2008/06/unwelcome_advice.php

Massive multicore already exists today in the form of stream processors
(GPUs), with budding languages to exploit them like CUDA

www.nvidia.com/object/cuda_home.html#

Of course CodeGear & Delphi will be busy enough catching up with Unicode
or 64bit support for the next couple of releases... but any comments on
what could be in store for Delphi to go beyond multi-threading?

Eric
"3# And the question of the Borland Leadership Team was this: In what respect
does your input enhance the four phase application lifecycle management delivery
system threshold holistic architecture initiative deployment validation baseline
best practice success driver?"
Priceless :)
Lee
 

Re: Unwelcome advice

Lee Jenkins writes:
Quote
Eric Grange writes:
>"developers should start thinking about tens, hundreds, and thousands
>of cores now in their algorithmic development and deployment pipeline"
>
>blogs.intel.com/research/2008/06/unwelcome_advice.php
>
>Massive multicore already exists today in the form of stream
>processors (GPUs), with budding languages to exploit them like CUDA
>
>www.nvidia.com/object/cuda_home.html#
>
>Of course CodeGear & Delphi will be busy enough catching up with
>Unicode or 64bit support for the next couple of releases... but any
>comments on what could be in store for Delphi to go beyond
>multi-threading?
>
>Eric

"3# And the question of the Borland Leadership Team was this: In what
respect does your input enhance the four phase application lifecycle
management delivery system threshold holistic architecture initiative
deployment validation baseline best practice success driver?"

Priceless :)

Oops, sorry folks. This should have been for the "Sons of Kahn" post. :)
Lee
 

Re: Unwelcome advice

But they don't take advantage of multicore, which is even more valuable wrt
server apps, and server apps are the future.
James
 

Re: Unwelcome advice

Quote
What are you going to do in 3
years from now when the new processors run 128 or 256 cores each as
powerful
as a current low level machine but you can only use one of the cores
properly
because the language you use doesn't support parallization of even the
most
basic tasks?
Use one of the cores, and lease out the others to apps which can use
multicore, across disparate machines!
 

Re: Unwelcome advice

"Hannes Danzl[NDD]" <XXXX@XXXXX.COM>writes
Quote
>Intel needs to worry about it. Google and Yahoo need to worry about it. I
>hope Codegear keeps their ear to the ground with respect to what customers
>are doing but doesn't get too distracted with it.

Frankly, I think you're wrong. With 16-core mass market CPUs on the horizon, quad cores in normal consumer hands, and practically every new machine already at least a dual core, there's no time to lose in getting on that wagon.
I'm not convinced that parallelization is relevant to most present day
desktop applications. Most business type native desktop apps are close to
instantaneous already. I am happy to have 16 cores, but I suspect that I
have no work for them. When I compare my 3.2GHz P4 and Q6600, I am hard
pressed to tell the difference. So while I do believe we will find uses for
more computing power, I don't yet see those applications on my PC. Perhaps
better speech recognition? Do you have any suggestions? Perhaps your
speciality can use parallel tasks.
I see no point in parallelizing the tiny, trivial pieces of most apps, which complete in microseconds. As you divide finer and finer, the overhead of creating and managing the threads exceeds the work done by the threads. That leaves the less common computation-intensive tasks as candidates for parallelization. The classics are image processing and MP3 encoding.
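That overhead argument can be made concrete with a rough sketch (in Python here, purely for illustration): dispatching microsecond-scale tasks to a pool costs more per task than the task itself.

```python
# Rough sketch: run many microsecond-scale tasks inline, then again via a
# thread pool. The per-task dispatch overhead (queueing, wakeups, result
# handling) dominates when each unit of work is tiny.
import time
from concurrent.futures import ThreadPoolExecutor

def tiny_task(x):
    return x + 1  # finishes in well under a microsecond

N = 10_000

start = time.perf_counter()
inline = [tiny_task(i) for i in range(N)]
t_inline = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    pooled = list(pool.map(tiny_task, range(N)))
t_pooled = time.perf_counter() - start

# Same results either way; the pooled run just pays overhead on every task.
print(f"inline: {t_inline:.4f}s  pooled: {t_pooled:.4f}s")
```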
Any statement by Intel is more than just a comment on the science of chip making - it is also a means to influence the market and advance the fortunes of Intel. Intel's problem is that they can make more cores than most software can use, and their business depends on obsoleting last year's product. We don't have to repeat everything we read in the tech press - especially when you consider how silly most of the popular tech journalists are. Microsoft has often told us what the future holds - and has often been wrong.
I suspect that single-CPU throughput will continue to increase - graphene transistors, for example. A single core of Intel's top of the line processors is now much faster than one from even a couple of years ago.
The issue of power consumption does concern me. A modern home PC with a
graphics processor is a room heater. Possibly parallelization can be used
to reduce power.
Faster storage would make more difference to me than more cores - hard disk
spin-up time and access time are the delays I notice.
By all means, add parallel processing features to your favourite language.
It may be of use to some of us, but I don't see it revolutionising ordinary
desktop software.
Roger Lascelles
 

Re: Unwelcome advice

"Roger Lascelles" <relATaantDOTcomDOTau>writes
Quote

I'm not convinced that parallelization is relevant to most present day
desktop applications. Most business type native desktop apps are close to
instantaneous already.
Hi Roger,
I agree with you on this point. There will certainly be _some_ applications
that will benefit tremendously from parallelization, but I suspect most
desktop apps won't. One of the guys I used to work with once told me that
any effort to optimize software for him would be wasted once the compute
time was as fast as he could go outside and smoke a cigarette. While that's
far too slow for most folks these days, his point is well taken. My main
app takes _maybe_ a second to do a complex calculation, and that would
probably be when you happen to click the go button right when the OS wanted
to do something. There's not much point in trying to make it faster than
that. The users surely wouldn't notice.
Cheers,
Van
 

Re: Unwelcome advice

"Lee Jenkins" <XXXX@XXXXX.COM>writes
Quote
Lee Jenkins writes:
>
>"3# And the question of the Borland Leadership Team was this: In what
>respect does your input enhance the four phase application lifecycle
>management delivery system threshold holistic architecture initiative
>deployment validation baseline best practice success driver?"
>
>Priceless :)
>

Oops, sorry folks. This should have been for the "Sons of Kahn" post. :)
Ah, but it fits like a glove. :^)
Cheers,
Van
 

Re: Unwelcome advice

Quote
The issue of power consumption does concern me. A modern home PC with a
graphics processor is a room heater. Possibly parallelization can be used
to reduce power.
That's exactly where my point was heading. Parallelization currently looks set to take over from making single cores ever faster and thus more complex. It looks like cores are going to become simpler and probably more specialized (e.g. 16 cores specialized for floating point, 16 specialized for integer, etc). And that concerns me, because to run an application on such a beast it needs to follow certain architectural guidelines and setups.
Quote
By all means, add parallel processing features to your favourite language.
It may be of use to some of us, but I don't see it revolutionising ordinary
desktop software.
Well, let's talk about that 5 years from now, when Office 2013 comes with voice recognition, 3D gesture UI, etc... Increase the power of the hardware and the software will find a way to use it. Don't worry about that.
--