
Re: Why would I use .NET?


2006-10-28 03:32:54 PM
delphi103
Quote
I'm trying to find an unbiased answer to a simple question: Aside from
security issues, what kinds of problems does .NET 2.0 solve, or will 3.0 solve?

All I seem to hear around me are either generic answers or fanatic answers,
both for and against. Why would someone be interested in .NET programming
at all?
I actually like .NET quite a bit. I find myself quite productive with the
platform. I am not going to argue whether it is more productive than VCL or
Java. I liked VCL in the VB6 days but switched to .NET when it came out. I
like Java as a platform but not the language (it is different enough from C#
that I notice it). Specifically, why would you want to use .NET? It really
depends on what kinds of problems you are trying to solve, whether you are
concerned about the future of the vendor, and the sheer number of people that
use the language.
I'm specifically an ASP.NET developer and find .NET to be particularly well
suited for high-load, high-volume server-type applications. I have written
(in C#) a communication server (chat, doodle, voice) that can, on a single
machine (dual-core CPU with gigs and gigs of RAM), simultaneously handle over
100,000 active sessions with absolute ease.
From a business-object architecture perspective, I don't know whether .NET
shines above Java or Delphi, in the sense that a business object is really
just an entity with certain aspects (Create, Delete, Update, Insert, Post,
Reverse, Calculate, etc.) and some properties. Hardly anything revolutionary.
But when you need to get into other domains (Web Services, XML, transactions
(not just database transactions, but even transactions on string
manipulation or your own custom types of transactions), X509, asynchronous
and/or queued operations, etc.), then .NET really shines.
From a heavy numerical-crunching perspective, .NET really doesn't shine as
well as a native language would. If you are writing your own video codec or
an atomic nuclear-reaction simulation, then you may not want to use pure
managed code... you could use a mixed-mode C++ scenario quite easily to get
performance in the calculations and still have access to the CLR.
If you want the features of Windows Workflow (which is basically a next-gen
BizTalk built into the framework) then .NET is your best choice, as 3.0 is
designed with this in mind. If you want access to the latest vector display
system that supersedes GDI and GDI+ and uses DirectX under the hood, you
probably can use native code if you want, but it'll be so much easier in .NET.
If you want the *Query functionality (to query XML, object hierarchies, data,
or whatever else, just like SQL queries relational databases), then .NET is
the only way to go, at least until a few years from now when DevCo
incorporates the feature into the .NET flavor of Delphi (unless they decide
to implement it in native Delphi as well, which I think is unlikely; just my
opinion).
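To give a rough idea, here is a sketch of what such a query over a plain
in-memory object collection might look like once the feature ships (the Order
type and the numbers are made up purely for illustration):

using System;
using System.Linq;
using System.Collections.Generic;

class Order
{
    public string Customer;
    public decimal Total;
}

class QuerySketch
{
    static void Main()
    {
        List<Order> orders = new List<Order>
        {
            new Order { Customer = "Acme",  Total = 120m },
            new Order { Customer = "Acme",  Total =  80m },
            new Order { Customer = "Besto", Total = 300m },
        };

        // SQL-like query over objects: group, filter, and project.
        var bigSpenders =
            from o in orders
            group o by o.Customer into g
            where g.Sum(o => o.Total) > 150m
            select new { Customer = g.Key, Spent = g.Sum(o => o.Total) };

        foreach (var c in bigSpenders)
            Console.WriteLine("{0}: {1}", c.Customer, c.Spent);
    }
}
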
If you want business-to-business type applications, you really aren't
limited to .NET. .NET certainly makes web services much more intuitive, IMO,
than Java/C++/Delphi (currently), but you can use .NET just as a broker for
B2B and use some other language for your end of the transaction (if you so
choose).
My particular reasons for liking .NET are that I truly enjoy the platform,
but I also find my employment with it. It's important to me to be able to
move to other employers if the need arises, and that's easier with .NET/Java
than with Delphi (here in the USA). I do like BCB/Delphi, but I spend all my
time in the MS world, though that is starting to change for reasons I won't
get into here (in the interest of staying on topic).
I cannot give you any *excellent* reasons to use .NET. They are all
subjective and, trust me, you will need at least 6-12 months of
full-time work on some challenging .NET projects to become familiar enough
with the framework, idioms, and especially the quirks in order to truly make
a decision for yourself whether or not you like it. I hated it at first, but
as I learned it and it solved more and more of my problems that weren't easy
in VB6 or C++ or my (limited) experience at the time with VCL, I began to
really love it and learned to push its boundaries.
I even created, in C#, an 8-bit NES emulator capable of playing all NES games.
The first version really sucked, but as I began to understand very
intimate details of the CLR and JIT and such, and changed the architecture
of the CPU core and graphics display in the emulator, I went from 5 FPS on a
3 GHz machine to 60+ FPS on a 1.7 GHz machine quite easily. I haven't
publicly released the emulator (yet), but I am working on a 16-bit SNES clone
and it's doing pretty well.
So .NET is capable in terms of performance, network scalability, and even web
service scalability and such, but I think it only falls short at real,
bona fide numerical crunching and some Win32/Win64 platform support regarding
how they implemented WinForms. But even then, I have worked around
its limitations without *much* effort.
My advice: give it a try. Give it time, though... give it a real chance,
and then make a decision. Asking in a newsgroup where the majority of
participants are heavily biased against .NET and want little to do with it
won't get you the kind of answers you're looking for (unless you're looking
for support and camaraderie regarding shunning .NET).
Thanks,
Shawn
 
 

Re: Why would I use .NET?

Quote
I think that using .NET inside the DB is more marketing than real value.
I shudder to think what'll happen in a year or two when we start encountering
CLR SPROCs everywhere, but they do have their place. The application we use
stores the access permissions in a very specific binary format that needs to
be decoded whenever a permission in the application is requested. This is
usually handled in source code, which then controls access to the database.
But for some very good reasons, we needed the database itself to be able to
decode the permissions as well. We ended up writing a T-SQL UDF that does
some intense bit-twiddling and binary operations, which took us almost a year
to completely debug, whereas in C# it's less than 30 lines of code. T-SQL
just does not express binary operations very easily, despite having had three
DBAs and two programmers work on this UDF over the course of the year.
Using a CLR SPROC in SQL Server 2005 completely solves that problem. We can
expose/replicate the original C# function that does the decode as a CLR
UDF/SPROC, and the problem only takes a day or so to completely
debug/test/deploy/forget.
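As a rough sketch of the shape such a function takes (the bit layout below is
invented for illustration and is not our real permission format):

using Microsoft.SqlServer.Server;
using System.Data.SqlTypes;

public static class PermissionFunctions
{
    // Decode one permission flag out of a packed binary permission blob.
    // Deployed to SQL Server 2005 as a CLR scalar UDF.
    [SqlFunction(IsDeterministic = true, IsPrecise = true)]
    public static SqlBoolean HasPermission(SqlBytes blob, SqlInt32 permissionId)
    {
        if (blob.IsNull || permissionId.IsNull)
            return SqlBoolean.Null;

        byte[] data = blob.Value;
        int byteIndex = permissionId.Value / 8;
        int bitIndex  = permissionId.Value % 8;

        if (byteIndex >= data.Length)
            return SqlBoolean.False;

        // Plain bit-twiddling that is painful to express in T-SQL.
        return (data[byteIndex] & (1 << bitIndex)) != 0;
    }
}
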
For most other uses, I don't see compelling reasons to use CLR integration
in SQL Server.
Thanks,
Shawn
 

Re: Why would I use .NET?

Quote
Just see the Debian benchmark comparing C# to Pascal:

shootout.alioth.debian.org/debian/benchmark.php

Here is a benchmark on Windows with similar results:

dada.perl.it/shootout/

Managed code is more than 15 times slower and uses 15 times more memory
than native Pascal code.

Can you show a respectable benchmark that proves otherwise?

I've seen many "benchmarks", usually "proving" that .NET is slower. A couple
of things most "benchmarks" (didn't look at the above linked) are really good
at:
1) They usually use 1-to-1 source mapping as closely as possible. This has
the net effect of not taking advantage of the language/platform
optimizations that would drastically improve the results. It is good at
showing what a direct port might do, but is very lousy at comparing the
languages/platforms. The solution is to rewrite the code to take bona fide
advantage of the platform and its features. I created an 8-bit NES emulator
that sucked at first, but changing the design of the virtual CPU and learning
more about the JIT and the CLR itself so drastically improved performance that
you can't tell from the performance that it's written in 100% C# rather than
C++.
2) The benchmarks usually start counting from the time the .NET
application/component starts up. That is wrong. The first time the
application starts, it has to JIT the MSIL code. The proper way to benchmark
for performance is to "warm up" the .NET application and make sure all the
code paths that will be touched during the benchmark have been touched, and
then commence the comparison; otherwise you are figuring the initial JIT
into the performance, which is a very lousy thing to do if you truly want to
be objective (see the sketch after this list). Or, they can simply NGEN the
application and avoid the JIT altogether, but most benchmarks don't do that.
3) They suffer from poor understanding of .NET in general and so create a
poor benchmark.
4) They fail to create a real-world test. Timing something like

for (int i = 0; i < 10000000; i++)
    value += i << 2; // i * 4

is a lousy real-world example. I have never written something like that in
my 15 years of professionally writing production code.
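To make point 2 concrete, here is a rough sketch of the warm-up approach (the
workload and iteration counts are arbitrary placeholders):

using System;
using System.Diagnostics;

class WarmupBenchmark
{
    static void Main()
    {
        // Warm-up: run the workload once so the JIT compiles every code path.
        // This run is NOT timed.
        RunWorkload();

        // Now time the already-JITted code.
        Stopwatch sw = Stopwatch.StartNew();
        const int runs = 10;
        long checksum = 0;
        for (int run = 0; run < runs; run++)
            checksum += RunWorkload();
        sw.Stop();

        Console.WriteLine("Average: {0} ms (checksum {1})",
                          sw.ElapsedMilliseconds / (double)runs, checksum);
    }

    static long RunWorkload()
    {
        // Placeholder workload standing in for the real benchmark body.
        long sum = 0;
        for (int i = 0; i < 10000000; i++)
            sum += i;
        return sum;
    }
}
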
I could give other examples, but this thread wearies me. I have never seen a
.NET benchmark that convinced me its authors were truly trying to be
objective, or that they understood how to write the test to take full
advantage of each platform and squeeze it for all it's worth. 1:1 ports of
code are a very poor way to benchmark.
I do .NET for a living and, while it's not perfect, I have never had a
performance or stability complaint, and we service millions of transactions
each day in our application (we write a very full-featured insurance
application and integrated accounting system). There has not been a single
time in my 4 years with the company (I started when the project was only 2
months old) where we regretted .NET for any reason whatsoever. While there
were many doubters during the first year (and even those who left with the
prediction that the project would fail because we use .NET), once our
understanding and command of the platform matured, we learned there is
nothing we cannot achieve, and there are no performance issues that adversely
affect our aggressive performance and scalability requirements/objectives.
Thanks,
Shawn
 

Re: Why would I use .NET?

Quote
On the other hand, as others have mentioned, you have to install the .NET
framework to use .NET, and the code can be slower. Since code for .NET is
compiled to native code the first time it is run, and will use the native
code after that until you change the executable, it theoretically won't be
that much slower, unless the managed APIs you use are slower than the
equivalent Win32 ones you use. I suspect that today they are slower, but I
also expect that MS will invest a lot of money in making them faster over
time, as their company depends on it. Finally, no matter what they do,
garbage collection will occasionally slow down your app for short periods.
Depending on how you write your app, this delay can be appreciable.
One thing to keep in mind, which most lose sight of, is that .NET code is
JITted once, the first time a particular code path is encountered. Throughout
the entire life of the running application it can keep on encountering new
code paths and thus it will keep on JITting. Each time a previously JITted
code path is executed, however, it is using native code. That can be mitigated
using NGEN, except for ASP.NET, which has its own rules regarding JIT/NGEN.
But when running code, more things are also going on. First, the runtime
might be checking for overflows on all numerical computations (depending on
the compiler's overflow-checking setting), such that byte x = byte.MaxValue;
x++; will throw an overflow exception, but you can write
unchecked { byte x = byte.MaxValue; x++; } and x will wrap around to its
minimum value. Doing so can improve performance drastically (it was one of
the things I did in my C# 8-bit NES emulator to nearly double performance
from that alone, in all the places where calculations were happening).
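A minimal sketch of the difference, using explicit checked and unchecked
blocks (the class name is only for illustration):

using System;

class OverflowDemo
{
    static void Main()
    {
        byte x = byte.MaxValue;

        try
        {
            // With overflow checking, narrowing 256 back into a byte throws.
            checked { x++; }
        }
        catch (OverflowException)
        {
            Console.WriteLine("checked: overflow detected");
        }

        x = byte.MaxValue;
        // Without checking, the value silently wraps around to 0.
        unchecked { x++; }
        Console.WriteLine("unchecked: x = {0}", x);
    }
}
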
In other cases, using an Int32 (C# int, Visual Basic Integer) on a 32-bit
processor instead of short/byte/long also drastically improves performance,
not so much because of the JIT in this case but because of the CPU itself
and how it likes working with 32-bit data instead of 16-bit data or what
have you.
In other cases, depending on what you are trying to do, the CLR might have
JITted some code to check permissions before it honors your request; there is
overhead for that. It is completely configurable, of course, but by default
the security checks are there.
Next, when you call into a function in the .NET library somewhere, you have
no idea what it is doing internally. So yes, .NET might be "slower", but is
it because it is JITting crappy code for you, or because the foundational
functions it is calling are doing a lot of processing internally? Is it
because you are in DEBUG mode instead of RELEASE mode?
Code that you write that has very few dependencies is blazing fast once
JITted. When it is the framework itself causing sluggishness, it's always
fair game to optimize better in the future, and MS has done a great job thus
far.
Further, there are sometimes architectural constraints that have nothing to
do with .NET at all. For example, .NET uses GDI+ by default for its
drawing. GDI+ is not as completely hardware accelerated as GDI itself is. So
when a gradient is drawn to the screen, it is done in software rendering
mode, which is painfully slow. But if you call that same GDI+ routine from
the native Win32 version of GdiPlus.dll, you'll see identical performance
characteristics (in C/C++/Delphi, of all things).
Lastly, most .NET applications are I/O bound (networking/disk/data/sound
output/mouse input/keyboard input) such that there is far more idle time
than there is processing time. Thus, nanosecond/millisecond-level
optimizations are hardly a concern. I'd take macro optimization (design
and maintenance related issues) over micro optimization (pure timing) any
day where performance is generally good enough (by "good enough" I mean it
becomes ridiculously stupid to make code harder to maintain and read just to
squeeze a 3% performance improvement out of a loop that only executes 5% of
the time anyway (or 20% for that matter)).
Thanks,
Shawn
 

Re: Why would I use .NET?

Jon Shemitz writes:
Quote
But delivering functionally identical apps in less time means that
development is cheaper.
I think this point sums up nicely why .NET hasn't seen the same rate of
adoption among Delphi developers as it has among Visual Studio
developers. We don't get anywhere near the same level of productivity
improvements that developers coming from Visual Studio 6 got.
In fact, I have spoken to people who find developing .NET apps is
actually a slower and less productive process than developing Delphi
Win32 ones. But because Microsoft has done such a great job of marketing
.NET as a "safe" environment, they're being forced by client demands to
use .NET, despite it being an inferior development platform overall in
their eyes.
--
Cheers,
David Clegg
XXXX@XXXXX.COM
cc.borland.com/Author.aspx
QualityCentral. The best way to bug Borland about bugs.
qc.borland.com
"Asleep at the switch? I wasn't asleep, I was drunk." - Homer Simpson
 

Re: Why would I use .NET?

Jon Shemitz writes:
Quote
if you don't
need to unload your plugins, you just load the plugin assembly (using
Assembly methods like LoadFrom); call GetExportedTypes to scan the
public types; and use Activator.CreateInstance to create an instance
of the Type that is assignment compatible with your plugin interface.
Then you cast the returned object to the plugin interface, and away
you go.

Something like..
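A rough sketch of that approach (the IPlugin interface and the method name
here are placeholders for your own plugin contract):

using System;
using System.Reflection;

public interface IPlugin
{
    void Execute();
}

public static class PluginLoader
{
    public static IPlugin LoadFirstPlugin(string assemblyPath)
    {
        // Load the plugin assembly; it stays loaded for the AppDomain's lifetime.
        Assembly assembly = Assembly.LoadFrom(assemblyPath);

        // Scan the public types for one that implements the plugin interface.
        foreach (Type type in assembly.GetExportedTypes())
        {
            if (typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract)
            {
                // Create an instance and cast it to the interface.
                return (IPlugin)Activator.CreateInstance(type);
            }
        }

        throw new InvalidOperationException(
            "No plugin type found in " + assemblyPath);
    }
}
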
Thanks!
Quote
There's always
<www.devsource.com/article2/0,1759,1790388,00.asp> ;-)
Gee, I wonder why you gave that link? <g>
Thanks again.
--
Dave Nottage [TeamB]
 

Re: Why would I use .NET?

Shawn B. writes:
Quote
3) They suffer from poor understanding of .NET in general and so create a
poor benchmark.
I guess you didn't take a close enough look at the Debian benchmark. If you
don't like the current C# code there, you can contribute your own (if
it's better). This ensures it's fair.
 

Re: Why would I use .NET?

Shawn B. writes:
Quote
I've never had a performance or stability complaint and we service millions of transactions
each day in our application
But then again, saying this particular app runs fast proves nothing
about whether it would be faster if you had implemented it in native code.
 

Re: Why would I use .NET?

Shawn B. writes:
Quote
>Just see the Debian benchmark comparing C# to Pascal:
>
>shootout.alioth.debian.org/debian/benchmark.php
>
>Here is a benchmark on Windows with similar results:
>
>dada.perl.it/shootout/
>
>Managed code is more than 15 times slower and uses 15 times more memory
>than native Pascal code.
>
>Can you show a respectable benchmark that proves otherwise?
>

I've seen many "benchmarks", usually "proving" that .NET is slower. A couple
of things most "benchmarks" (didn't look at the above linked) are really good
at:
Maybe you should "look at the above linked"? One of the things that
"wearies" me is people making generalizations about benchmarks in
response to specific comments about specific benchmarks.
If you had looked, you would have seen that the computer language
shootout comparison between Mono C# and Free Pascal /does not/ show
"managed code is more than 15 times slower and uses 15 times more
memory".
Most of the Mono C# programs are only 1.5x - 3x slower than the
corresponding Free Pascal programs. (The obvious exception is the
"hello world" program we use to give an indication of the difference in
program startup time.)
And the difference is somewhat less with the newer Mono C# on Intel:
shootout.alioth.debian.org/gp4/benchmark.php
You might even have noticed that the Mono C# programs are naive, and
the Free Pascal programs have been progressively tuned ;-)
 

Re: Why would I use .NET?

Felipe Monteiro de Carvalho writes:
Quote
Shawn B. writes:
>3) They suffer from poor understanding of .NET in general and so create a
>poor benchmark.

I guess you didn't take a close enough look at the Debian benchmark. If you
don't like the current C# code there, you can contribute your own (if
it's better). This ensures it's fair.
I took a better look:
- Which benchmarks show Free Pascal 15x faster than Mono C#?
- How many other benchmarks are there?
 

Re: Why would I use .NET?

Isaac Gouy writes:
Quote
If you had looked, you would have seen that the computer language
shootout comparison between Mono C# and Free Pascal /does not/ show
"managed code is more than 15 times slower and uses 15 times more
memory".
I just put the numbers into a calculator, and Mono C# is 7.5 times slower
and uses 21.3 times more memory.
 

Re: Why would I use .NET?

Shawn B. writes:
Quote
2) The benchmarks usually start counting from the time the .NET
application/component starts up. That is wrong. The first time the
application starts, it has to JIT the MSIL code. The proper way to benchmark
for performance is to "warm up" the .NET application and make sure all the
code paths that will be touched during the benchmark have been touched, and
then commence the comparison; otherwise you are figuring the initial JIT
into the performance, which is a very lousy thing to do if you truly want to
be objective. Or, they can simply NGEN the application and avoid the JIT
altogether, but most benchmarks don't do that.
Could it be that JIT is sometimes faster than NGEN?
Mono C# AOT compared to Mono C# JIT:
shootout.alioth.debian.org/gp4sandbox/benchmark.php
 

Re: Why would I use .NET?

Felipe Monteiro de Carvalho writes:
Quote
Isaac Gouy writes:
>If you had looked, you would have seen that the computer language
>shootout comparison between Mono C# and Free Pascal /does not/ show
>"managed code is more than 15 times slower and uses 15 times more
>memory".

I just put the numbers into a calculator, and Mono C# is 7.5 times slower
and uses 21.3 times more memory.
Felipe, what "numbers" did you just put on a calculator?
 

Re: Why would I use .NET?

Quote

Personally, I couldn't care less about shiny, cute icons. I want things to
work well, and unless there is a good reason for a change, why force it on us
all? I dislike needing to stop thinking about my code to pay attention to the
tools. From what I read, Vista discards the venerable Alt-Tab means of
switching apps. What can they be thinking?

My boss always says (and it annoys me too), "You're thinking like a
programmer. Users love shiny, cute icons."
Craig.
 

Re: Why would I use .NET?

Quote

Plus you need admin rights to install it. Much easier to just drop an exe in
some user-directory and go on from there.

To be fair, not many apps these days, even those developed in Delphi,
can just be shipped as an exe and run.
Quote

Can you imagine how bad this will be in a couple of years, if MS continues
coming up with new frameworks at the current rate?

"Hey, I've got 1.0, 1.1, 2.0, 3.0, 3.5, 4.5 installed" - "You need 4.1, sorry!"

In 5 years there have been only 3 versions released (1.0, 1.1 & 2.0), so
I doubt there are going to be huge issues with tens of different versions
for a while yet.