Board index » cppbuilder » Re: Division by zero and different compilers

Re: Division by zero and different compilers


2005-08-27 02:41:23 PM
cppbuilder19
Andreas Hausladen wrote:
Quote
[...]
This little program compiled with bcc32 throws an "unknown software
exception" (aka division by zero).

#include <stdio.h>

double getValue() { return 2.1; }

int main()
{
    double x = getValue();
    double y = getValue();

    int z = 10 / (x - y);
    printf("%d\n", z);
    return 0;
}

But if I compile this code with msvc it prints a "0".
Yet it doesn't return 0; it returns 1.#INF000000000000, i.e. INFINITE.
Quote

So which compiler is right?

Both - your program has a cast from double to integer, which converts the
INFINITE value to 0.
The program might check the returned value and handle errors without
exception handling, since the exception thrown is a structured exception
by Windows. Catching structured exceptions is not part of the C++ standard.
Whether Windows throws a structured exception or returns an INFINITE value
can be controlled by changing the floating-point control word
(controlfp, _controlfp, control87, _control87).
The difference between Borland and Microsoft is that Borland by default
lets Windows throw exceptions, while Microsoft lets it return INFINITE
values.
Change your program to the following:
#include "stdafx.h"
#include <stdio.h>
#include <iostream>
double getValue() { return 2.1; }
int main()
{
    double x = getValue();
    double y = getValue();
    double z = 10 / (x - y);
    double xx = 0;
    std::cout << z << std::endl;
    return 0;
}
and it will return
1.#INF
And by the way: If you have an integer divide by zero both compilers
will (let Windows) throw a structured exception.
Andre
 
 

Re:Re: Division by zero and different compilers

Gord wrote:
Quote
Andreas Hausladen wrote:


>So which compiler is right?


10 divided by zero is clearly not zero. It is definitely equal to infinity.
I would absolutely NOT want a compiler to return a 0 after such a calculation.

Neither compiler returns 0.
VC returns INFINITE. But since this value is cast to an integer type,
the printed result cannot be infinite.
VC7 converts it to 0; VC8 converts it to -1 (0xFFFFFFFF).
I haven't checked what value BCB converts it to when FPU exceptions are disabled.
Andre
 

Re:Re: Division by zero and different compilers

You are correct - wrong memory on my part. It's overflow and not divide by
zero that is masked out.
- Ed
Quote
Alex Bakaev wrote in message
news:430fa9b1$ XXXX@XXXXX.COM ...

>The integer divide by zero exception is masked out.
>
I don't think so. Just run this:

int a = 0;
printf( "%d\n", 10 / a );
 


Re:Re: Division by zero and different compilers

But if only one thread has an exception and the other threads are operating okay,
then for some apps keeping all the other threads operating might make more sense. Picture some
app that processes requests from hundreds of users. Picture an RDBMS for example.
Want to crash the whole thing? Homie don't think so.
Thomas Maeder [TeamB] wrote:
Quote

IMHO, throwing an exception upon a division by zero is about as bad an
idea as throwing an exception when a pointer is dereferenced that
shouldn't be. In both cases, the program logic has failed, causing the
state of the entire process to be invalid; in most situations, the
best reaction is to kill the process. Whereas exceptions are a good
idea for situations that you can normally recover from.
 

Re:Re: Division by zero and different compilers

Randall Parker < XXXX@XXXXX.COM >writes:
Quote
But if only one thread has an exception and the other threads are
operating okay for some apps keeping all the other threads operating
might make more sense. Picture some app that processes requests from
hundreds of users. Picture an RDBMS for example. Want to crash the
whole thing? Homie don't think so.
A few remarks:
1) a multi-hundred user database would not implement a
thread-per-connection strategy for connection management. Threads
do not scale that well.
2) if you have a logic bug in your application, and you can not
determine the integrity of the running process, do you really want
it to be writing into hundreds of records for those hundreds of
users some data that is possibly corrupted? Once in an undefined
state, you do not and can not determine what data is corrupted and
what data is ok to use.
3) Of course we don't want to crash the whole thing. But in this
case, it has ALREADY CRASHED, for all practical purposes.
Continuing to run is simply operating in denial. It does not make
the corruption go away just because you don't like it. The thing
which you don't want to have happen has already happened, so now
what?
It's an invalid process that might offer the illusion to people
that it didn't actually crash, but it's lying to anyone who
believes that. Operations might even appear to work, but overall
it has potential for making things a whole lot worse.
--
Chris (TeamB);
 

Re:Re: Division by zero and different compilers

Chris Uzdavinis (TeamB) wrote:
Quote
Randall Parker < XXXX@XXXXX.COM >writes:


>But if only one thread has an exception and the other threads are
>operating okay for some apps keeping all the other threads operating
>might make more sense. Picture some app that processes requests from
>hundreds of users. Picture an RDBMS for example. Want to crash the
>whole thing? Homie don't think so.


A few remarks:

1) a multi-hundred user database would not implement a
thread-per-connection strategy for connection management. Threads
do not scale that well.
I didn't say each of the connected users would have a thread just for them.
Quote

2) if you have a logic bug in your application, and you can not
determine the integrity of the running process, do you really want
it to be writing into hundreds of records for those hundreds of
users some data that is possibly corrupted? Once in an undefined
state, you do not and can not determine what data is corrupted and
what data is ok to use.
Again, depends on the exception. A divide by zero in an application programmer
defined SQL script to process an insert request does not say anything about the
integrity of DB2.
Quote

3) Of course we don't want to crash the whole thing. But in this
case, it has ALREADY CRASHED, for all practical purposes.
Continuing to run is simply operating in denial. It does not make
the corruption go away just because you don't like it. The thing
which you don't want to have happen has already happened, so now
what?
Again, has it really already crashed? Or has just one request crashed?
Quote
It's an invalid process that might offer the illusion to people
that it didn't actually crash, but it's lying to anyone who
believes that. Operations might even appear to work, but overall
it has potential for making things a whole lot worse.
Not necessarily. Again, I can think of many scenarios where the exception is not an
indication that everything is invalid.
Also, in some apps the user really would prefer to keep operating. Suppose it is a
reporting system. Suppose some graphics display module throws an exception. The user
might just want to try displaying the report some other way and see if that works.
 

Re:Re: Division by zero and different compilers

Randall Parker < XXXX@XXXXX.COM >writes:
Quote
Again, depends on the exception. A divide by zero in an application
programmer defined SQL script to process an insert request does not
say anything about the integrity of DB2.
I'm not sure what you mean by application programmer defined SQL
script. Is this C++ code that did the divide by zero? (I missed the
early part of this thread.)
Quote
>3) Of course we don't want to crash the whole thing. But in this
>case, it has ALREADY CRASHED, for all practical purposes.
>Continuing to run is simply operating in denial. It does not make
>the corruption go away just because you don't like it. The thing
>which you don't want to have happen has already happened, so now
>what?

Again, has it really already crashed? Or has just one request crashed?
If it's C++ code that did the division by zero, then your program is
in an undefined state. Since all threads share the same address
space, and physical memory, the entire process is corrupted (or should
be considered corrupted) if the program ever performs undefined
behavior in any thread.
Quote
>It's an invalid process that might offer the illusion to people
>that it didn't actually crash, but it's lying to anyone who
>believes that. Operations might even appear to work, but overall
>it has potential for making things a whole lot worse.

Not necessarily. Again, I can think of many scenarios where the
exception is not an indication that everything is invalid.
If it's undefined behavior, you cannot make _any_ claims about the
resulting state of your program.
You might look at the assembler code and confirm that it will work
anyway, but that is taking advantage of a compiler implementation. In
general, you cannot rely on it working, or continuing to work, and in
the future it may very well stop. If you recompile, there is nothing
that guarantees the compiler will even generate the same assembler
code.
Quote
Also, in some apps the user really would prefer to keep
operating.
All users prefer programs to keep running. But that doesn't mean they
understand the issues. It also doesn't mean that it will do what they
want even if it keeps running. But it might.
Quote
Suppose it is a reporting system. Suppose some graphics display
module throws an exception. The user might just want to try
displaying the report some other way and see if that works.
If the graphics display module is C++ code running in your process's
address space, then its bug is your bug, and your program is
undefined.
The specific application doesn't matter all that much. If it's a
reporting system, it could report wrong values. Or it might report
the expected results. If it's graphics, it may display the image, or
garbage.
Maybe there are situations where you may want to postpone, if
possible, restarting, but those cases would involve no writing data to
persistent storage, and no consequences if the user sees garbage
instead of meaningful output. I'd never generate a report in a known
undefined program.
--
Chris (TeamB);
 

Re:Re: Division by zero and different compilers

Chris Uzdavinis (TeamB) wrote:
Quote
Randall Parker < XXXX@XXXXX.COM >writes:


>Again, depends on the exception. A divide by zero in an application
>programmer defined SQL script to process an insert request does not
>say anything about the integrity of DB2.


I'm not sure what you mean by application programmer defined SQL
script. Is this C++ code that did the divide by zero? (I missed the
early part of this thread.)
Someone can write a stored proc in some sort of interpreted language. But the
interpreted language could get interpreted by C code (I'm assuming that DB2, MS SQL,
etc. are written in C/C++, and I think this is true). Well, the interpreter could
obviously test every denominator before dividing. Or it could not do so and just have
a handler to handle the case.
Quote
If it's C++ code that did the division by zero, then your program is
in an undefined state. Since all threads share the same address
space, and physical memory, the entire process is corrupted (or should
be considered corrupted) if the program ever performs undefined
behavior in any thread.
But if you write a handler that basically gives that event a defined result it is no
longer entirely undefined behavior.
Quote
>>It's an invalid process that might offer the illusion to people
>>that it didn't actually crash, but it's lying to anyone who
>>believes that. Operations might even appear to work, but overall
>>it has potential for making things a whole lot worse.
>
>Not necessarily. Again, I can think of many scenarios where the
>exception is not an indication that everything is invalid.


If it's undefined behavior, you cannot make _any_ claims about the
resulting state of your program.
If your compiler throws an exception when it happens, it is not
undefined behavior. It has a very defined behavior: throwing an exception.
Quote
You might look at the assembler code and confirm that it will work
anyway, but that is taking advantage of a compiler implementation. In
general, you cannot rely on it working, or continuing to work, and in
the future it may very well stop. If you recompile, there is nothing
that guarantees the compiler will even generate the same assembler
code.


>Also, in some apps the user really would prefer to keep
>operating.


All users prefer programs to keep running. But that doesn't mean they
understand the issues. It also doesn't mean that it will do what they
want even if it keeps running. But it might.
They understand that the customer is king. I do what customers want me to do. They
do not want the program to exit.
Look at Java. It has throwable exceptions as part of its semantics. Exceptions in
Java are considered normal. Too normal in fact. There was a big debate on the JDOM
list a few years back on whether JDOM should throw exceptions when an element or an
attribute is not found. I argued against some big name people who wanted exceptions
to be thrown. Eventually (and fortunately) a more restrictive use of exception
throwing was decided upon.
Mind you, I do not like Java exceptions. But I do not think it makes sense to equate
an undefined behavior with throwing an exception. Seems to me that unless the
condition causes unpredictable side effects why not handle it if you can and keep on
going? Why not just make the behavior defined if the engineering trade-off seems
worth it?
Say a dereference of a bad pointer causes an exception. The bad pointer has to be
recognized as bad in order to throw the exception in the first place. Suppose it is
recognized as bad because the program tries to write to a location that is write
protected. Well, it did not corrupt anything by doing so since an exception happened
instead of the write.
You could claim that the bad pointer is an indication that some bug exists that might
have caused other bad things to happen. True enough. But, first of all, all programs
have bugs. Some happen routinely. For example, BCB gives me exceptions sometimes when
I compile. I often have to Build All to get around the exceptions. I do not want BCB
to automatically exit when it hits an internal exception.
Some types of exceptions are used by operating systems routinely as essential to how
the operating system functions. For example, trying to read a memory location that is
paged out of virtual memory? The exception handler brings the data in from the swap file.
 

Re:Re: Division by zero and different compilers

Randall Parker wrote:
Quote
Chris Uzdavinis (TeamB) wrote:

>Randall Parker < XXXX@XXXXX.COM >writes:
>
[...]
Say a dereference of a bad pointer causes an exception. The bad pointer
has to be recognized as bad in order to throw the exception in the first
place. Suppose it is recognized as bad because the program tries to
write to a location that is write protected. Well, it did not corrupt
anything by doing so since an exception happened instead of the write.


The point is that you don't know where the access violation you caught comes
from or how it has corrupted your program.
You know where and why C++ exceptions are thrown, since you have to
throw them on your own. You'll never know why and where a Win32
exception has been thrown.
And a Win32 exception simply cannot be handled in standard C++.
E.g. VC++ had the bad habit of catching even access violations with
catch (...) if asynchronous exception handling was enabled.
At first I thought this was a good idea, since VC++ didn't support __try
__except for functions with local objects, as BCB does.
But it was a really bad idea. The program had bugs, but I didn't
recognize them, because each thread had a catch(...). I only recognized
them when I tested the BCB version or configured the IDE to stop on
access violations.
In short, continuing a program after an access violation (in the same
process - not in a spawned child process) is not a good idea.
It's far better to write a handler, executed on access violations, which:
a) Writes a dump file of the application
b) Tries to write backup data
c) Outputs an error message
d) Terminates with an error result
e) Lets Windows restart the program if it's a service
After that the customer might be somewhat angry, but not as angry as if
the program destroyed valid data or continued without responding anymore.
The developer has the chance to debug the dump file and fix the bug.
Most BCB (only) programmers don't know that much about dump files, since
it's not that easy to convert a map file to a *.pdb file and the IDE
doesn't support debugging them directly.
I don't know how Borland debugs the access violations reported by its
beta testers. But I think dump files are the easiest and most effective
way to do that, and it would be a pity if DeXter didn't support them
natively, since Windows routinely writes them and sends them to the
developer, if it's allowed to.
And Windows Vista has a context menu entry to write the current state of
the application to a dump file.
Even if one were to follow the strategy of continuing after access
violations, at least a dump file can be written and analyzed before the
application continues.
Perhaps the best strategy would be to let the user decide whether to continue
and potentially corrupt data or to terminate the application.
Andre
 

Re:Re: Division by zero and different compilers

Andre Kaufmann wrote:
Quote
it would be a pity if DeXter wouldn't support them natively,
since Windows permanently writes them and sends them to the
developer, if it's allowed to.
If an exception happens in Delphi 2005 you get a call stack, the Windows
version, the versions of loaded libraries and a "Send" button you can press.
--
Regards,
Andreas Hausladen
(andy.jgknet.de/blog)
 

Re:Re: Division by zero and different compilers

Alex Bakaev [TeamB] wrote:
Quote
Ed Mulroy wrote:

>Floating point division by zero is supposed to raise an exception.
>
>If I remember correctly the language standard specifies that integral
>divide by zero be ignored.
>
The CPU will raise an exception in case of integer divide by zero. For
floating point the result will be NaN, I think.

.a
Floating point division by zero should result in Inf (infinity). The
IEEE standard defines both a positive and a negative infinity. You can get
NaN (Not a Number) by taking the square root of -1.0.
Jim Dodd
Onset Computer Corp.
 

Re:Re: Division by zero and different compilers

Randall Parker < XXXX@XXXXX.COM >writes:
Quote
Someone can write a stored proc in some sort of interpreted
language. But the interpreted language could get interpreted by C
code (I'm assuming that DB, MS SQL, etc are written in C/C++ and I
think this is true). Well, the interpreter could obviously test every
denominator before dividing. Or it could not do so and just have a
handler to handle the case.
In that situation, then, if the interpreter itself does a division by
zero, you may have undefined behavior due to the interpreter.
However, if the interpreter detects the problem itself and reports it
as a normal error in the language it defines, then quite likely it's
not a problem for your host application.
Quote
But if you write a handler that basically gives that event a defined
result it is no longer entirely undefined behavior.
In the case where undefined behavior occurs, there is no recovery,
period. You're writing your position from the perspective that "if
you handle it, it's ok", but that doesn't work. It's only ok if it
(undefined behavior) never happens in the first place.
Quote
If your compiler throws an exception when it happens, it is not
undefined behavior. It has a very defined behavior: throwing an
exception.
That's not true. UNSPECIFIED behavior leaves the implementation
decision up to the compiler vendor. UNDEFINED behavior, however,
cannot ever have expected results. This is part of the definition of
the C++ language, and cannot be overridden by any vendor.
For example, on unix, when your program has performed an illegal
operation (for example, dereferencing a null pointer), you are given a
SIGSEGV signal, which is an indication that you're about to be
shutdown due to misbehavior. If you trap that signal and essentially
ignore it, that does not undo the damage that was done. Similarly,
when Windows sends an access violation to your program, it means "you
crashed". Catching it, as the VCL unfortunately does, does NOT mean
your program is safe to continue simply because you caught that
exception.
Quote
They understand that customer is king. I do what customers want me to
do. They want to not exit the program.
To satisfy their desires, it is necessary to write the program without
undefined behavior in the first place. Don't try to handle the
problem after it occurs, because that cannot be done. Instead,
perform some checks BEFORE doing something that is potentially
invalid, and then don't do it if your tests detect that a problem
would occur. Division by zero, for example, is easy to detect before
it occurs, by inspecting the denominator prior to performing the
operation.
Quote
Look at Java. It has throwable exceptions as part of its
semantics. Exceptions in Java are considered normal. Too normal in
fact. There was a big debate on the JDOM list a few years back on
whether JDOM should throw exceptions when an element or an attribute
is not found. I argued against some big name people who wanted
exceptions to be thrown. Eventually (and fortunately) a more
restrictive use of exception throwing was decided upon.
You're confusing a normal runtime error with a program failure. A
real exception is thrown by the application's code. Undefined
behavior has unknown semantics, ranging from doing nothing noticeable
at all (though possibly still with severe corruption), to receiving a
signal/"structured exception" from the OS. But in neither case is an
exception (in the C++ sense of the word) actually thrown. That the
VCL translates structured exceptions into VCL exceptions is, IMHO, a
grave disservice to their customers who don't understand the issues.
Quote
Mind you, I do not like Java exceptions. But I do not think it makes
sense to equate an undefined behavior with throwing an
exception.
I do not equate them at all. I'm saying that if you can detect
"undefined behavior", throwing an exception does not sufficiently
"handle" the problem.
Quote
Seems to me that unless the condition causes unpredictable
side effects why not handle it if you can and keep on going? Why not
just make the behavior defined if the engineering trade-off seems
worth it?
You won't like this answer, because it is pedantic, but because the
definition of the language clearly states that there are no
requirements on a program after undefined behavior occurs, there are
no requirements on a program after undefined behavior occurs. That
simply is how C++ works.
Quote
Say a dereference of a bad pointer causes an exception.
No, it does not cause an exception. The *only* way for an exception
to be caused is to write C++ code that evaluates a "throw"
expression.
Quote
The bad pointer has to be recognized as bad in order to throw the
exception in the first place.
And if it's not recognized, then silently your program writes into
memory that it shouldn't, and you have a corrupted runtime
environment. Buffer overrun, anyone?
Quote
Suppose it is recognized as bad because the program tries to
write to a location that is write protected. Well, it did not
corrupt anything by doing so since an exception happened instead of
the write.
Once you perform undefined behavior you cannot expect any reasonable
behavior afterwards, unless, perhaps, in some environment the compiler
did clearly define what happens. However, the majority of cases do
not have such documentation from compiler vendors, and certainly, an
OS signal is not a C++ exception, and it's wrong to confuse the two.
Quote
You could claim that the bad pointer is an indication that some bug
exists that might have caused other bad things to happen. True
enough.
Ok, we are in agreement here.
Quote
But, first of all, all programs have bugs.
This does not excuse them. And no, not all programs have bugs like
that. Some bugs are simply missing features, or logic bugs that
perform the wrong calculation. However, memory-corruption bugs are
*never* something we can just accept.
Quote
Some happen routinely. For example, BCB gives me exceptions
sometimes when I compile.
I often have to Build All to get around the exceptions. I do not
want BCB to automatically exit when it hits an internal exception.
www.datanation.com/fallacies/conseq.htm
Quote
Some types of exceptions are used by operating systems routinely as
essential to how the operating system functions. For example, trying
to read a memory location that is paged out of virtual memory? The
exception handler brings the data in from the swap file.
You're confusing "structured exceptions" with C++ exceptions. The two
are completely separate, unrelated ideas.
--
Chris (TeamB);
 

Re:Re: Division by zero and different compilers

Chris Uzdavinis (TeamB) wrote:
Quote
That's not true. UNSPECIFIED behavior leaves the implementation
decision up to the compiler vendor. UNDEFINED behavior, however,
cannot ever have expected results. This is part of the definition of
the C++ language, and cannot be overridden by any vendor.
Not quite.
Undefined behaviour can result in anything at all - including defined
behaviour on a specific platform, if a vendor chooses to define it. This
is not prohibited, nor is it particularly portable.
Unspecified behaviour usually gives a range of expected behaviours,
without nailing down which one will be required. For instance, it is
unspecified whether stack unwinding occurs before terminate is called
when an exception is not caught by main(). A vendor is not required to
document their choice, but the range of behaviours is limited - stack
unwinding may or may not happen, but std::terminate WILL be called and
the application will exit in the specified manner.
AlisdairM(TeamB)
 

Re:Re: Division by zero and different compilers

Chris Uzdavinis (TeamB) wrote:
Quote
That's not true. UNSPECIFIED behavior leaves the implementation
decision up to the compiler vendor. UNDEFINED behavior, however,
cannot ever have expected results. This is part of the definition of
the C++ language, and cannot be overridden by any vendor.
Vendors can and do turn undefined behaviors from the standard into defined behaviors.
Quote
For example, on unix, when your program has performed an illegal
operation (for example, dereferencing a null pointer), you are given a
SIGSEGV signal, which is an indication that you're about to be
shutdown due to misbehavior. If you trap that signal and essentially
ignore it, that does not undo the damage that was done. Similarly,
when Windows sends an access violation to your program, it means "you
crashed". Catching it, like the VCL does, unfortuantely, does NOT mean
your program is safe to continue simply because you caught that
exception.
For some exceptions it is. If you try to write through a pointer into
write-protected memory, then you can recover from that.
Quote
To satisfy their desires, it is necessary to write the program without
undefined behavior in the first place.
But any program that is really complex inevitably does stuff you do not expect.
No one can write bug-free code in any app of large size.
Quote
Don't try to handle the
problem after it occurrs, because that cannot be done. Instead,
perform some checks BEFORE doing something that is potentially
invalid, and then don't do it if your tests detect that a problem
would occur. Division by zero, for example, is easy to detect before
it occurs, by inspecting the denominator prior to performing the
operation.
But is it worth checking every pointer for not just nullness but general validity?
Quote
You won't like this answer, because it is pedantic, but because the
definition of the language clearly states that there are no
requirements on a program after undefined behavior occurs, there are
no requirements on a program after undefined behavior occurs. That
simply is how C++ works.
But implementations do not leave as much undefined as the standard does. The argument
that "one should not continue on from a point where the standard leaves the result
undefined" is really what I'm chiefly objecting to.
Though I also have problems with always following the advice "one should not continue
on from a point where the implementation leaves the result undefined". Suppose
someone has a huge design they've built up in some software tool and they haven't
saved it yet. If we just exit they have no chance to save. Now, saving may not work.
But not giving them a chance to save seems unreasonable.
Of course saving in some apps involves running thru a large chunk of an app's code.
If an exception has occurred that leaves something undefined or the exception
suggests that all is not right with some internal data structures, saving might
fail. But why not try?
Also, some apps have huge costs of exiting. Imagine the flight computer of the F-16.
Say it has a hardware exception. Wanna exit the image? You just killed the pilot.
I've worked on programs in aerospace apps where exiting a program was simply not an
option. We tried to make the best guesses we could on what to do even if some things
started going wrong. We wrote all sorts of logic for making those guesses too.
Quote
>Say a dereference of a bad pointer causes an exception.


No, it does not cause an exception. The *only* way for an exception
to be caused is to write C++ code that evaluates a "throw"
expression.
A C++ exception is not the same as an OS exception.
Quote
Once you perform undefined behavior you cannot expect any reasonable
behavior afterwords, unless perhaps, in some environment the compiler
did clearly define what happens. However, the majority of cases do
not have such documentation from compiler vendors, and certainly, an
OS signal is not a C++ exception, and it's wrong to confuse the two.
I'm not confused on this point.
Quote
>Some happen routinely. For example, BCB gives me exceptions
>sometimes when I compile.
>I often have to Build All to get around the exceptions. I do not
>want BCB to automatically exit when it hits an internal exception.


www.datanation.com/fallacies/conseq.htm
Look, the purpose of using software is to make us more productive. I'm more
productive if BCB does not exit on an internal compiler error. Though I do not know
if Borland's IDE is responding to an exception or what. They hide that.
Ultimately the test should be "what will make us most productive", not "what
satisfies purists".
Quote
>Some types of exceptions are used by operating systems routinely as
>essential to how the operating system functions. For example, trying
>to read a memory location that is paged out of virtual memory? The
>exception handler brings the data in from a swap file.


You're confusing "structured exceptions" with C++ exceptions. The two
are completely separate, unrelated ideas.
No, I've been clear on the difference all along.
 

Re:Re: Division by zero and different compilers

Andreas Hausladen wrote:
Quote
Andre Kaufmann wrote:


>it would be a pity if DeXter wouldn't support them natively,
>since Windows permanently writes them and sends them to the
>developer, if it's allowed to.


If an exception happend in Delphi 2005 you get a call stack, windows
version, versions of loaded libraries and a "Send" button you can press.


Huh? I have Delphi 2005 too, but I've never seen the dialog you are
describing.
I just wrote a little test program in D2005 causing an access violation,
but got no Send button. Having a call stack for the exception in the IDE
is basic functionality of a debugger.
Your description matches a commercial component for Delphi; I think it's
called madExcept.
If I start the test program outside the IDE, Windows handles (as usual)
the access violation.
Have I missed something?
A dump file is not only for debugging on your local developer computer; it's for:
- Debugging an access violation that occurred on a client's computer
- Debugging irreproducible states of your program on a client's computer
- Having a snapshot of all objects, stack, variables and threads
of your program running on a client's computer that you cannot upload
a remote debugger to
Delphi 2005 generates *.pdb files, which are necessary to use dump files (if you
don't want to debug through assembly code only), but only for .NET
applications - AFAIK.
Andre