Board index » delphi » Re: Memory Leak?

Re: Memory Leak?


2006-12-13 01:36:18 PM
delphi278
CommitRetaining by definition keeps the transaction active. To close the
transaction you must commit or rollback.
Bob writes:
Quote
Quinn Wildman (CodeGear Developer Support) writes:

>CommitRetaining is not a good choice. You probably want commit
>instead. By design, commitretaining will continue to use more memory
>until the transaction is committed or rolled back.

Should a CommitRetaining ultimately be closed up with a hard COMMIT?
Or can you consider things done after the CommitRetaining?

 
 

Re: Memory Leak?

Quinn Wildman (CodeGear Developer Support) writes:
Quote
CommitRetaining by definition keeps the transaction active.
Since IB 7.0, doing a COMMIT RETAINING in a READ COMMITTED transaction
will start a new transaction while preserving the context of the old
one. (Since the context of the old one was READ COMMITTED and you just
committed your changes, the practical effect of this is simply that the
server has a new transaction but to the client it looks like the old
one is still running.)
Without READ COMMITTED it is as Quinn describes.
With all that said, however, I think the "leak" is on the client side
in this case.
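For reference, switching a client transaction to READ COMMITTED is just a matter of the transaction parameters. A rough sketch, assuming an IBX TIBTransaction named IBTransaction1 (the component name is not from the posts above):

  // Sketch only; IBTransaction1 is an assumed IBX component.
  IBTransaction1.Params.Clear;
  IBTransaction1.Params.Add('read_committed');
  IBTransaction1.Params.Add('rec_version');
  IBTransaction1.Params.Add('nowait');
  IBTransaction1.StartTransaction;
  // ... do work ...
  IBTransaction1.CommitRetaining; // on IB 7+ the server quietly starts a new
                                  // transaction while the client keeps its context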
--
Craig Stuntz [TeamB] · Vertex Systems Corp. · Columbus, OH
Delphi/InterBase Weblog : blogs.teamb.com/craigstuntz
Everything You Need to Know About InterBase Character Sets:
blogs.teamb.com/craigstuntz/articles/403.aspx
 

Re: Memory Leak?

Paul writes:
Quote
So, what's the difference between CommitRetaining and doing nothing?
The difference is that when you call CommitRetaining the changes made
up to that point are committed and cannot be rolled back or lost. The
transaction continues.
CommitRetaining was added to provide a way to commit a transaction
without closing all of the datasets that were opened within the
transaction context. If faced with this problem today no one would even
consider adding CommitRetaining to InterBase because the right way to
solve this problem is to use ClientDataSets for editing.
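To illustrate, a rough sketch with IBX-style components (IBTransaction1 and IBQuery1 are assumed names, not taken from the posts above): CommitRetaining makes the work so far durable, but the transaction itself only ends with a hard Commit or Rollback.

  // Sketch only; IBTransaction1 / IBQuery1 are assumed IBX components.
  IBTransaction1.StartTransaction;
  try
    IBQuery1.ExecSQL;                // first batch of changes
    IBTransaction1.CommitRetaining;  // changes are durable, open datasets stay open,
                                     // but the transaction context keeps running
    IBQuery1.ExecSQL;                // more work in the same context
    IBTransaction1.Commit;           // the hard commit finally closes the transaction
  except
    IBTransaction1.Rollback;         // only undoes work since the last CommitRetaining
    raise;
  end;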
--
Bill Todd (TeamB)
 

Re: Memory Leak?

"Quinn Wildman (CodeGear Developer Support)"
<XXXX@XXXXX.COM>writes:
Quote
CommitRetaining is not a good choice. You probably want commit instead.
By design, commitretaining will continue to use more memory until the
transaction is committed or rolled back.
So, what's the difference between CommitRetaining and doing nothing?
Paul...
--
plinehan __at__ yahoo __dot__ __com__
XP Pro, SP 2,
Oracle, 9.2.0.1.0 (Enterprise Ed.)
Interbase 6.0.1.0;
When asking database related questions, please give other posters
some clues, like operating system, version of db being used and DDL.
The exact text and/or number of error messages is useful (!= "it didn't work!").
Thanks.
Furthermore, as a courtesy to those who spend
time analysing and attempting to help, please
do not top post.
 

Re: Memory Leak?

I have found this code and was wondering if there is a memory leak
associated with szErrText?
It seems that memory is allocated for it every time the function is called,
but it is only freed in the default case.
void __fastcall GetErrorString(int nError, char *&szErrText)
{
    szErrText = (char*)malloc(255);
    switch(nError)
    {
        case LINEERR_ALLOCATED:
            strcpy(szErrText, "LINEERR_ALLOCATED");
            break;
        case LINEERR_BADDEVICEID:
            strcpy(szErrText, "Line Error - Bad Device ID");
            break;
        // ...
        default:
            free(szErrText);
            DWORD dwReturn = FormatMessage(
                FORMAT_MESSAGE_ALLOCATE_BUFFER      // source and processing options
                    | FORMAT_MESSAGE_FROM_SYSTEM
                    | FORMAT_MESSAGE_IGNORE_INSERTS,
                NULL,               // pointer to message source
                nError,             // requested message identifier
                0,                  // language identifier for requested message
                (LPTSTR)&szErrText, // pointer to message buffer
                0,                  // minimum size of message buffer
                NULL);              // address of array of message inserts
    } // end of switch
}
Roger
 

Re: Memory Leak?

It is not a memory leak. The problem is more insidious.
The calling argument of the function is a reference to a char*. Presumably
the variable is to contain the memory which is allocated to hold the error
message.
If the error matches that of one of the case statements then the function
returns with szErrText containing a pointer to a 255 character block
allocated by a call to the C++ library function 'malloc' or NULL if the call
to 'malloc' failed.
If the error does not match, then the default statement is executed, the
memory allocated with 'malloc' is freed and 'FormatMessage' calls
'LocalAlloc' to allocate an array of characters for the error message.
Because a reference to char* is the calling argument, it effectively serves as
the return value of the function, and the calling code is expected to free the
allocated memory. However, this causes a problem. Memory allocated with
'malloc' must be freed with a call to 'free'. Memory allocated with
'LocalAlloc' is not allocated via the C++ library and should be freed with a
call to 'LocalFree' and not by a call to 'free'. Nothing in what the
function does provides an indication to the caller of what must be used to
free the allocated memory.
. Ed
Quote
Roger writes
news:XXXX@XXXXX.COM...
I have found this code and was wondering if there is a memory leak
associated with szErrText?

It seems to be allocated memory every time it is called but is only freed
in the default case.


 

Re: Memory Leak?

Thanks, Ed
 

Re: Memory Leak?

"Roger" <XXXX@XXXXX.COM>writes
Quote
I have found this code and was wondering if there is a
memory leak associated with szErrText?
That depends on how the memory block is being used after
GetErrorString() has exited. The memory block is being passed back to
the caller, so it is the caller's responsibility to free it when it is
done using it.
Quote
It seems to be allocated memory every time it is called but is only
freed in the default case.
When nError is set to an unknown number, GetErrorString() frees the
memory block it allocated and then has FormatMessage() allocate its own
memory block instead. That in itself is another bug, because now there
are two different memory managers involved, and the caller has no way to
know which one was actually used to allocate the memory block being
returned. So there is a 50/50 risk that the caller cannot free the
memory correctly.
There are three ways to fix this:
1) change GetErrorString() to use LocalAlloc() instead of malloc().
FormatMessage() uses LocalAlloc() internally. This way, the caller
can use LocalFree() regardless of which error number is processed:
void __fastcall GetErrorString(int nError, char* &szErrText)
{
    szErrText = (char*) LocalAlloc(LPTR, 256);
    if( !szErrText ) return;
    switch( nError )
    {
        case LINEERR_ALLOCATED:
            strncpy(szErrText, "LINEERR_ALLOCATED", 255);
            break;
        case LINEERR_BADDEVICEID:
            strncpy(szErrText, "Line Error - Bad Device ID", 255);
            break;
        //...
        default:
            FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM |
                FORMAT_MESSAGE_IGNORE_INSERTS, NULL, nError, 0, szErrText, 255, NULL);
            break;
    }
}

{
    char *lpErrMsg;
    GetErrorString(SomeErrorCode, lpErrMsg);
    //...
    LocalFree(lpErrMsg);
}
2) have GetErrorString() continue to use malloc(), but do not free it
when calling FormatMessage(). Simply have FormatMessage() fill in the
existing buffer instead of allocating a new memory block. Then the caller
can use free() regardless of the error code, ie:
void __fastcall GetErrorString(int nError, char* &szErrText)
{
    szErrText = (char*) malloc(256);
    if( !szErrText ) return;
    memset(szErrText, 0, 256);
    switch( nError )
    {
        case LINEERR_ALLOCATED:
            strncpy(szErrText, "LINEERR_ALLOCATED", 255);
            break;
        case LINEERR_BADDEVICEID:
            strncpy(szErrText, "Line Error - Bad Device ID", 255);
            break;
        //...
        default:
            FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM |
                FORMAT_MESSAGE_IGNORE_INSERTS, NULL, nError, 0, szErrText, 255, NULL);
            break;
    }
}

{
    char *lpErrMsg;
    GetErrorString(SomeErrorCode, lpErrMsg);
    //...
    free(lpErrMsg);
}
3) have the caller allocate its own memory buffer any way it wishes,
and then have GetErrorString() simply fill it in. That way, the
caller knows exactly how the memory was allocated and can free it
accordingly, ie:
void __fastcall GetErrorString(int nError, char *szErrText, int iMaxLen)
{
    memset(szErrText, 0, iMaxLen);
    switch( nError )
    {
        case LINEERR_ALLOCATED:
            strncpy(szErrText, "LINEERR_ALLOCATED", iMaxLen - 1); // -1 keeps the string null-terminated
            break;
        case LINEERR_BADDEVICEID:
            strncpy(szErrText, "Line Error - Bad Device ID", iMaxLen - 1);
            break;
        //...
        default:
            FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM |
                FORMAT_MESSAGE_IGNORE_INSERTS, NULL, nError, 0, szErrText, iMaxLen, NULL);
            break;
    }
}

{
    char szErrMsg[256];
    GetErrorString(SomeErrorCode, szErrMsg, 256);
    //...
}

{
    char *szErrMsg = new char[256];
    GetErrorString(SomeErrorCode, szErrMsg, 256);
    //...
    delete[] szErrMsg;
}
Now, with that said - keep in mind that FormatMessage() is
Unicode-enabled when the UNICODE condition is set for the project.
You will notice in the examples above that I am calling
FormatMessageA() directly instead of FormatMessage() generically.
Since GetErrorString() is forcing the caller to always use a char*
string, it should force FormatMessage() to do the same, or else the
code will fail to compile when UNICODE is enabled, as FormatMessage()
would be expecting a wchar_t* instead of a char*. If you want to
allow Unicode, though, then GetErrorString() should accept an LPTSTR
parameter instead of a char* parameter, ie:
#ifdef UNICODE
    #define strncpy_t wcsncpy
#else
    #define strncpy_t strncpy
#endif

void __fastcall GetErrorString(int nError, LPTSTR szErrText, int iMaxLen)
{
    memset(szErrText, 0, sizeof(TCHAR) * iMaxLen);
    switch( nError )
    {
        case LINEERR_ALLOCATED:
            strncpy_t(szErrText, TEXT("LINEERR_ALLOCATED"), iMaxLen - 1);
            break;
        case LINEERR_BADDEVICEID:
            strncpy_t(szErrText, TEXT("Line Error - Bad Device ID"), iMaxLen - 1);
            break;
        //...
        default:
            FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM |
                FORMAT_MESSAGE_IGNORE_INSERTS, NULL, nError, 0, szErrText, iMaxLen, NULL);
            break;
    }
}

{
    TCHAR szErrMsg[256];
    GetErrorString(SomeErrorCode, szErrMsg, 256);
    //...
}

{
    LPTSTR szErrMsg = new TCHAR[256];
    GetErrorString(SomeErrorCode, szErrMsg, 256);
    //...
    delete[] szErrMsg;
}
Gambit
 

Re: Memory Leak?

Wow! Remy, thanks for the comprehensive answer.
Roger
 

Re: Memory Leak?

"Alien" <XXXX@XXXXX.COM>writes
Quote
Using Delphi7

I am using FastMM4 and it shows that I am getting a memory leak with the
code below.
What can I do to fix it, with your help that is?

procedure TProject1.GetMyPostedMid;
var
  P, Q, Temp: PChar;
  XFace: array[0..1024] of Char;
begin
  P := ReadWrite.GetText; // ReadWrite is a TStringList
  Q := StrNew(P);
  begin
    P := StrPos(Q, 'Message-ID: ');
    if P <> nil then
      P := P + 12;
    Temp := StrPos(P, #13#10);
    Temp^ := #0;
    StrLCopy(XFace, P, SizeOf(XFace) - 1);
    Memo1.Lines.Add(Trim(XFace)); // adds the Message-ID: <blblbl@watever> to Memo1
  end;
end;
--
-A-l-i-e-n-
Q := StrNew(P);
This line allocates a new block of memory holding a copy of the string P
points to, and you need to free it with StrDispose when you are done with it.
The buffer returned by ReadWrite.GetText must also be freed by the caller,
but the code overwrites P with the result of StrPos, so that block is leaked
as well.
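A minimal sketch of one way the procedure could be reworked so that both allocations are released (illustration only; it assumes the same ReadWrite string list and Memo1 memo as in the post above):

procedure TProject1.GetMyPostedMid;
var
  Buf, Q, P, Temp: PChar;
  XFace: array[0..1024] of Char;
begin
  Buf := ReadWrite.GetText;  // allocated by GetText, caller must free it
  Q := StrNew(Buf);          // heap copy, must be freed with StrDispose
  try
    P := StrPos(Q, 'Message-ID: ');
    if P <> nil then
    begin
      Inc(P, 12);                 // skip past 'Message-ID: '
      Temp := StrPos(P, #13#10);
      if Temp <> nil then
        Temp^ := #0;              // cut the string at the end of the header line
      StrLCopy(XFace, P, SizeOf(XFace) - 1);
      Memo1.Lines.Add(Trim(XFace));
    end;
  finally
    StrDispose(Q);   // free the StrNew copy
    StrDispose(Buf); // free the buffer returned by GetText
  end;
end;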