
Master/Detail problem with CacheUpdates

I have a master/detail form working with CacheUpdates.  When a record is
inserted I take the highest value for the primary key from the table (a
SELECT MAX()+1 query).
When the record is posted I check for the highest value again, and if the
value has changed (if someone has posted a record) I update the field in
the detail records and then update the value in the master record.
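The key fetch described here might be sketched like this (a minimal sketch; the component, table, and field names are assumptions for illustration):

```pascal
// Sketch of the SELECT MAX()+1 key fetch described above.
// qryKey is a TQuery on the same database; ORDERS and ORDER_ID
// are assumed names for the master table and its primary key.
function TfrmMaster.GetNextKey: Integer;
begin
  qryKey.SQL.Text := 'SELECT MAX(ORDER_ID) AS MAXID FROM ORDERS';
  qryKey.Open;
  try
    Result := qryKey.FieldByName('MAXID').AsInteger + 1;
  finally
    qryKey.Close;
  end;
end;
```

Note that nothing here prevents two users from reading the same MAX at the same time, which is exactly the race discussed in the replies below.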

This is not working. When the values in the detail table are updated,
the rows are automatically ejected from the dataset (because of the
master/detail relation between the tables), but when I update the value in
the master table, the details are not automatically reinserted into the
dataset.

How could I handle this situation?
Any help would be really appreciated.
--
Rafael Vargas.-
r...@oscarbbs.com

 

Re:Master/Detail problem with CacheUpdates


Rafael,

Complete the posting for the Master Table including the part where you obtain
a unique primary key. THEN post the records to the detail tables. This can
all be done through one procedure if you make use of the exception handlers.

The part about getting a unique key is the real problem in this scenario.
Some databases are better than others. I use Sybase SQL Anywhere and it has a
built-in capability for this. If you are using xBase, Paradox or even with
some SQL servers you will need to code your own GetKey routine. Make
absolutely sure that the routine is an atomic operation to prevent key
violations. Some people will create a MasterKey table to store the next key
value. Then, when they need a key, they lock the table record, replace it
with the next value, and unlock it, which makes the operation atomic.
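One way to sketch such an atomic GetKey against a one-row MasterKey table is to wrap the read-and-increment in a transaction (component, table, and column names here are assumptions, not part of the original posting):

```pascal
// Sketch of an atomic GetKey against a one-row MASTERKEY table.
// The transaction plus FOR UPDATE row lock keeps the
// read-increment-write from interleaving with another user.
function TfrmMaster.GetKey: Integer;
begin
  Database1.StartTransaction;
  try
    // FOR UPDATE locks the row until commit or rollback.
    qryKey.SQL.Text := 'SELECT NEXTKEY FROM MASTERKEY FOR UPDATE';
    qryKey.Open;
    Result := qryKey.FieldByName('NEXTKEY').AsInteger;
    qryKey.Close;
    qryUpd.SQL.Text := 'UPDATE MASTERKEY SET NEXTKEY = NEXTKEY + 1';
    qryUpd.ExecSQL;
    Database1.Commit;
  except
    Database1.Rollback;
    raise;
  end;
end;
```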

Glenn.

Quote
Rafael Vargas wrote:
> I have a master/detail form working with CacheUpdates.  When a record is
> inserted I take the highest value for the primary key from the table (a
> SELECT MAX()+1 query).
> When the record is posted I check for the highest value again, and if the
> value has changed (if someone has posted a record) I update the field in
> the detail records and then update the value in the master record.

> This is not working. When the values in the detail table are updated,
> the rows are automatically ejected from the dataset (because of the
> master/detail relation between the tables), but when I update the value in
> the master table, the details are not automatically reinserted into the
> dataset.

> How could I handle this situation?
> Any help would be really appreciated.
> --
> Rafael Vargas.-
> r...@oscarbbs.com

Re:Master/Detail problem with CacheUpdates


Rafael,

Please reply only to the newsgroup not to personal E-mail.

Quote
>>>>>> Copied from E-mail message:

I'm using Oracle 7.3.2, and it offers the capability of having a generator
(it works like a serial number).  The main problem is the sequence of the
records: I can't skip any number in the sequence, and because of the
multi-user environment that could happen very easily.

I was studying the MAST App Demo, but it updates the MasterKey table every
time the insert button is pressed, and that action costs as much as a
SELECT MAX from a small table.

I'm still trying to see why, when the changes are posted, if there's a
record with the same primary key and I update the datasets with a new
value, the new values are not taken automatically.
<<<<<<

If you validate your data at the local level then the Generator will only be
incrementing when you are posting valid data and there should not be any
gaps. At least theoretically.
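For reference, fetching a value from an Oracle generator (sequence) of the kind mentioned above might look like this from Delphi (the sequence and component names are assumptions for illustration):

```pascal
// Sketch: reading the next value of an Oracle sequence from Delphi.
// Oracle 7 exposes sequences through SELECT ... FROM DUAL.
// ORDER_SEQ and qrySeq are assumed names.
function TfrmMaster.NextSeqValue: Integer;
begin
  qrySeq.SQL.Text := 'SELECT ORDER_SEQ.NEXTVAL AS NEXTVAL FROM DUAL';
  qrySeq.Open;
  try
    Result := qrySeq.FieldByName('NEXTVAL').AsInteger;
  finally
    qrySeq.Close;
  end;
end;
```

A sequence value, once fetched, is consumed even if the insert is later cancelled, which is the source of the gap problem Rafael describes.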

The example you viewed from the MAST App Demo is the correct way to acquire
a unique PKey from a lookup table. The operation of determining the key and
updating it in the lookup table MUST BE ATOMIC. That is, once the process
starts NOTHING can interrupt it and no other operation can be made on the
lookup table by any other process. Unless this is atomic you cannot be sure
that the new key you select was not also selected by another process. The
time to select the key from a lookup table is really trivial compared to a
SELECT MAX() from a small table. Your Master table is not likely to be small
in any case, so if you try to do a SELECT MAX() on that table it will
definitely be no faster than from the lookup table.

Glenn.

Re:Master/Detail problem with CacheUpdates


Rafael,

I think you are making life too difficult <g>. Why not set up a
separate single-record table to hold the last used key number (or the
next in sequence). When a new master record is inserted, try to open
the key number table with Exclusive set to true, retrieve the number,
increment it, assign it to the new record and update the value in the
key number table.

That way it's impossible for more than one user to open the key number
table at the same time, so you know you have a unique number and you
don't have to check again before posting the new records.
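Stephen's approach might be sketched like this with a BDE TTable opened exclusively (best suited to Paradox/dBase tables; the component, table, and field names are assumptions):

```pascal
// Sketch of the exclusive-open key table: only one user can have
// the table open at a time, so the read-and-increment is safe.
// tblKey points at a single-record table with a LASTKEY field.
function TfrmMaster.GetUniqueKey: Integer;
begin
  tblKey.Exclusive := True;   // open fails if another user has it open
  tblKey.Open;
  try
    tblKey.Edit;
    Result := tblKey.FieldByName('LASTKEY').AsInteger + 1;
    tblKey.FieldByName('LASTKEY').AsInteger := Result;
    tblKey.Post;
  finally
    tblKey.Close;             // releases the exclusive lock
  end;
end;
```

In practice the caller would retry (or report) when the exclusive open fails because another user holds the table.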

--
Stephen Brown

Quote
Rafael Vargas wrote in message <01be010a$539cac70$a123f4cd@rvargas>...
>I have a master/detail form working with CacheUpdates.  When a record is
>inserted I take the highest value for the primary key from the table (a
>SELECT MAX()+1 query).
>When the record is posted I check for the highest value again, and if the
>value has changed (if someone has posted a record) I update the field in
>the detail records and then update the value in the master record.

>This is not working. When the values in the detail table are updated,
>the rows are automatically ejected from the dataset (because of the
>master/detail relation between the tables), but when I update the value in
>the master table, the details are not automatically reinserted into the
>dataset.

Re:Master/Detail problem with CacheUpdates


Glenn, sorry for the mail, it was a mistake.

Look, if I get my PK from the small table and update the table with a new
value, and another user gets the next value (and updates the table, too),
then if I cancel the record the sequence is lost. That's the problem I see
in the MAST App.

By the time I post this message, I have already found a solution. <It was
*my* fault.>
In the BeforePost event of the detail table I was assigning the value of
the master's PK, but when applying updates, if the value from the master
table was inserted and had to be changed, then when updating the values for
the detail table the BeforePost took the old value, and the records were
assigned to the previously inserted record.
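The BeforePost handler described here might look something like this (dataset and field names are assumptions for illustration):

```pascal
// Sketch of the detail table's BeforePost handler: the detail
// record reads the master's PK at the moment of the post, so a
// renumbered master key is not missed.
// tblMaster and ORDER_ID are assumed names.
procedure TfrmMaster.tblDetailBeforePost(DataSet: TDataSet);
begin
  DataSet.FieldByName('ORDER_ID').AsInteger :=
    tblMaster.FieldByName('ORDER_ID').AsInteger;
end;
```

The bug Rafael describes comes from this handler firing while the master still carried its old key, so the fix is to make sure the master key is final before the details are reposted.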

Thanks for replying.
--
Rafael Vargas
DBA,
Bratex Dominicana C. por A.
r...@oscarbbs.com

Glenn Casteran <nis...@att.net> wrote in article
<363656C7.4B97C...@att.net>...

Quote
> Rafael,

> Please reply only to the newsgroup not to personal E-mail.

> >>>>>> Copied from E-mail message:
> I'm using Oracle 7.3.2, and it offers the capability of having a generator
> (it works like a serial number).  The main problem is the sequence of the
> records: I can't skip any number in the sequence, and because of the
> multi-user environment that could happen very easily.

> I was studying the MAST App Demo, but it updates the MasterKey table every
> time the insert button is pressed, and that action costs as much as a
> SELECT MAX from a small table.

> I'm still trying to see why, when the changes are posted, if there's a
> record with the same primary key and I update the datasets with a new
> value, the new values are not taken automatically.
> <<<<<<

> If you validate your data at the local level then the Generator will only
> be incrementing when you are posting valid data and there should not be
> any gaps. At least theoretically.

> The example you viewed from the MAST App Demo is the correct way to
> acquire a unique PKey from a lookup table. The operation of determining
> the key and updating it in the lookup table MUST BE ATOMIC. That is, once
> the process starts NOTHING can interrupt it and no other operation can be
> made on the lookup table by any other process. Unless this is atomic you
> cannot be sure that the new key you select was not also selected by
> another process. The time to select the key from a lookup table is really
> trivial compared to a SELECT MAX() from a small table. Your Master table
> is not likely to be small in any case, so if you try to do a SELECT MAX()
> on that table it will definitely be no faster than from the lookup table.

> Glenn.

Re:Master/Detail problem with CacheUpdates


Quote
Rafael Vargas wrote:
> Look, if I get my PK from the small table and update the table with a new
> value, and another user gets the next value (and updates the table, too),
> then if I cancel the record the sequence is lost. That's the problem I see
> in the MAST App.

I do not see a reliable way around that problem. If you are creating invoices,
then the normal course of action is to void the invoice and show it as VOIDED
whenever reported.

Once you start the posting, you get a master key and post the master record.
From this point forward the posting should complete. If you have detail record
entries then you should fix them and repost with the same master key. In the
event that the posting is killed during the detail fix, then just void the
master entry.

Make sure that the process of getting or changing a master key is atomic. I
repeat this often because it is so frequently missed or misunderstood.

Glenn.

Re:Master/Detail problem with CacheUpdates


I already solved the problem. What I do is take the value from the
SELECT MAX, not the Sequences Table, and do the capture with this number.
Then, when applying updates, I check to see if this value already exists;
if not, the apply succeeds, but if it does I make another SELECT MAX to
take the next available value, lock the table, and make the post.

I've been testing it, and so far it's been working fine.

--
Rafael Vargas
DBA,
Bratex Dominicana C. por A.
r...@oscarbbs.com

Glenn Casteran <nis...@att.net> wrote in article
<363912E2.631F5...@att.net>...

Quote
> Rafael Vargas wrote:

> > Look, if I get my PK from the small table and update the table with a
> > new value, and another user gets the next value (and updates the table,
> > too), then if I cancel the record the sequence is lost. That's the
> > problem I see in the MAST App.

> I do not see a reliable way around that problem. If you are creating
> invoices, then the normal course of action is to void the invoice and
> show it as VOIDED whenever reported.

> Once you start the posting, you get a master key and post the master
> record. From this point forward the posting should complete. If you have
> detail record entries then you should fix them and repost with the same
> master key. In the event that the posting is killed during the detail
> fix, then just void the master entry.

> Make sure that the process of getting or changing a master key is atomic.
> I repeat this often because it is so frequently missed or misunderstood.

> Glenn.

Re:Master/Detail problem with CacheUpdates


It seems like you are doing this the hard way, since you have to potentially
get the key twice.

If you use this method, LOCK THE TABLE before you do the final Key check and
record post. Just because the Key is available when you check it does not mean
it will still be available when you THEN lock the table and post. You really
have no control over the activity that takes place during that timespan. While
that time may seem small and unlikely to cause you a problem, it is still a
problem that you don't want to deal with later.
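The lock-first ordering Glenn insists on might be sketched like this for the Oracle setup mentioned earlier in the thread (Oracle supports an explicit LOCK TABLE statement; component, table, and field names are assumptions):

```pascal
// Sketch of lock-first posting: take the table lock, THEN re-check
// the key and post, so no one can claim the key in between.
// Database1, qryLock, qryChk, qryMaster, ORDERS, ORDER_ID and
// FNewKey are assumed names.
procedure TfrmMaster.PostWithKeyCheck;
begin
  Database1.StartTransaction;
  try
    // Exclusive lock held until commit or rollback.
    qryLock.SQL.Text := 'LOCK TABLE ORDERS IN EXCLUSIVE MODE';
    qryLock.ExecSQL;
    qryChk.SQL.Text :=
      'SELECT COUNT(*) AS N FROM ORDERS WHERE ORDER_ID = :ID';
    qryChk.ParamByName('ID').AsInteger := FNewKey;
    qryChk.Open;
    if qryChk.FieldByName('N').AsInteger > 0 then
      FNewKey := GetNextKey;     // key already taken; fetch a fresh one
    qryChk.Close;
    qryMaster.ApplyUpdates;      // post inside the same transaction
    Database1.Commit;
    qryMaster.CommitUpdates;
  except
    Database1.Rollback;
    raise;
  end;
end;
```

The point is the ordering: checking the key before taking the lock leaves a window in which another user can claim it.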

Glenn.

Quote
Rafael Vargas wrote:
> I already solved the problem. What I do is take the value from the
> SELECT MAX, not the Sequences Table, and do the capture with this number.
> Then, when applying updates, I check to see if this value already exists;
> if not, the apply succeeds, but if it does I make another SELECT MAX to
> take the next available value, lock the table, and make the post.

> I've been testing it, and so far it's been working fine.

> --
> Rafael Vargas
> DBA,
> Bratex Dominicana C. por A.
> r...@oscarbbs.com

Re:Master/Detail problem with CacheUpdates


I sure don't want to deal with problems like this! :)
Thanks for the advice, I'm gonna make some tests right now to see the
results.
Thanks again.
--
Rafael Vargas
DBA,
Bratex Dominicana C. por A.
r...@oscarbbs.com

Glenn Casteran <nis...@att.net> wrote in article
<363D3869.6B3FF...@att.net>...

Quote
> It seems like you are doing this the hard way, since you have to
> potentially get the key twice.

> If you use this method, LOCK THE TABLE before you do the final Key check
> and record post. Just because the Key is available when you check it does
> not mean it will still be available when you THEN lock the table and
> post. You really have no control over the activity that takes place
> during that timespan. While that time may seem small and unlikely to
> cause you a problem, it is still a problem that you don't want to deal
> with later.

> Glenn.

> Rafael Vargas wrote:

> > I already solved the problem. What I do is take the value from the
> > SELECT MAX, not the Sequences Table, and do the capture with this
> > number. Then, when applying updates, I check to see if this value
> > already exists; if not, the apply succeeds, but if it does I make
> > another SELECT MAX to take the next available value, lock the table,
> > and make the post.

> > I've been testing it, and so far it's been working fine.

> > --
> > Rafael Vargas
> > DBA,
> > Bratex Dominicana C. por A.
> > r...@oscarbbs.com
