Dictionary sticking

I am saving a master dictionary of dictionaries to TinyDB.
After uploading to an API, I extract a copy of a dictionary from the master dictionary and make changes, then put it back in the master dictionary and put that back in the TinyDB. The next time I access the dictionary, I get the old version.
The second time I make the changes and save it, it works.
Any ideas?
My code is quite large and I don't think sharing an .aia would help (4k blocks).

NOTE:
PCS stands for "plant_check_sheet" and is a dictionary.
PCSMasterDict is a dictionary containing the PCS dictionaries.

The sequence of activity is as follows (a Python sketch of the same flow appears after the list):

make original record dictionary (PCS)
add PCS to PCSMasterDict
add PCSMasterDict to TinyDB['PCS']
iterate PCSMasterDict for each PCS
if PCS["uploaded"] = false
send PCS dict to API (only a reduced version of the PCS is sent)
get back response from POST with PK (Version A - this contains pk)
get copy of original PCS from PCSMasterDict (Version B - no pk, and it shouldn't have one)
add "pk" and "uploaded=true" to copy of PCS (Version C - B with "pk=PK" and "uploaded=true")
save updated copy of PCS to PCSMasterDict (Version D - same as C)
save PCSMasterDict to TinyDB['PCS'] (Version E - same as C)
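For reference, here is a minimal Python sketch of that intended flow. The real app is App Inventor blocks; `tiny_db` and `post_to_api` are hypothetical stand-ins for the TinyDB component and the Web POST.

```python
# Minimal Python sketch of the intended flow; tiny_db stands in for the
# TinyDB component and post_to_api for the Web POST (both hypothetical).
tiny_db = {}

def upload_pending(pcs_master_dict, post_to_api):
    for key, pcs in pcs_master_dict.items():
        if not pcs.get("uploaded", False):
            pk = post_to_api(pcs)             # Version A: response carries pk
            pcs_copy = dict(pcs)              # Version B: copy of original PCS, no pk
            pcs_copy["pk"] = pk               # Version C: add pk...
            pcs_copy["uploaded"] = True       # ...and mark as uploaded
            pcs_master_dict[key] = pcs_copy   # Version D: copy back into master dict
            tiny_db["PCS"] = pcs_master_dict  # Version E: persist master dict
```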

At the next iteration over PCSMasterDict, the value seen for that key is the old version (B) instead of the expected version C with the primary key.

Can anyone see what I am doing wrong?

Large dictionary? There might not be sufficient time for a previous dictionary call to finish prior to your modification calls. Using a Clock to supply several hundred ms of delay might work.

The Clock technique works for me when using a dictionary to parse an online JSON API file; something similar might work for you. "The second time I make the changes and save it, it works"... yes, I had a similar issue before the delay was supplied.
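As a rough illustration of the idea, here is a Python analogue (in the app this would be a Clock component with a Timer event; the names below are hypothetical):

```python
import threading

# Hypothetical Python analogue of the Clock suggestion: delay the
# TinyDB save by a few hundred ms so any in-flight dictionary work
# can finish first. (In App Inventor this would be a Clock.Timer event.)
def save_later(tiny_db, pcs_master_dict, delay_s=0.3):
    def do_save():
        tiny_db["PCS"] = pcs_master_dict

    threading.Timer(delay_s, do_save).start()
```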


The dictionaries are quite small.
When I access the temporary dict (GotOrigPCSDict), it contains the pk value, but the master dictionary where I tried to save the temp dict doesn't have the pk, even a few minutes later.
It is as if the next read comes from an outdated cache of the master dictionary.

If volume is a problem, you might consider storing your data in separate TinyDB NameSpaces (XML files), with the Master containing the NameSpace names.

I assume unpacking and repacking a small TinyDB would be sufficiently faster than unpacking and repacking a Master TinyDB with all your data to pay for the file open/close of the new NameSpace.
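A sketch of what that split might look like, with plain dicts standing in for separate TinyDB NameSpaces (all names here are hypothetical):

```python
# Hypothetical sketch of the NameSpace split, with plain dicts standing
# in for separate TinyDB NameSpaces. The master store holds only the
# list of NameSpace names, so each individual store stays small.
stores = {"Master": {}, "PCS_pending": {}, "PCS_uploaded": {}}
stores["Master"]["namespaces"] = ["PCS_pending", "PCS_uploaded"]

def save_pcs(pcs_key, pcs):
    ns = "PCS_uploaded" if pcs.get("uploaded") else "PCS_pending"
    stores[ns][pcs_key] = pcs
```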


I was wondering about this; I didn't know if multiple TinyDBs would interfere with each other.
I might try moving uploaded dicts to a different store.
It is just very strange behaviour to write a new dict and read its old version.

Your .aia export file is our only reliable guide to your data flow.

I think Steve is on to something with the blocking call: I can't write while it is reading the dictionary, so writing output to a different dictionary could remove the problem. Honestly, the .aia is so full of code it would take a novel to explain the flow, and I am only starting the novel (mind map in FreeMind).

As an alternative, you could try ditching the dictionaries and going straight to fine-grained TinyDB compound tags/values, like in this sample project:

Your TinyDB tags would be text JOINs of major to minor dictionary keys, separated by slashes (/), with attribute names interspersed as needed, giving you ultimate fine granularity.
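A minimal sketch of the compound-tag idea, with a dict standing in for TinyDB (the tag layout is just an illustration):

```python
# Minimal sketch of compound tags, with a dict standing in for TinyDB.
# Each attribute gets its own slash-joined tag instead of living inside
# a nested dictionary.
tiny_db = {}

def set_attr(pcs_key, attribute, value):
    tiny_db["PCS/" + pcs_key + "/" + attribute] = value

def get_attr(pcs_key, attribute, default=None):
    return tiny_db.get("PCS/" + pcs_key + "/" + attribute, default)

set_attr("sheet42", "pk", 1001)        # tag: "PCS/sheet42/pk"
set_attr("sheet42", "uploaded", True)  # tag: "PCS/sheet42/uploaded"
```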

That is an interesting method, thanks for that ABG.
Thanks also to SteveJG for your input.

Here is an extra thought for handling multiple API calls:

Use separate Web components for different API calls, to take advantage of simpler targeted response events.

Name the Web components by function (INSERT/SELECT/DELETE) and datum type for clarity.
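For illustration, the same idea in Python terms: one dedicated handler per call, instead of one shared response event full of branching (names are hypothetical):

```python
# Hypothetical sketch: one dedicated response handler per API call,
# mirroring one Web component per function in App Inventor.
def InsertPCS_GotText(response_content):
    # Only ever handles the INSERT response, e.g. extracting the new pk.
    pass

def SelectPCS_GotText(response_content):
    # Only ever handles the SELECT response.
    pass
```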

The dictionary blocks are atomic: each one will finish before the next one starts, so that is unlikely to be the issue. Is there a particular reason you're copying the dictionary only to modify the copy and write it back to the original dictionary, rather than updating in place?
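In Python terms, the in-place alternative would look something like this (names are hypothetical):

```python
# Hypothetical sketch of updating in place: modify the nested PCS
# directly instead of copy -> modify -> write back.
def mark_uploaded_in_place(pcs_master_dict, key, pk):
    pcs_master_dict[key]["pk"] = pk
    pcs_master_dict[key]["uploaded"] = True
```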

I suppose the reason I am making the copy, editing the copy and replacing the original subdict was to do with transactions. I didn't want a possible case where one value is updated in the dictionary, then the app closes before the other value is changed, leaving the dictionary corrupted.

At the moment, after I insert the new modified copy, when I go to retrieve the copy later (about 10 seconds), it gives me the old version
(always a failure on the first loop, leading to a second loop).
Then the process is repeated: API call, receive response, copy original dict, modify, save back
(=> success on the second loop; a second primary key is stored, and the database has two entries for each item).
It feels like a cache is somehow involved, with a repeated set of different outcomes for the same sequence.

Anyway, I have avoided the issue by using a completely separate dictionary with the same keys to record which dicts have been uploaded to the API and which have updates that still need to be uploaded.
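A sketch of that workaround in Python (names are hypothetical; in the app the status dictionary would live under its own TinyDB tag):

```python
# Sketch of the workaround: a separate status dictionary sharing the
# same keys, so the PCS data itself is never rewritten mid-loop.
upload_status = {}

def record_upload(pcs_key, pk):
    upload_status[pcs_key] = {"uploaded": True, "pk": pk}

def needs_upload(pcs_key):
    return not upload_status.get(pcs_key, {}).get("uploaded", False)
```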
Thanks again guys.