SharedBigValue
When a server process like MDriven Turnkey or the MDrivenServer service holds several ecospaces in the same process, we now (from 2023-04-09) have a mechanism called SharedBigValue.


What this does is:
# If a loaded attribute value is a byte[] or string
# Larger than 8192 bytes
# Has maxint as its version (i.e. it is the latest version)
# Shares the same object id and attribute id
# Shares the typesystem checksum
...if all of the above are true, the cache will actually hold a SharedBigValue.


All public access methods that get to a cache value will screen for a SharedBigValue - and if found - resolve to the real value and return it.
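The screening step can be pictured with a minimal sketch, assuming SharedBigValue is a thin wrapper around the loaded value (the wrapper shape and the `ResolveCacheValue` helper are assumptions for illustration):

```csharp
// Assumed minimal shape of the wrapper held in the cache.
sealed class SharedBigValue
{
    public SharedBigValue(object value) => Value = value;
    public object Value { get; }
}

static class CacheAccess
{
    // Getters unwrap the wrapper, so callers always receive the plain value.
    public static object ResolveCacheValue(object stored) =>
        stored is SharedBigValue shared ? shared.Value : stored;
}
```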


Only when objects are loaded from PS and hit the ApplyDataBlock method do we consider creating or looking up a SharedBigValue.


We do this by keeping a static dictionary on the Cache that maps key to SharedBigValue.


If the key already exists, we return the existing SharedBigValue - otherwise, we create a SharedBigValue, store it in the dictionary, and return it.


Reading is protected by a ReadLock that can be upgraded to a WriteLock if we need to create a new SharedBigValue.
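The get-or-create lookup under an upgradeable read lock could look roughly like this. It is a sketch under stated assumptions: `BigValueCache`, `GetOrCreate` and the string key are invented names, and the key would in practice combine the object id, attribute id and typesystem checksum described above.

```csharp
using System.Collections.Generic;
using System.Threading;

// Assumed minimal shape of the wrapper held in the cache.
sealed class SharedBigValue
{
    public SharedBigValue(object value) => Value = value;
    public object Value { get; }
}

static class BigValueCache
{
    // Static dictionary shared by all ecospaces in the process.
    static readonly Dictionary<string, SharedBigValue> Shared = new();
    static readonly ReaderWriterLockSlim Lock = new();

    public static SharedBigValue GetOrCreate(string key, object loadedValue)
    {
        Lock.EnterUpgradeableReadLock();
        try
        {
            if (Shared.TryGetValue(key, out var existing))
                return existing;            // common case: value is already shared

            Lock.EnterWriteLock();          // upgrade only when we must create
            try
            {
                var created = new SharedBigValue(loadedValue);
                Shared[key] = created;
                return created;
            }
            finally { Lock.ExitWriteLock(); }
        }
        finally { Lock.ExitUpgradeableReadLock(); }
    }
}
```

With this shape, two loads of the same big value resolve to one shared instance instead of two copies.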


<nowiki>---------</nowiki>


Limitations that I consider to be ok until reality proves otherwise:
# It is only the DB-loaded (old) value that is the target for SharedBigValue - writes/updates of large blocks are handled as before, and we do not try to share those.
# We do not actively destroy SharedBigValues if a new model is uploaded - changing the checksum and forcing all existing ecospaces to be recreated - as this is considered an uncommon production scenario.
<nowiki>----</nowiki>


Ways to test: use a model with an Image and a Text attribute, run Turnkey with two different users (or two different browsers), update the large text and image in one, and make sure they update in the other.


<nowiki>----</nowiki>


Expected positive effect: only one instance of a large value is held in memory even if 1000 users look at the same thing.


Expected negative effect: additional overhead for large texts and byte arrays, but it is kept low by the checks above - I do not expect it to be noticeable.


Currently, this feature is always on; you can stop it from having an effect by setting:
  FetchedBlockHandler.kBigValueThreshold = int.MaxValue;

Revision as of 06:16, 12 April 2023
