SharedBigValue
=== 2023-04-09 ===
When a server process like MDriven Turnkey or MDrivenServer service holds several ecospaces in the same process, we now have a mechanism called SharedBigValue.


What this does is:
# If a loaded attribute value is a byte[] or string  
# Larger than 8192 bytes  
# Has maxint as its version (i.e., it is the latest version)
# Shares the same object ID and attribute ID
# Shares typesystem checksum
...if the above is true, the cache will hold a SharedBigValue.
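The qualification rules above can be sketched roughly like this. This is a minimal Python sketch, not the actual MDriven implementation (which is C#/.NET), and every name in it - <code>qualifies_for_sharing</code>, <code>BIG_VALUE_THRESHOLD</code>, <code>LATEST_VERSION</code> - is hypothetical:

```python
# Illustrative sketch of the sharing rules; all names are made up, not from MDriven source.
BIG_VALUE_THRESHOLD = 8192   # mirrors the 8192-byte rule above
LATEST_VERSION = 2**31 - 1   # "maxint" version marks the latest loaded value

def qualifies_for_sharing(value, version):
    """True when a freshly loaded attribute value may become a SharedBigValue."""
    if not isinstance(value, (bytes, str)):   # rule 1: byte[] or string only
        return False
    if len(value) <= BIG_VALUE_THRESHOLD:     # rule 2: must be larger than 8192
        return False
    return version == LATEST_VERSION          # rule 3: latest version only
```

Rules 4 and 5 (same object ID, attribute ID, and typesystem checksum) are not a predicate on the value itself - they make up the lookup key in the static dictionary described below on this page.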
* All public access methods to get to a cache value will screen for a SharedBigValue - and if found - resolve to a real value and return this.
Only when objects are loaded from PS and hit the ApplyDataBlock method do we consider creating or looking up SharedBigValue.
* We do this by keeping a static dictionary on the Cache that maps key → SharedBigValue.
* If the key already exists, we return the existing SharedBigValue - otherwise, we create a SharedBigValue and return it (and store it in the dictionary).
Reading is protected by a ReadLock that can be upgraded to WriteLock if we need to create.
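A minimal sketch of the get-or-create step, assuming the key is built from object ID, attribute ID, and typesystem checksum. Python's standard library has no upgradable read/write lock, so an optimistic unlocked read followed by a double-checked locked create stands in for the ReadLock-to-WriteLock upgrade described above; all names here are hypothetical:

```python
import threading

class SharedBigValue:
    """Wrapper holding one shared copy of a large loaded value (illustrative)."""
    def __init__(self, value):
        self.value = value

# Static dictionary keyed on (object id, attribute id, typesystem checksum).
_shared_values = {}
_lock = threading.Lock()  # stand-in for the upgradable ReadLock/WriteLock

def get_or_create_shared(object_id, attribute_id, checksum, value):
    key = (object_id, attribute_id, checksum)
    existing = _shared_values.get(key)      # optimistic read, no exclusive lock
    if existing is not None:
        return existing
    with _lock:                             # "upgrade" to exclusive access to create
        existing = _shared_values.get(key)  # re-check: another thread may have created it
        if existing is None:
            existing = _shared_values[key] = SharedBigValue(value)
        return existing
```

Because every ecospace that loads the same row resolves to the identical <code>SharedBigValue</code> instance, only one copy of the large value lives in memory regardless of how many readers there are.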


==== Limitations I consider okay until reality proves otherwise: ====
# Only DB-loaded (old) values are targets for SharedBigValue - thus, writes/updates of large blocks are handled as before, and we do not try to share them.
# We do not actively destroy SharedBigValues if a new model is uploaded - changing the checksum and forcing all existing ecospaces to be recreated. This is considered an uncommon production scenario.
'''Ways to test:''' Model with Image and Text, run Turnkey with two different users or two different browsers, and update large text and image in one - make sure it updates in the other.


''Expected positive effect:'' Only one instance of large things is held in memory even if 1000 users look at this same thing.


''Expected negative effect:'' Additional overhead for large texts and byte arrays but kept low by checks above - I do not expect it to be noticeable.


Currently, this feature is always on. You can stop it from having an effect by setting:  
 
 FetchedBlockHandler.kBigValueThreshold = int.MaxValue;
{{Edited|July|12|2024}}
 
[[Category:MDriven Turnkey]]
[[Category:MDriven Server]]
 
