Scaling MDrivenServer for multiple reasons
As your Turnkey or other MDriven projects grow in popularity, you may want to scale out to multiple servers. Even if the load of a particular server is not the reason, regional distance might be. With distance comes latency, and latency kills the joy of almost any system.


To work around load and latency, we now offer a distribution mechanism in MDrivenServer.


The strategy behind the MDrivenServer distribution is based on the assumption that data reads are much more common than data writes. Luckily, this is the case in all administrative applications I can think of.


Distribution normally poses a couple of problems:
# There is a tradeoff between distribution and consistency – meaning if you have the same data in multiple places that are far apart, you cannot always be sure it is exactly equal at a given point in time.
# A distributed solution is harder to maintain because there are more nodes involved in the solution.
MDrivenServer solves both these problems elegantly.


== The Number One Issue: Consistency ==
The number one issue is a fact of physics; we just need to minimize the suffering it causes. MDrivenServer solves this by appointing only one MDrivenServer as Master. The Master is the only one that actually receives data writes from users. All MDrivenServer Slaves are connected to this Master, and when users of a Slave save data, the MDrivenServer Slave routes that data block to its Master. Once the Master accepts the message, the Slave asks the Master if there are any updates ready for distribution. The Master then answers with the newly committed data blocks and the Slave merges these into its own database.


The whole operation is done in a split second. The Slave continuously polls the Master for changes. Given enough time and clear internet pathways, the Slave will have the same content as the Master.
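
To make the flow concrete, here is a minimal sketch of the Slave's save-and-poll cycle. All type and member names below (IMasterClient, SlaveSyncCycle, and so on) are hypothetical illustrations, not actual MDrivenServer APIs:
 // Illustrative sketch only – all names here are hypothetical, not MDrivenServer APIs.
 using System.Collections.Generic;
 interface IMasterClient
 {
     void ForwardSave(byte[] dataBlock);                              // route a user save to the Master
     IEnumerable<(long Id, byte[] Data)> CommitBlocksAfter(long id);  // poll for news
 }
 class SlaveSyncCycle
 {
     readonly IMasterClient _master;
     long _lastSeenBlockId;  // high-water mark of blocks already merged
     public SlaveSyncCycle(IMasterClient master) => _master = master;
     // A Slave never applies a user save locally: it routes the block to the
     // Master, then fetches anything newly committed and merges it locally.
     public void OnUserSave(byte[] dataBlock)
     {
         _master.ForwardSave(dataBlock);
         foreach (var (id, data) in _master.CommitBlocksAfter(_lastSeenBlockId))
         {
             MergeIntoLocalDatabase(data);  // hypothetical merge step
             _lastSeenBlockId = id;
         }
     }
     void MergeIntoLocalDatabase(byte[] commitBlock) { /* apply to the Slave database */ }
 }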


In the event of network failure such that the Slave cannot reach the Master, users will not be able to save data and the Slave will not get updated data from the Master. This will all rectify itself once the Slave can reach the Master again.


It is always the Slave that initiates the communication. The Master just accepts save requests and delivers committed updates.
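
The Master's role in this pull-based protocol is correspondingly simple. Sketched in the same hypothetical style as above:
 // Illustrative sketch only – hypothetical names, not MDrivenServer APIs.
 using System.Collections.Generic;
 using System.Linq;
 class MasterEndpoint
 {
     readonly List<(long Id, byte[] Data)> _log = new List<(long, byte[])>();  // committed blocks in commit order
     // Accept a save request routed from a Slave and commit it to the Master database.
     public void AcceptSave(byte[] dataBlock)
     {
         CommitToDatabase(dataBlock);  // hypothetical commit step
         _log.Add((_log.Count + 1, dataBlock));
     }
     // Answer a Slave poll with everything committed after its high-water mark.
     public IEnumerable<(long Id, byte[] Data)> CommitBlocksAfter(long id)
     {
         return _log.Where(b => b.Id > id);
     }
     void CommitToDatabase(byte[] dataBlock) { /* apply to the Master database */ }
 }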


MDrivenServers that participate in this cluster need to add an extra table to their database. The table makes sure that the commit-blocks that have been applied to the Master are backed up along with the rest of the data in the database. The transaction updating the business data is also used for the commit-block information. This way the Master is consistent even when restored from a backup.
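
The insert statement quoted at the end of this page reveals the shape of that table. A plausible definition could look like the one below – only the table and column names come from this page; the column types are assumptions:
 -- Plausible shape of the sync table. The table and column names come from the
 -- insert statement shown later on this page; the column types are assumptions.
 create table MDrivenServerSynk (
   Time           datetime,         -- when the commit-block was persisted
   CommitBlock    varbinary(max),   -- the serialized commit-block
   modelid        varchar(64),      -- presumably the model checksum (see below)
   slaveMergeTime datetime          -- when a Slave merged the block
 );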


Since the extra table structure is equal for the Master and Slave(s), it is easy to use a backup of a Master-DB to deploy it as a new Slave.


== The Number Two Issue: Maintenance ==
Your system is constantly evolving. The model is changed on a monthly, weekly, or even daily basis. The data in one given commit-block is created in the context of one model version only. This means that Slaves are not allowed to have a different model from that of the Master. As you may have a Slave in every part of the world, you can imagine the administrative work it takes to keep this cluster configured correctly.


To manage this, we write a checksum of the current model along with each commit-block we persist in the Master. As a Slave receives newly created commit-blocks, it may discover that the model checksum used to create the commit-block differs from the model the Slave currently uses.


If the Slave discovers that it uses the wrong model, it asks the Master for the model it needs, using the model checksum passed along with the commit-block. The Master then returns this model to the Slave and the Slave engages in an automatic evolve of its database. Once the evolve is complete, the commit-block can be safely applied.
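
In sketch form, the Slave-side check could look like this (again with hypothetical names; the real logic lives inside MDrivenServer):
 // Illustrative sketch only – hypothetical names, not MDrivenServer APIs.
 void ApplyIncomingBlock(CommitBlock block)
 {
     // Every commit-block carries the checksum of the model it was created under.
     if (block.ModelChecksum != _currentModelChecksum)
     {
         // Fetch the matching model from the Master by its checksum...
         var model = _master.GetModelByChecksum(block.ModelChecksum);
         // ...and auto-evolve the local database before applying the block.
         EvolveLocalDatabase(model);
         _currentModelChecksum = block.ModelChecksum;
     }
     MergeIntoLocalDatabase(block);
 }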


The described mechanism makes it possible for you to sit back and deploy to one Master node and trust that the Slaves you have distributed all over the world or stacked next to each other to handle the load will be maintained automatically.


<blockquote>'''''<u>Q:</u>'' <u>Can this be used with my own server built with the MDriven Framework? Are there any reusable components that I can use?</u>'''</blockquote>The main thing is the ability to catch the commit-blocks used by the SyncHandler inside the DB transaction that will apply them:
 // Create the sync handler and subscribe to the event that fires when a
 // commit-block has been submitted.
 var res = new Eco.Persistence.SyncHandler();
 res.OnSubmittedCommitBlock += Res_OnSubmittedCommitBlock;
Once you can do that, you can catch the commit-block and save it to the same DB – inside the same transaction:
 // Inside the event handler: the update parameters give access to the database
 // transaction, so the commit-block can be stored alongside the business data.
 var updP = (e.OperationsParams as TUpdateParameters);
 if (updP != null)
 {
     foreach (IDatabase db in updP.Databases)
     {
         var query = db.GetExecQuery();
         try
         {
             query.AssignSqlText("insert into MDrivenServerSynk (Time,CommitBlock,modelid,slaveMergeTime) values (:inserttime,:CommitBlock,:modelid,:slaveMergeTime) ");
             // ...assign the :inserttime, :CommitBlock, :modelid and :slaveMergeTime
             // parameters and execute the query (omitted in the original fragment)...
         }
         finally
         {
             // release the query as appropriate (the original fragment ends here)
         }
     }
 }


Once you can do that, you have your Slaves copy these commit-blocks and apply them to their own DB.


The other thing is to be able to catch all updating calls to the Slave and reroute those to the Master instead. A commit-block will be created and applied by the well-behaving Slave.
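
A sketch of that rerouting, with hypothetical names:
 // Illustrative sketch only – hypothetical names, not MDrivenServer APIs.
 // On a Slave, every updating call is intercepted and forwarded to the Master
 // instead of being executed against the local database.
 public void HandleSaveRequest(byte[] dataBlock)
 {
     if (_isSlave)
     {
         _master.ForwardSave(dataBlock);  // the Master creates and applies the commit-block
         return;
     }
     ApplyLocally(dataBlock);  // running as Master: the normal save path
 }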


The third thing is to have the Slave discover Master model changes and auto-evolve so that the commit-blocks are applied to the correct environment at all times.


'''MDrivenServer does these three things for you.'''


[[Category:MDriven Server]]
[[Category:Background talk]]
