What is a Scalable Business Logic Platform? Business logic in any form (e.g., process flows, business rules, operational decisions) is the core IP of your business, and it should be managed and executed independently from your application. But how can you build a scalable business logic platform that lets you scale up and down as required? It can run anywhere, on-premises or in the cloud (e.g., AWS, Microsoft Azure).
The model to deploy can be any business logic, not only business rules. In this example, we use our sample decision:
Let’s say we want to deploy the above Decision Requirements Diagram (DRD) and expose it as a service with a REST API endpoint (i.e., Decision as a Service).
You may ask, why a REST API? The benefit of a REST API for business logic is that any technology or device can easily communicate with it.
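To make that concrete, here is a minimal sketch of what calling a decision over REST looks like from a client. The endpoint URL, the decision name, and the input/output payload shapes are all assumptions for illustration; the actual contract depends on how your platform exposes the service.

```python
import json
import urllib.request

# Hypothetical endpoint for a deployed decision; the real URL and
# payload shape depend on your platform's API.
ENDPOINT = "http://localhost:9000/api/execute/loan-eligibility"


def build_decision_request(inputs: dict) -> urllib.request.Request:
    """Package the decision inputs as a JSON POST request."""
    body = json.dumps(inputs).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def parse_decision_response(raw: bytes) -> dict:
    """Decode the JSON body returned by the decision service."""
    return json.loads(raw.decode("utf-8"))


req = build_decision_request({"age": 35, "income": 72000})
# Actually sending the request would look like:
#   with urllib.request.urlopen(req) as resp:
#       result = parse_decision_response(resp.read())
```

Because the interface is plain HTTP and JSON, the same call can be made from JavaScript, Java, .NET, a mobile app, or a curl one-liner; no client SDK is required.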
FlexRule Server allows you to deploy your models in a matter of a few clicks, with no script or code, and offers many deployment model options. FlexRule Server uses a Master-Agent architecture: the Master is responsible for management and for dispatching requests to Agents, and the Agents are responsible for executing logic. This separation allows you to scale easily.
Let’s talk about deployment models, which define how scalable you can be. I won’t go through all the options, but to illustrate the concept, we discuss two of them here:
Single Master Node

This strategy fits well when you are small and are not worried about scalability. You set up a single server with which your application communicates.
As you can see below, in the agent section there is only one node:
Clients send a request to the Master node, which executes the logic and returns the result. A scalable business logic platform should let you change your deployment strategy as your needs change.
Master with Distributed Agents
If you want to scale, you need the option to add more processing power and let the server handle more requests. In that case, you can set up (or upgrade to) a Master-Agent deployment strategy.
In this model, as demand grows, you spin up more Agents and let them process client requests. When you need more processing power, simply install more Agents:
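The division of labor can be sketched as follows. This is a conceptual illustration only, with round-robin chosen as a simple dispatch policy; FlexRule Server's internal scheduling is not documented here, and the class and method names are made up for the sketch.

```python
import itertools


class Agent:
    """An Agent executes the deployed business logic for one request."""

    def __init__(self, name):
        self.name = name

    def execute(self, request):
        # A real Agent would run the deployed decision model here;
        # we just record which Agent handled the request.
        return {"handled_by": self.name, "input": request}


class Master:
    """The Master's dispatch role: keep a registry of Agents and
    spread incoming client requests across them round-robin."""

    def __init__(self, agents):
        self._pool = itertools.cycle(agents)

    def dispatch(self, request):
        # Pick the next Agent in rotation and hand the request over.
        return next(self._pool).execute(request)


master = Master([Agent("Agent1"), Agent("Agent2")])
results = [master.dispatch({"id": i}) for i in range(4)]
# Requests alternate between Agent1 and Agent2.
```

Adding capacity is then just a matter of registering more Agents with the Master; clients keep calling the same address and never notice the pool behind it.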
In this example, we have added a new Agent (e.g., Agent1), which sits on another node in the network, waiting to receive and process requests.
Although in this example we set up the Master to execute logic as well, it is better to let the Master handle only the management and distribution of requests. The Master then has more resources free and can dispatch requests to Agents more quickly.
All of this must be transparent to both clients and your application code. The client communicates with the Master’s public API address, and the platform takes care of the rest. Your platform should also allow you to take advantage of the cloud (e.g., Azure, AWS). For instance, you can deploy to Azure and use Virtual Machine Scale Sets to scale the Agents.
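As a sketch of that Azure scenario: once the Agents run inside a Virtual Machine Scale Set, changing capacity is a single CLI call. The resource group and scale set names below are hypothetical placeholders.

```shell
# Grow the Agent pool from its current size to 5 VM instances.
# "agent-scaleset" and "my-resource-group" are illustrative names.
az vmss scale \
  --resource-group my-resource-group \
  --name agent-scaleset \
  --new-capacity 5
```

The same scale set can also be wired to autoscale rules (CPU load, queue depth, schedules), so the Agent pool grows and shrinks without anyone running commands by hand.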
Whatever you do, your business logic is now under your control and scalable. Deploy your logic as a service and share it across all your applications and processes, whatever technology stack you’ve got.