How to load balance Microsoft Dynamics using HAProxy

Microsoft Dynamics is a critical ERP and CRM tool, used every day by businesses all over the world. The application therefore needs to be highly available and scalable. Here's how you can achieve that.

Microsoft Dynamics history time!

Microsoft Dynamics has been around in the ERP and CRM space since the 1990s, and its products passed through several big names (Damgaard Data and Navision, with IBM distributing some of them) before Microsoft acquired Navision in 2002.

It is used across industries such as retail, services, manufacturing, financial services, and the public sector. Since its initial release it has evolved multiple times, with new modules added to the suite of applications, and it has been renamed several times. Today it integrates easily with other Microsoft tools and services that are essential to the business.

Say no more! How can I load balance it?

The short story: Microsoft Dynamics CRM will be deployed in a multi-tenant type of deployment on Windows Server 2016, while the load balancing solution, HAProxy, will be deployed on CentOS 7.

Deploying Microsoft Dynamics on Windows Server

I won't go into the ins and outs of how to install Microsoft Dynamics; I'll leave that to Microsoft's ever-so-extensive documentation on the subject.

HAProxy deployment and configuration

For deployment instructions and configuration details for HAProxy, check out this blog.
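
If you just want the short version, a minimal installation on CentOS 7 looks something like this (a sketch assuming the stock haproxy package from the base repository is recent enough for your needs):

        # Install HAProxy from the CentOS 7 base repository
        yum install -y haproxy

        # Start the service now and enable it at boot
        systemctl enable haproxy
        systemctl start haproxy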

Now that HAProxy has been successfully installed, here is the configuration file /etc/haproxy/haproxy.cfg that will be used to load balance Microsoft Dynamics.

global
        daemon
        log                127.0.0.1 local2
        chroot             /var/lib/haproxy
        pidfile        /var/run/haproxy.pid
        maxconn                        4000
        user                        haproxy
        group                       haproxy
        stats socket /var/lib/haproxy/stats

defaults
        mode                           http
        log                          global
        option                       tcplog
        option                  dontlognull
        retries                           3
        maxconn                       10000
        option                   redispatch
        timeout                  connect 5s
        timeout                   client 8h
        timeout                   server 8h

listen stats
        bind *:8080
        mode http
        option forwardfor
        option httpclose
        stats enable
        stats show-legends
        stats refresh 5s
        stats uri /stats
        stats realm Haproxy\ Statistics
        stats auth loadbalancer:loadbalancer
        stats admin if TRUE

listen DynamicsClusterForceToHTTPS
        # A bind line is needed for this listener to accept plain HTTP;
        # port 80 on the same VIP as the HTTPS listener below is assumed
        bind 192.168.1.100:80
        mode http
        acl force src 192.168.77.10 192.168.77.100
        reqadd X-Forwarded-Proto:\ https if force
        redirect scheme https code 301 if !force
        
listen DynamicsCluster
        bind 192.168.1.100:443
        mode tcp
        option tcplog
        balance leastconn
        stick on src
        stick-table type ip size 10240k expire 8h
        option tcp-check
        tcp-check connect 443
        # "check" is required for the tcp-check health probes above to run
        server win2016-dynamics-rip1 192.168.1.200:443 check inter 10s rise 2 fall 3
        server win2016-dynamics-rip2 192.168.1.201:443 check inter 10s rise 2 fall 3
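
After saving the file, it is worth letting HAProxy validate the configuration before reloading the service; this catches syntax errors immediately:

        # Check the configuration for errors, then reload the running service
        haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl reload haproxy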

To ensure that a client will not have its session ended prematurely, the connection between the client and the frontend times out after 8 hours (timeout client 8h), as does the connection between the frontend and the backend (timeout server 8h). This could be considered a bit of an overestimation, but I have encountered deployments that had this requirement.

HTTP traffic is forced to HTTPS by the DynamicsClusterForceToHTTPS listener, which redirects clients to the DynamicsCluster frontend. Source IP persistence (session stickiness) via stick on src ensures that each client sticks to one of the backends, and each entry is stored in the persistence table for 8 hours (stick-table type ip size 10240k expire 8h), essentially the duration of a work day.
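
To verify that persistence entries are actually being created, the stick table can be inspected at runtime over the stats socket defined in the global section. This sketch assumes socat is installed on the CentOS host:

        # Dump current persistence entries for the DynamicsCluster listener
        echo "show table DynamicsCluster" | socat stdio /var/lib/haproxy/stats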

Connections are distributed between the two application servers using the least connections algorithm (balance leastconn): the server with the fewest active connections is prioritized for each new connection, with server weights taken into account.
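
Because leastconn honours server weights, you could skew the distribution if one application server is more powerful than the other. A hypothetical example (the weight values here are purely illustrative):

        server win2016-dynamics-rip1 192.168.1.200:443 check weight 2 inter 10s rise 2 fall 3
        server win2016-dynamics-rip2 192.168.1.201:443 check weight 1 inter 10s rise 2 fall 3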

For the configuration above, HAProxy does SSL passthrough, so the TLS connection is terminated on the application servers rather than on the load balancer. Of course, the configuration can be expanded: we can take some of the workload off the application servers by terminating the SSL connection on the load balancer instead.
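
As a rough sketch, terminating TLS on the load balancer would mean switching the DynamicsCluster listener to HTTP mode and binding with a certificate. The PEM path below is a placeholder, and this assumes the Dynamics servers also accept plain HTTP on port 80; adjust both to your environment:

listen DynamicsCluster
        # Terminate TLS here using a combined certificate/key PEM file
        # (/etc/haproxy/dynamics.pem is a placeholder path)
        bind 192.168.1.100:443 ssl crt /etc/haproxy/dynamics.pem
        mode http
        option httplog
        balance leastconn
        stick on src
        stick-table type ip size 10240k expire 8h
        # Tell the backends the original request arrived over HTTPS
        http-request set-header X-Forwarded-Proto https
        server win2016-dynamics-rip1 192.168.1.200:80 check inter 10s rise 2 fall 3
        server win2016-dynamics-rip2 192.168.1.201:80 check inter 10s rise 2 fall 3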

And there you have it. For further details on how to accomplish this, check out our Client Certificate Authentication with HAProxy blog.
