PHP Scaling : Load Balancers


Load Balancers
All incoming (and outgoing) traffic goes through the load balancer. This is required to distribute requests fairly across the different application servers.
There are two main types of load balancers:
  •  Hardware Load Balancers
  •  Software Load balancers
Issues with Hardware Load Balancers:
1. Cost
2. It acts as a black box; you cannot see what exactly goes on inside.
It is always recommended to have multiple load balancers, so that the load balancer itself is not a single point of failure: even if two load balancers are enough for the traffic, add one extra.
HAProxy:
It is written in C, is event-based, and has a low CPU/memory footprint, so it runs on modest hardware. It can handle 20,000-30,000 connections per second.
It is written purely to be a reverse proxy.
Layer 4 vs Layer 7 (TCP vs HTTP)
HAProxy supports both TCP and HTTP load balancing. In TCP mode it only forwards TCP packets, which takes less CPU/memory, but it does not parse the HTTP headers before forwarding. Nginx, by contrast, load-balances only at the HTTP layer.
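As a sketch, the two modes differ by a single `mode` directive in haproxy.cfg (the listener names and addresses below are made up for illustration):

```
# Layer 4: raw TCP forwarding, lowest CPU/memory cost
listen app_tcp
    bind *:80
    mode tcp
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check

# Layer 7: HTTP parsing, enables header-based features
listen app_http
    bind *:8080
    mode http
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```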
SSL Termination:
If we are using HTTPS, we need to decide where the SSL termination happens. If SSL termination happens on the application servers, HTTP headers cannot be parsed in the load balancer. If it happens on the load balancer, it increases the load balancer's CPU utilization. It is advisable to terminate SSL on the application servers.
Better health checks and distribution algorithms:
Nginx only uses a timeout for health checks: if an application server times out, it is removed from the list of application servers. Furthermore, Nginx (in its basic configuration) supports only the round-robin method for request distribution.
HAProxy distribution algorithms:
1. Round robin
2. Least connections
3. Source: hashes the client IP address (sticky sessions)
4. URI: hashes the request URI
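In haproxy.cfg these map onto the `balance` directive; a minimal sketch (server addresses are made up):

```
backend app
    balance roundrobin   # alternatives: leastconn, source, uri
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```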
Choosing hardware for load balancers:
HAProxy runs in single-process mode. Because of this, machines with a powerful single CPU are preferable to multi-core machines.
Automatic Failover with keepalived:
keepalived is a daemon that monitors the other load balancer; if that server stops responding, keepalived takes over its IP address (a floating IP), giving automatic failover.
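A minimal keepalived.conf sketch for the floating-IP failover described above (the interface name, router id, and addresses are assumptions):

```
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the standby load balancer
    interface eth0
    virtual_router_id 51
    priority 101            # give the backup a lower priority, e.g. 100
    virtual_ipaddress {
        10.0.0.100          # floating IP that clients connect to
    }
}
```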
Fine-tuning Linux for handling huge numbers of connections:
1. net.core.somaxconn:
Defines the kernel's maximum queue size for accepting new connections. The default is 128; increase it if more pending connections need to be handled.
2. net.ipv4.ip_local_port_range:
Defines the range of usable local ports on the system. A wider port range allows more concurrent connections.
3. net.ipv4.tcp_tw_reuse:
In the TCP protocol, when a connection ends, the socket is still held until the TIME_WAIT period completes; only after that is the connection closed. On a busy server this can cause the system to run out of ports/sockets. This setting tells the kernel to reuse sockets in TIME_WAIT when it is safe to do so.
4. ulimit -n 99999
Sets the maximum number of open files (sockets included) for the process.
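The four knobs above can be sketched as a sysctl fragment (the concrete values are illustrative, not recommendations; tune them for your workload):

```
# /etc/sysctl.conf (apply with `sysctl -p`)
net.core.somaxconn = 4096
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
```

The file-descriptor limit is set per shell or init script, e.g. `ulimit -n 99999`, or permanently in /etc/security/limits.conf.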
Debugging Load Balancer for Slowness or unresponsiveness:
Saturating the network:
Verify the network card configuration. If the network card is configured for 150 Mbps, it cannot pass 200 Mbps. Check both the private and the public network interfaces.
Running Out of Memory:
Check the memory taken by HAProxy using the 'free' command.
Lot of connections in TIME_WAIT:
A large number of connections stuck in TIME_WAIT will slow down the load balancer.

PHP Session Handling in Clustered environment

Sessions are an important part of a web application, since we need to carry user information/data between requests. By default, PHP stores session data in files on the server and identifies the client with a session ID cookie in the browser.

Problems with storing sessions in the file system:
1. Large I/O operation time.
2. We need to use sticky sessions in the server cluster, and this creates problems for scalability.
Another option is storing sessions in a relational database. But a session is a small piece of temporary information that changes constantly, and a relational database may not be the right choice: getting and setting it requires a lot of costly I/O operations.
PHP has an option to plug in a session handler via php.ini (or ini_set() at runtime).
Setting session handlers:
Memcache:
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', '192.16.32.112:11211,192.16.32.113:11211');
The fastest way to scale sessions is to set up a memcached server and use it for session handling.
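The same two settings can live in php.ini instead of being set at runtime (addresses taken from the example above):

```
; php.ini
session.save_handler = memcached
session.save_path    = "192.16.32.112:11211,192.16.32.113:11211"
```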
Issues with Memcache:
memcached is an in-memory data store that does not persist anything to a file. If the server crashes, all session data is gone; in exchange, memcached is extremely fast.
Redis:
Redis stores the data in memory like memcached, but it can also persist it to the file system.
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://192.16.32.112:6379?timeout=0.5,tcp://192.16.32.113:6379?timeout=0.5');
Another way to implement sessions is through cookies: the session data itself is stored in a browser cookie.
Issues:
1. Cookie data is sent back and forth with every HTTP request, so large cookies consume a lot of bandwidth.
2. Some browsers set a maximum size limit on cookies.
3. Cookie data is visible on the client side (using tools like Firebug) and across the wire if you are not using HTTPS.
4. Clients can modify cookie data; signing it with a Hash-based Message Authentication Code (HMAC) lets the server detect any tampering.
Cookie session handling is not built into PHP, but it can be implemented using the SessionHandlerInterface.
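A sketch of such a handler, storing the serialized session data in an HMAC-signed cookie (the cookie name, secret, and class are inventions for illustration; PHP 8 syntax):

```php
<?php
// Sketch: cookie-backed sessions via SessionHandlerInterface.
// The HMAC signature addresses issue 4 above: clients can read the
// data, but any tampering is detected on the next request.
class CookieSessionHandler implements SessionHandlerInterface
{
    public function __construct(private string $secret) {}

    public function open(string $path, string $name): bool { return true; }
    public function close(): bool { return true; }

    public function read(string $id): string|false
    {
        if (!isset($_COOKIE['SESSDATA'])) {
            return '';
        }
        $parts = explode('.', $_COOKIE['SESSDATA'], 2);
        if (count($parts) !== 2) {
            return '';
        }
        [$payload, $mac] = $parts;
        // Reject the cookie if the signature does not match.
        if (!hash_equals(hash_hmac('sha256', $payload, $this->secret), $mac)) {
            return '';
        }
        return base64_decode($payload);
    }

    public function write(string $id, string $data): bool
    {
        $payload = base64_encode($data);
        $mac = hash_hmac('sha256', $payload, $this->secret);
        // write() runs at shutdown, so output buffering must be on
        // (headers cannot have been sent yet for setcookie to work).
        return setcookie('SESSDATA', $payload . '.' . $mac, ['path' => '/']);
    }

    public function destroy(string $id): bool
    {
        return setcookie('SESSDATA', '', ['expires' => time() - 3600]);
    }

    public function gc(int $max_lifetime): int|false { return 0; }
}

session_set_save_handler(new CookieSessionHandler('change-this-secret'), true);
```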

Lazy Initialization and Eager Initialization of Singleton pattern

Lazy initialization
This method uses double-checked locking, which should not be used prior to J2SE 5.0, as it is vulnerable to subtle bugs. The problem is that an out-of-order write may allow the instance reference to be returned before the Singleton constructor is executed.[8]
public class Singleton {
        private static volatile Singleton instance = null;
 
        private Singleton() {   }
 
        public static Singleton getInstance() {
                if (instance == null) {
                        synchronized (Singleton.class){
                                if (instance == null) {
                                        instance = new Singleton();
                                }
                      }
                }
                return instance;
        }
}
 
Eager initialization
If the program will always need an instance, or if the cost of creating the instance is not too large in terms of time/resources, the programmer can switch to eager initialization, which always creates an instance:
public class Singleton {
    private static final Singleton instance = new Singleton();
 
    private Singleton() {}
 
    public static Singleton getInstance() {
        return instance;
    }
}
This method has a number of advantages:
  • The instance is not constructed until the class is used.
  • There is no need to synchronize the getInstance() method, meaning all threads will see the same instance and no (expensive) locking is required.
  • The final keyword means that the instance cannot be redefined, ensuring that one (and only one) instance ever exists.
 
This method also has some drawbacks:
If the program uses the class, but perhaps not the singleton instance itself, then you may want to switch to lazy initialization.

JAVA HEAP SPACE AND PERM GEN

The heap stores all of the objects created by your Java program. The heap's contents are monitored by the garbage collector, which frees memory from the heap when you stop using an object (i.e. when there are no more references to it).
 
This is in contrast with the stack, which stores primitive types like ints and chars, typically as local variables and method return values. These are not garbage collected.
 
The perm space (permanent generation) refers to a special part of the heap that holds class metadata and interned strings.

PHP GEARMAN : Best Way to handle the CPU intensive BackGround jobs

Best Way to handle the CPU intensive jobs:
 
E.g. a YouTube user uploading a video: we get the content/data from the user, process it, and send a response to the client.
 
Basic Architecture:
Have a message queue. Each client thread adds its data to the same queue and polls the queue for the job status. At the other end of the queue, workers process the jobs and then notify the UI or email the client.
 
PHP syntax: ignore_user_abort(true) => the script keeps running even if the client aborts the connection, so background work can finish after the response is sent.
 
 
PHP GEARMAN
 
A generic framework for farming work out to multiple machines/processes.
 
  •  Written in C
  •  Multi-threaded
  •  Persistent queues
  •  No single point of failure
 
Client: creates a job and sends it to the job server.
Worker: registers with the job server, gets a job, and processes it.
Job server: coordinates jobs from clients to workers and handles restarts.
 
Gearman application architecture:

Application Client ===> | Gearman Client API ===> Gearman Job Server ===> Gearman Worker API | ===> Application Worker
Sample client:
$client = new GearmanClient();
$client->addServer();
print $client->do("reverse", "Hello world");           // run the job in the foreground (synchronous)
print $client->doBackground("reverse", "Hello world"); // run the job in the background (asynchronous)
 
 
Sample worker:

$worker = new GearmanWorker();
$worker->addServer("172.16.33.12", 6322);
$worker->addFunction("reverse", "my_reverse_function");
while ($worker->work());

function my_reverse_function($job) {
    return strrev($job->workload());
}
 
 
Running:
gearmand -d
Shell$ php worker.php &
17510

Shell$ php client.php
dlrow olleH
 
Gearman support distributed processing with synchronous and asynchronous queues.
 
Gearman: jobs vs tasks
A job is a task, but a task is not necessarily a job; e.g. checking a job's status is a task but not a job.
Clients submit tasks.
Workers process jobs.
 
Concurrent task API:
Queue multiple jobs at once.
Register callback functions for specific events.
No promise about the order in which jobs are processed.
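A sketch of the concurrent task API using the pecl gearman extension (the host/port are assumptions, and a worker for "reverse" like the one above must be running):

```php
<?php
// Sketch: queue several tasks, register an event callback, run them all.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);   // default gearmand port

// Fired as each task completes; completion order is not guaranteed.
$client->setCompleteCallback(function (GearmanTask $task) {
    echo $task->data(), "\n";
});

$client->addTask('reverse', 'Hello world');
$client->addTask('reverse', 'PHP Gearman');
$client->runTasks();   // blocks until all queued tasks finish
```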
 
Thread Model:
Specify the thread count with the -t parameter when starting gearmand.
By default it is single-threaded.
 
libevent is an asynchronous event notification software library
 
Currently there are three types of threads in gearmand:
1. Listening and management thread: listens for incoming connections, assigns them to the I/O threads, and manages server startup.
2. I/O threads: responsible for the read and write system calls on the sockets and for initial packet parsing. Once a packet has been parsed, it is put into an asynchronous queue for the processing thread.
3. Processing thread: should make no system calls (except the occasional brk() for more memory) and manages the various lists and hash tables used for tracking unique keys, job handles, functions, and job queues.
 
Queues: inside the Gearman job server, all job queues are stored in memory, so when the server is restarted all pending jobs are gone.
Persistent queues are supported only for background jobs.
Persistence uses libdrizzle.
The persistent queue is only enabled for background jobs because foreground jobs have an attached client. If a job server goes away, the client can detect this and restart the foreground job somewhere else (or report an error back to the original caller). Background jobs on the other hand have no attached client and are simply expected to be run when submitted.

PHP: Things that needs to be taken care in cluster mode

1. The source code should be the same in all environments.
2. If each node in the cluster needs to point to different resources (like a DB or filer), don't make server-specific code changes. Instead, use the same code on every server and differentiate via environment variables.
3. Easiest way to cluster the DB: a master-slave system (one master for INSERTs, all slaves for SELECTs). Writing to every database is a maintenance nightmare (request routing and keeping all the databases in sync).
4. NoSQL is another solution.
5. Cluster deployment: this work must be automated. Phing, rsync, pull/push actions in a distributed revision control system (e.g. Mercurial), or even a simple bash script can be enough.
6. If your application is protected by any authentication mechanism, you must ensure all nodes can authenticate against the same database. A good pattern is an external authentication mechanism such as OAuth on a separate server; if that is not possible, you must consider where the user/password database is located.

PHP Application code changes to use Db cluster

Things we need to take care while running the php in cluster mode:
1. Cache: don't assume the cache lives on the local server; e.g. APC is per-server, so relying on it will fail in a cluster.
2. PHP sessions: we cannot store PHP sessions in a single machine's file system.
3. Any machine-dependent code needs to be updated, like code that runs background jobs.
PHP DB Master – Slave setup:
Advantages:
We get redundancy and data isolation, and the DB server load is shared across many machines.
 
In a master-slave setup, write operations go to the master and reads go to the load-balanced slaves. Writes on the master are replicated to the slaves periodically; until that happens, reads of freshly updated data should go to the master, while all other reads go to the slaves. This replication delay is called slave lag.
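One common way to cope with slave lag is read-after-write routing: once a request has performed a write, send its subsequent reads to the master. A sketch (the `RoutedDB` class is an invention; the pool names match the DB sample code later in this section):

```php
<?php
// Sketch: after the first write in a request, route all further
// reads to the 'write' (master) pool to avoid stale slave reads.
class RoutedDB
{
    private static $wroteInThisRequest = false;

    public static function poolFor($intent)
    {
        if ($intent === 'write') {
            self::$wroteInThisRequest = true;
            return 'write';
        }
        // Reads go to the slaves until the request performs a write.
        return self::$wroteInThisRequest ? 'write' : 'read';
    }
}

// Usage with the DB class from this section:
//   $pdo = DB::getConnection(RoutedDB::poolFor('read'));
```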
 
Multi-Slave,One Master:
Approach 1: one SQL slave per web server, with all slaves replicating from one master.

In practice the ratio depends on your application: 3 web servers might need 20 SQL slaves, or vice versa. A logic-intensive application needs more web servers; a read-intensive one needs more DB servers.

Usual hardware requirement for a DB server: fast I/O.
Usual hardware requirement for a PHP server: fast CPU.
 
Sample Code:
<?php

class DB {
    // Configuration information:
    private static $user = 'testUser';
    private static $pass = 'testPass';
    private static $config = array(
        'write' =>
            array('mysql:dbname=MyDB;host=10.1.2.3'),
        'read' =>
            array('mysql:dbname=MyDB;host=10.1.2.7',
                  'mysql:dbname=MyDB;host=10.1.2.8',
                  'mysql:dbname=MyDB;host=10.1.2.9')
        );

    // Static method to return a database connection:
    public static function getConnection($server) {
        // First make a copy of the server array so we can modify it
        $servers = self::$config[$server];
        $connection = false;

        // Keep trying to make a connection:
        while (!$connection && count($servers)) {
            $key = array_rand($servers);
            try {
                $connection = new PDO($servers[$key],
                                      self::$user, self::$pass);
            } catch (PDOException $e) {}

            if (!$connection) {
                // We couldn't connect.  Remove this server:
                unset($servers[$key]);
            }
        }

        // If we never connected to any database, throw an exception:
        if (!$connection) {
            throw new Exception("Failed: {$server} database");
        }

        return $connection;
    }
}

// Do some work

$read  = DB::getConnection('read');
$write = DB::getConnection('write');

. . .

?>
Database pooling => selective scaling

Virtually divide all the DB slaves so that each group serves one specific feature, like comments, blog, or batch processing. This is done mainly to isolate high load: for example, the DB pool that serves the home page needs more slaves than the batch-processing pool.
 
<?php

class DB {
    // Configuration information:
    private static $user = 'testUser';
    private static $pass = 'testPass';
    private static $config = array(
        'write' =>
            array('mysql:dbname=MyDB;host=10.1.2.3'),
        'primary' =>
            array('mysql:dbname=MyDB;host=10.1.2.7',
                  'mysql:dbname=MyDB;host=10.1.2.8',
                  'mysql:dbname=MyDB;host=10.1.2.9'),
        'batch' =>
            array('mysql:dbname=MyDB;host=10.1.2.12'),
        'comments' =>
            array('mysql:dbname=MyDB;host=10.1.2.27',
                  'mysql:dbname=MyDB;host=10.1.2.28'),
        );

    // Static method to return a database connection to a certain pool
    public static function getConnection($pool) {
        // Make a copy of the server array, to modify as we go:
        $servers = self::$config[$pool];
        $connection = false;

        // Keep trying to make a connection:
        while (!$connection && count($servers)) {
            $key = array_rand($servers);
            try {
                $connection = new PDO($servers[$key],
                    self::$user, self::$pass);
            } catch (PDOException $e) {}

            if (!$connection) {
                // Couldn't connect to this server, so remove it:
                unset($servers[$key]);
            }
        }

        // If we never connected to any database, throw an exception:
        if (!$connection) {
            throw new Exception("Failed Pool: {$pool}");
        }

        return $connection;
    }
}

// Do something comment-related
$comments = DB::getConnection('comments');
. . .

?>