
Chapter 6
Clustering
6.1 How it Works
An XMPP domain is served by one or more ejabberd nodes. These nodes can run on different machines that are connected via a network. They must all be able to connect to port 4369 of all the other nodes, and must share the same magic cookie (see the Erlang/OTP documentation; in other words, the file ~ejabberd/.erlang.cookie must be the same on all nodes). This is needed because all nodes exchange information about connected users, s2s connections, registered services, etc.
Each ejabberd node has the following modules: router, local router, session manager, s2s manager.
6.1.1 Router
This module is the main router of XMPP packets on each node. It routes them based on their destination's domain, using a global routing table. The domain of the packet's destination is looked up in the routing table; if it is found, the packet is routed to the appropriate process, otherwise it is sent to the s2s manager.
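Schematically, this decision can be sketched as follows. This is an illustrative sketch only: the record shape, function name, and process names are invented for clarity and are not ejabberd's actual internals.

```erlang
%% Sketch only: invented names, not ejabberd's real modules or records.
-record(route, {domain, pid}).

route(From, To, Packet) ->
    {_User, Domain, _Resource} = To,             % destination JID as a tuple
    case mnesia:dirty_read(route, Domain) of     % global routing table lookup
        [#route{pid = Pid}] ->
            Pid ! {route, From, To, Packet};     % known domain: hand to its process
        [] ->
            ejabberd_s2s ! {route, From, To, Packet}  % unknown: let the s2s manager handle it
    end.
```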
6.1.2 Local Router
This module routes packets which have a destination domain equal to one of this server’s host
names. If the destination JID has a non-empty user part, it is routed to the session manager,
otherwise it is processed depending on its content.
6.1.3 Session Manager
This module routes packets to local users. It looks up which user resource a packet must be sent to via a presence table. The packet is then either routed to the appropriate c2s process, stored in offline storage, or bounced back.
6.1.4 s2s Manager
This module routes packets to other XMPP servers. First, it checks whether an open s2s connection from the domain of the packet's source to the domain of the packet's destination already exists. If so, the s2s manager routes the packet to the process serving this connection; otherwise, a new connection is opened.
6.2 Clustering Setup
Suppose you have already configured ejabberd on one machine named first, and you need to set up another one, second, to form an ejabberd cluster. Then follow these steps:
1. Copy the file ~ejabberd/.erlang.cookie from first to second.

(alt) You can also add the `-setcookie content_of_.erlang.cookie' option to all `erl' commands below.
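The copy can be done for example with scp; the snippet below simulates it locally with temporary directories standing in for ~ejabberd on each machine, so the paths are placeholders for illustration only.

```shell
# Simulated locally; in production you would run something like:
#   scp first:~ejabberd/.erlang.cookie second:~ejabberd/
SRC=$(mktemp -d)   # stands in for ~ejabberd on "first"
DST=$(mktemp -d)   # stands in for ~ejabberd on "second"
echo "SOMERANDOMCOOKIE" > "$SRC/.erlang.cookie"
chmod 400 "$SRC/.erlang.cookie"

cp "$SRC/.erlang.cookie" "$DST/.erlang.cookie"
chmod 400 "$DST/.erlang.cookie"   # the cookie file must not be readable by other users

# The cookie must be byte-identical on every node:
cmp -s "$SRC/.erlang.cookie" "$DST/.erlang.cookie" && echo "cookies match"
```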
2. On second, run the following command as the ejabberd daemon user, in the working directory of ejabberd:

erl -sname ejabberd \
    -mnesia dir '"/var/lib/ejabberd/"' \
    -mnesia extra_db_nodes "['ejabberd@first']" \
    -s mnesia
This will start Mnesia serving the same database as ejabberd@first. You can check this by running the command `mnesia:info().'. You should see a lot of remote tables and a line like the following:

running db nodes = [ejabberd@first, ejabberd@second]

Note: the Mnesia directory may be different on your system. To find out where ejabberd expects Mnesia to be installed by default, call ejabberdctl (see section 4.1) without options; it will show some help, including the Mnesia database spool dir.
3. Now run the following in the same `erl' session:

mnesia:change_table_copy_type(schema, node(), disc_copies).

This will create local disc storage for the database.

(alt) Change the storage type of the schema table to `RAM and disc copy' on the second node via the Web Admin.
4. Now you can add replicas of various tables to this node with `mnesia:add_table_copy' or `mnesia:change_table_copy_type' as above (just replace `schema' with another table name; `disc_copies' can likewise be replaced with `ram_copies' or `disc_only_copies').

Which tables to replicate depends very much on your needs; you can get some hints from the command `mnesia:info().', by looking at the size of the tables and the default storage type of each table on first.

Replicating a table makes lookups in this table faster on this node. Writing, on the other hand, will be slower. And of course, if the machine hosting one of the replicas goes down, the other replicas will be used.

Section 5.3 (Table Fragmentation) of the Mnesia User's Guide [1] can also be helpful.

(alt) Same as the previous item, but for other tables.
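For example, in the same `erl' session, replication commands could look as follows. The table names here are only illustrative examples; pick real ones from the `mnesia:info().' output on first.

```erlang
%% Hypothetical table choices -- adjust to the tables your deployment uses.
mnesia:add_table_copy(passwd, node(), disc_copies).   % replicate to disc on this node
mnesia:add_table_copy(session, node(), ram_copies).   % RAM-only replica
```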
5. Run `init:stop().' or just `q().' to exit the Erlang shell. This may take some time if Mnesia has not yet transferred and processed all the data it needs from first.
6. Now run ejabberd on second with a configuration similar to the one on first: you probably do not need to duplicate the `acl' and `access' options, because they will be taken from first; and mod_irc should be enabled on only one machine in the cluster.
You can repeat these steps for any other machines that are supposed to serve this domain.
6.3 Service Load-Balancing
6.3.1 Domain Load-Balancing Algorithm
ejabberd includes an algorithm to load-balance the components that are plugged into an ejabberd cluster. This means that you can plug one or several instances of the same component into each ejabberd cluster, and the traffic will be automatically distributed.

The default distribution algorithm tries to deliver to a local instance of a component. If several local instances are available, one instance is chosen at random. If no instance is available locally, one is chosen at random among the remote component instances.
If you need a different behaviour, you can change the load-balancing behaviour with the option domain_balancing. The syntax of the option is the following:

{domain_balancing, "component.example.com", BalancingCriteria}.

Several balancing criteria are available:
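As a concrete example, a configuration line could look like the following. The criterion name bare_source is an assumption taken from ejabberd's documentation of this option; check the list of values supported by your release.

{domain_balancing, "component.example.com", bare_source}.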
[1] http://www.erlang.org/doc/apps/mnesia/Mnesia_chap5.html#5.3
