Feature #1925

Active-active RabbitMQ installation modularization

Added by Christos KK Loverdos about 12 years ago. Updated almost 12 years ago.

Status:         Closed
Priority:       Medium
Assignee:       Giorgos Gousios
Category:       -
Target version: v0.2
Start date:     01/24/2012
Due date:       -
% Done:         100%
Spent time:     -

Description

Setup an active-active RabbitMQ installation and provide relevant documentation.

History

#1 Updated by Giorgos Gousios about 12 years ago

  • % Done changed from 0 to 90

Notes for developers:
--------------------

A few notes on developing with RabbitMQ in high availability mode. The examples
are in Python using the Pika library.

-Make sure that all exchanges are declared as durable:

chan.exchange_declare(exchange="log", type="topic",
                      durable=True, auto_delete=False)

-Make sure that all queues are declared as durable and non-exclusive, and
that their arguments hash contains an 'x-ha-policy': 'all' entry, so that
the queue is mirrored across all cluster nodes.

chan.queue_declare(queue="log", durable=True,
                   exclusive=False, auto_delete=False,
                   callback=self.on_queue_declared,
                   arguments={'x-ha-policy': 'all'})

-Make sure that clients know the connection details of all cluster nodes.
If a node fails (its connection dies), connect to the next available node
and retry the last operation. This means that clients need to keep track of
the operation(s) they have issued to RabbitMQ; a sketch of this
reconnect-and-retry logic is shown after the link below. For client libs
that support it (to the best of my knowledge, Pika doesn't), the client
should also catch consumer cancellation notifications:

http://www.rabbitmq.com/extensions.html#consumer-cancel-notify
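
As a rough illustration of the reconnect-and-retry logic, here is a minimal
sketch using Pika's blocking API (the node list, the helper names and the
routing key are made up for the example; the snippets above use Pika's
asynchronous API instead):

import pika
from pika.exceptions import AMQPConnectionError, AMQPError

# All cluster nodes the client knows about (addresses from the setup below).
NODES = ["10.0.0.2", "10.0.0.3"]

def connect_to_cluster():
    # Try each node in turn and return the first connection that succeeds.
    for host in NODES:
        try:
            return pika.BlockingConnection(pika.ConnectionParameters(host=host))
        except AMQPConnectionError:
            continue  # this node is down; try the next one
    raise RuntimeError("no RabbitMQ cluster node is reachable")

def publish_with_retry(message, retries=3):
    # Remember the pending operation and re-issue it on a fresh
    # connection if the current node dies mid-operation.
    for _ in range(retries):
        try:
            conn = connect_to_cluster()
            chan = conn.channel()
            chan.basic_publish(exchange="log", routing_key="log.info",
                               body=message)
            conn.close()
            return
        except AMQPError:
            continue  # connection died; reconnect and retry elsewhere
    raise RuntimeError("operation failed on all cluster nodes")

A consumer would do the analogous thing around basic_consume: on connection
loss, reconnect to the next node, re-declare the durable, mirrored queue and
re-subscribe.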

Notes for sysadmins:
--------------------
The following works with RabbitMQ v2.7.1 on two Okeanos VMs with internal
networking (10.0.0.* range).

RabbitMQ Debian repo:
deb http://www.rabbitmq.com/debian/ testing main
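
With that line in /etc/apt/sources.list (and the RabbitMQ signing key
imported via apt-key, if apt complains about unauthenticated packages),
installation is the usual:

apt-get update
apt-get install rabbitmq-server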

More detailed instructions, including automated setup at:
http://www.rabbitmq.com/clustering.html#auto-config

Bind to IPv4:
For some reason, on my setup, RabbitMQ insists on binding to IPv6. You
can disable this behaviour by setting the following in
/etc/rabbitmq/rabbitmq-env.conf:

NODENAME=rabbit@nodename   (nodename is snf-51 or snf-459 in the examples below)
NODE_IP_ADDRESS=0.0.0.0

Initial nodes:
snf-51.vm.okeanos.grnet.gr (10.0.0.2) (cluster master)
snf-459.vm.okeanos.grnet.gr (10.0.0.3) (slave/master candidate)

-Add entries to /etc/hosts so that each node can be pinged from the other
with just the host name (see the example below).
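
For the two nodes above, that means something like the following on both VMs:

10.0.0.2    snf-51
10.0.0.3    snf-459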

-Open ports 5672 (AMQP) and 4369 (epmd, the Erlang port mapper) on both
nodes; one possible way is shown below.
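
If the firewall is managed with plain iptables, something like:

iptables -A INPUT -p tcp --dport 5672 -j ACCEPT
iptables -A INPUT -p tcp --dport 4369 -j ACCEPT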

-Stop rabbitmq, copy the value of /var/lib/rabbitmq/.erlang.cookie on the
master node to the same file on all slaves, then restart rabbitmq. One way
to do this is sketched below.
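
A possible sequence, assuming root SSH access between the VMs and the
Debian default paths (the cookie file must remain owned by rabbitmq with
mode 400):

on both nodes: /etc/init.d/rabbitmq-server stop
on snf-51:     scp /var/lib/rabbitmq/.erlang.cookie root@snf-459:/var/lib/rabbitmq/.erlang.cookie
on snf-459:    chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
               chmod 400 /var/lib/rabbitmq/.erlang.cookie
on both nodes: /etc/init.d/rabbitmq-server start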

-Add a user with the same username/password on both nodes. This is actually
not necessary for the cluster itself, but it will be necessary for clients
connecting to either of the cluster nodes.
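
For example, on each node (username and password here are placeholders):

rabbitmqctl add_user okeanos <password>
rabbitmqctl set_permissions okeanos ".*" ".*" ".*"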

-Make sure that the nodes can connect to each other, e.g.
on snf-51: rabbitmqctl -n rabbit@snf-459 status
on snf-459: rabbitmqctl -n rabbit@snf-51 status

-To connect them, run the following on the slave:
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl cluster rabbit@snf-51 rabbit@snf-459
rabbitmqctl start_app

-Make sure the cluster is set up; both nodes should appear in the nodes and
running_nodes lists printed by:

rabbitmqctl cluster_status

#2 Updated by Giorgos Gousios about 12 years ago

  • Assignee set to Giorgos Gousios
  • Target version set to v0.2
  • % Done changed from 90 to 100

Instructions provided above; installation working on dev71/dev72 on the development cluster.

#3 Updated by Christos KK Loverdos almost 12 years ago

  • Status changed from New to Closed
