Bug #3421

Pool size settings

Added by Stratos Psomadakis about 11 years ago. Updated almost 11 years ago.

Status: Feedback
Priority: High
Assignee: Georgios Tsoukalas
Category: -
Target version: 0.13.0
Start date: 03/08/2013
Due date: -
% Done: 0%
Spent time: -

Description

The default DB pool size settings don't work and lead to a deadlock under load (concurrent requests). Since each connection needs 2 DB requests, the DB pool size should be at least twice the Apache MaxClients setting (MaxClients: 50 -> DB pool size: 100).

We must also set the HTTP connection pool to a 'sane' value. At the moment, on okeanos.io, the HTTP connection pool size is 50 (Apache MaxClients, db_pool_size / 2). See also #3419 (about a configurable HTTP pool size).

(The issue is easily reproduced with only one worker, default settings, and concurrent requests/curls.)
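The sizing rule above can be sketched as a small check. This is a minimal illustration with hypothetical names, not Synnefo code; the only facts taken from the report are the "2 DB requests per connection" ratio and MaxClients = 50.

```python
# Hypothetical sketch of the sizing rule from the bug description:
# each client connection issues 2 DB requests, so the DB pool must be
# at least twice Apache's MaxClients to avoid deadlock under full load.

DB_REQUESTS_PER_CONNECTION = 2   # ratio stated in the report
APACHE_MAX_CLIENTS = 50          # Apache MaxClients on okeanos.io

def min_db_pool_size(max_clients, reqs_per_conn=DB_REQUESTS_PER_CONNECTION):
    """Smallest DB pool size that cannot deadlock when all workers are busy."""
    return max_clients * reqs_per_conn

def http_pool_size(max_clients, db_pool_size):
    """A 'sane' HTTP pool size: bounded by MaxClients and by db_pool_size / 2."""
    return min(max_clients, db_pool_size // DB_REQUESTS_PER_CONNECTION)

db_pool = min_db_pool_size(APACHE_MAX_CLIENTS)           # 100
http_pool = http_pool_size(APACHE_MAX_CLIENTS, db_pool)  # 50
```

With the report's numbers this reproduces the stated values: a DB pool of 100 and an HTTP pool of 50.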


Related issues

related to Synnefo - Bug #3420: kamaki pool initialization New 03/08/2013

Associated revisions

Revision b336e6fa
Added by Georgios D. Tsoukalas about 11 years ago

Fix+move HTTP quotaholder client in synnefo.lib

Allow per-service configuration of the (http) quotaholder client.
Kamaki is no longer needed in service (or ganeti) nodes,
because the client has been moved to snf-common.

Also fix the default quotaholder settings for pithos backend to be disabled
by default, and don't initialize quotaholder client when not needed.
This fixes crashes of non-user-facing pithos backend uses such as
pithcat from snf-image.

Refs #3421

History

#1 Updated by Georgios Tsoukalas almost 11 years ago

  • Status changed from New to Feedback
  • Target version changed from 0.14.0 to 0.13.0

The current practice is to (down)scale the pool sizes and (up)scale the maximum DB connections, so that the total number of database connections possible through the pools is somewhat less than what the database allows.
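That practice amounts to a simple capacity check. The sketch below uses illustrative, assumed numbers (the database limit and headroom factor are not from this report):

```python
# Sketch of the practice above: the combined pool sizes across all workers
# must stay somewhat below the database's connection limit.
# Both constants here are assumptions for illustration only.

DB_MAX_CONNECTIONS = 300   # assumed database limit (e.g. max_connections)
HEADROOM = 0.9             # keep total pooled connections below 90% of it

def pools_fit(num_workers, pool_size_per_worker,
              db_max=DB_MAX_CONNECTIONS, headroom=HEADROOM):
    """True if the pools, fully drawn, cannot exhaust the database."""
    return num_workers * pool_size_per_worker <= db_max * headroom

ok = pools_fit(2, 100)        # 200 <= 270 -> True
too_many = pools_fit(4, 100)  # 400 <= 270 -> False
```

Downscaling the per-worker pool or raising the database's limit both restore the invariant; the deadlock in this bug came from the pools being sized without reference to either bound.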
