Our architecture was guided by the following requirements:
- Configuration and building of virtual machines must be fully automatable, without human intervention, and integrated with payment processing and related systems.
- Virtual machines must be buildable from configuration data alone, so they can be fully rebuilt in a disaster-recovery scenario.
- Configuration changes must be automatable and retriable.
- The system may be specific to clouds hosting particular applications (we host only LedgerSMB as an ERP solution).
The Basic Structure and Role of PostgreSQL
Our approach is relatively simple. Data enters through either an administrative or a customer portal and is transmitted through a limited API into our configuration database. Requests may be for new virtual machines, configuration changes, and the like. Payment notifications arrive through these same interfaces.
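As a rough sketch, the intake side of the configuration database might look like the following. The table and column names here are illustrative assumptions, not our production schema:

```sql
-- Hypothetical request table behind the limited API; names are
-- illustrative, not the actual production schema.
CREATE TABLE config_requests (
    id          serial PRIMARY KEY,
    customer_id integer NOT NULL,
    req_type    text    NOT NULL,  -- e.g. 'new_vm', 'config_change', 'payment'
    payload     text    NOT NULL,  -- request details from the portal
    created_at  timestamptz NOT NULL DEFAULT now()
);
```

The portal never touches this table directly; the limited API mediates all inserts, which is what keeps the attack surface small.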
The configuration system then attaches to PostgreSQL, picks up notifications of needed configuration changes, and orchestrates them across the system. This also lets us pull information on our service deployments into our financial system for billing purposes (we run beta versions of LedgerSMB 1.4 internally, eating our own dogfood, so to speak).
In this regard, PostgreSQL acts as an information backplane. It allows our software components to talk to each other, with messages passed between them and both transitory and permanent information recorded in the database for later record-keeping (transitory information can be periodically truncated).
The system is still evolving, with various components coming together. Nonetheless, the idea is that the customer or administrator enters information into a front-end tool, which, through a limited API, inserts the data into the database.
On database commit, triggers fire and queue a message for the configuration system to read. We use pg_message_queue for this, in part because it supports both NOTIFY and periodic polling; we intend to add better multiple-listener support as we need it.
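Setting pg_message_queue's own API aside, a stripped-down version of the same pattern can be written with a plain trigger and pg_notify. The table and channel names below are assumptions for illustration; note that NOTIFY is only delivered when the inserting transaction commits, which is exactly the behavior described above:

```sql
-- Minimal stand-in for the queueing trigger: once an insert commits,
-- listeners on the 'config_changes' channel are notified.
CREATE OR REPLACE FUNCTION notify_config_change() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('config_changes', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER config_change_queued
    AFTER INSERT ON config_requests
    FOR EACH ROW EXECUTE PROCEDURE notify_config_change();
```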
From there, the listener reads which portions of the system need to change, makes the changes, and on success commits the transaction that dequeued the notification. On failure, a system alert is raised and the system moves on to the next request (the failed item is returned to the queue for reprocessing on the next polling cycle).
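The commit-on-success, return-to-queue-on-failure behavior can be sketched without a database at all. The following uses an in-memory queue as a stand-in for the pg_message_queue table, and all names are illustrative rather than taken from our system:

```python
import queue

def run_polling_cycle(q, apply_change, alert):
    """One polling cycle: each queued request is attempted exactly once.

    On success the dequeue 'commits' and the item is gone; on failure an
    alert is raised and the item returns to the queue for the next cycle.
    """
    succeeded, requeued = [], []
    for _ in range(q.qsize()):          # snapshot size: one attempt per item
        item = q.get_nowait()
        try:
            apply_change(item)          # make the configuration change
            succeeded.append(item)      # success: the transaction commits
        except Exception as exc:
            alert(item, exc)            # raise a system alert
            q.put(item)                 # rollback: item returns to the queue
            requeued.append(item)       # then move on to the next request
    return succeeded, requeued
```

In the real system the "return to queue" step is simply a transaction rollback: because the dequeue and the configuration work share one transaction, a failure leaves the queue row in place for the next poll.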
What this means is that, to a large extent, this is a hands-off system. We provide configuration options, customers select them, and once everything is running, the customer can control the software configuration of their VMs within limits, but without root access. We do offer root access, but only if the customer is willing to set up the SSL key and certificate themselves (we can't give root access if our wildcard certificate is on the VM!).