Wednesday, September 5, 2012

O/R Modelling part 5: Nested Data Structures, Do's and Don'ts

One of the promises of object-relational modelling is the ability to address information modelling on complex and nested data structures.

Nested data structures bring considerable richness to the database, which is lost in a pure, flat, relational model.  They can be used to model constraints within a tuple in ways that are impossible with flat data structures, at least as long as those constraints are limited to the information in a single tuple.  At the same time there are cases where they simplify things and cases where they complicate things.  This is true both when using them for storage and when using them to interface with stored procedures.

PostgreSQL allows nested tuples to be stored in a database, as well as arrays of tuples.  Other ORDBMSs allow something similar (Informix, DB2, and Oracle all support nested tables).
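For example, here is a minimal sketch of both forms (the type and table names are purely illustrative, not part of this post's schema):

CREATE TYPE pair AS (x int, y int);

CREATE TABLE pair_example (
    id serial primary key,
    single_pair pair,    -- one nested tuple
    many_pairs  pair[]   -- an array of nested tuples
);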

Nested tables in PostgreSQL present a number of gotchas, and exposing the data in them to relational queries takes some extra work.  In this post we will look at modelling general ledger transactions using a nested table approach, along with both the benefits and the limitations of doing so.  In general this trades one set of problems for another, and it is important to recognize the problems going in.

The storage example came out of a brainstorming session I had with Marc Balmer of Micro Systems, though it is worth noting that this is not the solution they use in their products, nor is it the approach currently used by LedgerSMB.

Basic Table Structure:

The basic data schema will end up looking like this:

CREATE TABLE journal_type (
    id serial not null unique,
    label text primary key
);

CREATE TABLE account (
   id serial not null unique,
   control_code text primary key, -- account number
   description text
);

CREATE TYPE journal_line_type AS (
    account_id int,
    amount numeric
); 

CREATE TABLE journal_entry (
    id serial not null unique,
    journal_type int references journal_type(id),
    source_document_id text, -- for example, invoice number
    date_posted date not null,
    description text, 
    line_items journal_line_type[],
    PRIMARY KEY (journal_type, source_document_id)
);

This schema has a number of obvious gotchas and cannot, by itself, guarantee the sorts of things we want.  However, using object-relational modelling we can fix these in ways that we cannot in a purely relational schema.  The main problems are:

  • First, since this is a double entry model, we need a constraint ensuring that the sum of the amounts of the lines always equals zero.  However, if we just add a sum() aggregate, it will end up summing every record in the db every time we do an insert, which is not what we want.  We also want to make sure that no account_ids are null and no amounts are null.
  • Additionally, the schema above does not make it easy to expose the journal line information to purely relational tools.  We can use a VIEW to do this, though that introduces yet more problems.
  • Finally, referential integrity between the account lines and accounts cannot be enforced declaratively.  We will have to create TRIGGERs to enforce this manually.
These problems are traded off against the fact that a purely relational model cannot solve the first problem at all: we accept solutions that are a bit of a pain in exchange for having solutions at all.

Nested Table Constraints

If we simply had a tuple as a column, we could look inside the tuple with check constraints, something like check((column).subcolumn is not null).  As a quick hypothetical illustration (these names are not part of our schema):
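CREATE TYPE mailing_address AS (street text, city text);

CREATE TABLE customer (
    id serial primary key,
    address mailing_address CHECK ((address).city IS NOT NULL)
);

However, in this case we cannot do that, because we need to aggregate over the set of tuples attached to the row.  Instead we create a set of table methods for managing the constraints: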

CREATE OR REPLACE FUNCTION is_balanced(journal_entry) 
RETURNS BOOL
LANGUAGE SQL AS $$

SELECT sum(amount) = 0 FROM unnest($1.line_items);

$$;

CREATE OR REPLACE FUNCTION has_no_null_account_ids(journal_entry)
RETURNS BOOL
LANGUAGE SQL AS $$

SELECT bool_and(account_id is not null) FROM unnest($1.line_items);

$$;

CREATE OR REPLACE FUNCTION has_no_null_amounts(journal_entry)
RETURNS BOOL
LANGUAGE SQL AS $$

select bool_and(amount is not null) from unnest($1.line_items);

$$;

We can then create our constraints.  Note that because the constraints depend on these methods, they must be added after the functions are defined, and hence after the table is constructed.  I have gone ahead and given them friendly names so that errors are easier for people (and machines) to process and handle.

ALTER TABLE journal_entry 
ADD CONSTRAINT is_balanced 
CHECK ((journal_entry).is_balanced);

ALTER TABLE journal_entry
ADD CONSTRAINT has_no_null_account_ids
CHECK ((journal_entry).has_no_null_account_ids);

ALTER TABLE journal_entry
ADD CONSTRAINT has_no_null_amounts
CHECK ((journal_entry).has_no_null_amounts);

Now we have integrity constraints reaching into our nested data.

So let's test this out.

 insert into journal_type (label) values ('General');

We will re-use the account data from the previous post:

or_examples=# select * from account;
 id | control_code | description
----+--------------+-------------
  1 | 1500         | Inventory
  2 | 4500         | Sales
  3 | 5500         | Purchase
(3 rows)
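(If you are following along without that data, an equivalent setup would be:)

INSERT INTO account (control_code, description)
VALUES ('1500', 'Inventory'),
       ('4500', 'Sales'),
       ('5500', 'Purchase');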

Let's try inserting a few meaningless transactions, some of which violate our constraints:

insert into journal_entry
(journal_type, source_document_id, date_posted, description, line_items)
values 
(1, 'ref-10001', now()::date, 'This is a test',
ARRAY[row(1, 100)::journal_line_type]);

ERROR:  new row for relation "journal_entry" violates check constraint "is_balanced"

So far so good.

insert into journal_entry
(journal_type, source_document_id, date_posted, description, line_items)
values 
(1, 'ref-10001', now()::date, 'This is a test',
ARRAY[row(1, 100)::journal_line_type, 
      row(null, -100)::journal_line_type]);

 ERROR:  new row for relation "journal_entry" violates check constraint "has_no_null_account_ids"

Still good.

insert into journal_entry
(journal_type, source_document_id, date_posted, description, line_items)
values 
(1, 'ref-10001', now()::date, 'This is a test',
ARRAY[row(1, 100)::journal_line_type,
      row(2, -100)::journal_line_type,
      row(3, NULL)::journal_line_type]);

ERROR:  new row for relation "journal_entry" violates check constraint "has_no_null_amounts"

Great.  All constraints working properly.  Let's try inserting a valid row:

insert into journal_entry
(journal_type, source_document_id, date_posted, description, line_items)
values
(1, 'ref-10001', now()::date, 'This is a test',
ARRAY[row(1, 100)::journal_line_type,
      row(2, -100)::journal_line_type]);
And it works!

or_examples=# select * from journal_entry;
 id | journal_type | source_document_id | date_posted |  description   |       line_items
----+--------------+--------------------+-------------+----------------+------------------------
  5 |            1 | ref-10001          | 2012-08-23  | This is a test | {"(1,100)","(2,-100)"}
(1 row)

Break-Out Views

A second major problem we face with this schema is that if someone wants to create a report using a tool that only supports relational data, the financial details will be opaque and unavailable.  This scenario is one of the reasons why I think it is generally important to push the relational model to its breaking point before reaching for object-relational features.  Consequently, when using nested tables, I think it is important to ensure that the data in them remains available through a relational interface, in this case a view.

In this case, we may want to model debits and credits in a way which is re-usable, so we will start by creating two type methods:

CREATE OR REPLACE FUNCTION debits(journal_line_type) 
RETURNS NUMERIC
LANGUAGE SQL AS
$$ SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 
               ELSE NULL END 
$$;

CREATE OR REPLACE FUNCTION credits(journal_line_type)
RETURNS NUMERIC 
LANGUAGE SQL AS
$$ SELECT CASE WHEN $1.amount > 0 THEN $1.amount 
               ELSE NULL END
$$;

Now we can use these as virtual columns anywhere a journal_line_type is used.
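For example, a quick sanity check from psql, constructing a throwaway value of the type (for a negative amount we should see a debit of 100 and a null credit):

SELECT (jl).amount, (jl).debits, (jl).credits
FROM (SELECT row(1, -100)::journal_line_type AS jl) test;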

The view definition itself is rather convoluted, and this may impact performance.  I am waiting for the LATERAL construct to become available, which will make this easier.

CREATE VIEW journal_line_items AS
SELECT id AS journal_entry_id, (li).*, (li).debits, (li).credits
FROM (SELECT je.*, unnest(line_items) li 
        FROM journal_entry je) j;

Testing this out:

SELECT * FROM journal_line_items;

gives us

 journal_entry_id | account_id | amount | debits | credits
------------------+------------+--------+--------+---------
                5 |          1 |    100 |        |     100
                5 |          2 |   -100 |    100 |       
                6 |          1 |    200 |        |     200
                6 |          3 |   -200 |    200 |    

As you can see, this works.  Now people with purely relational tools can access the information in the nested table.
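For comparison, here is a sketch of how the same view might be written once LATERAL is available (untested, since that construct has not yet been released):

CREATE VIEW journal_line_items AS
SELECT je.id AS journal_entry_id, (li).*, (li).debits, (li).credits
  FROM journal_entry je,
       LATERAL unnest(je.line_items) li;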

In general it is almost always worth creating break-out views of this sort where nested data is stored.

Referential Integrity Controls

The final problem is that referential integrity is not a well-defined concept for nested data.  For this reason, if we value referential integrity and foreign keys are involved, we must find other ways of enforcing it.

The simplest solution is a trigger which runs on insert, update, or delete, and manages another relation which can be used as a proxy for relational integrity checks.

For example, we could:

CREATE TABLE je_account (
   je_id int references journal_entry (id),
   account_id int references account(id),
   primary key (je_id, account_id)
);

This will be a very narrow table and so should be quick to search.  It may also be useful in determining which accounts to look at for transactions, if we need to do that, and it can be used to optimize queries.
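For example, a query like this (hypothetical usage) could quickly find all journal entries touching a given account:

SELECT je.*
  FROM journal_entry je
  JOIN je_account ja ON ja.je_id = je.id
 WHERE ja.account_id = 1;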

To maintain the table, we need to recognize that a journal entry's line items will never, ever be updated or deleted.  This is due to the need to maintain clear audit controls and trails.  We may add other flags to the table later, but we can handle the insert, update, and delete conditions with a trigger, namely:

CREATE FUNCTION je_ri_management()
RETURNS TRIGGER
LANGUAGE PLPGSQL AS $$
BEGIN
    IF TG_OP ILIKE 'INSERT' THEN
         INSERT INTO je_account (je_id, account_id)
         SELECT NEW.id, account_id
         FROM unnest(NEW.line_items)
         GROUP BY account_id;

         RETURN NEW;
    ELSIF TG_OP ILIKE 'UPDATE' THEN
         IF NEW.line_items <> OLD.line_items THEN
             RAISE EXCEPTION 'Cannot update journal entry line items!';
         ELSE RETURN NEW;
         END IF;
    ELSIF TG_OP ILIKE 'DELETE' THEN
         RAISE EXCEPTION 'Cannot delete journal entries!';
    ELSE
         RAISE EXCEPTION 'Invalid TG_OP in trigger';
    END IF;
END; $$;

Then we add the trigger with:

CREATE TRIGGER je_breakout_for_ri
AFTER INSERT OR UPDATE OR DELETE
ON journal_entry
FOR EACH ROW EXECUTE PROCEDURE je_ri_management(); 

The final branch catching an invalid TG_OP could be omitted, but it is not a bad check to have.

Let's try this out:

insert into journal_entry
(journal_type, source_document_id, date_posted, description, line_items)
values
(1, 'ref-10003', now()::date, 'This is a test',
ARRAY[row(1, 200)::journal_line_type,
      row(3, -200)::journal_line_type]);

or_examples=# select * from je_account;
 je_id | account_id
-------+------------
    10 |          3
    10 |          1
(2 rows)

In this way referential integrity can be enforced.

Solution 2.0: Refactoring the above to eliminate the view

The above solution will work well for small businesses, but for larger businesses querying this data will become slow for certain kinds of reports.  Storage here is tied to one specific access pattern, and indexing is somewhat problematic.  There are ways we can address this, but they are not always optimal.  At the same time, our work is simplified because the actual accounting details are append-only.

One solution to this is to refactor the above solution.  Instead of:
  1. Main table
  2. Relational view
  3. Materialized view for referential integrity checking
we can have:
  1. Main table, with tweaked storage for line items
  2. Materialized view for RI checking and relational access
Unfortunately, this sort of refactoring after the fact isn't simple.  Typically you want to convert the journal_line_type type into a table and inherit it in your materialized view table.  You cannot simply drop and recreate the type, since the column storing the data depends on its structure.

The solution is to rename the type and create a new one in its place.  This must be done manually; there is no current capability to copy a composite type's structure into a table.  You will then need to create a cast and a cast function.  Then, when you can afford the downtime, you can convert the table to the new type.  It is quite possible that the downtime will be delayed, leaving you with an extended period in which you are half-way through migrating the structure of your database.  You can, however, create a cast between the table and the type, perhaps an implicit one (though casts are not inherited), and use it to centralize your logic.  Unfortunately this leads to duplication-related complexity, and in an ideal world it would be avoided.

However, assuming that the downtime ends up being tolerable, the resulting structures will end up such that they can be more readily optimized for a variety of workloads.   In this regard you would have a main table, most likely with line_items moved to extended storage, whose function is to model journal entries as journal entries and apply relevant constraints, and a second table which models journal entry lines as independent lines.  This also simplifies some of the constraint issues on the first table, and makes the modelling easier because we only have to look into the nested storage where we are looking at subset constraints.

This section then provides a warning regarding the use of advanced ORDBMS functionality: it is easy to get tunnel vision and create problems for the future.  The complexity cost is high enough that the primary model should generally remain relational, with features like nested storage used primarily to create constraints that cannot be effectively modelled otherwise.  This becomes a great deal more complicated where values may be updated or deleted.  Here, however, we have a relatively simple case regarding data writes, combined with complex constraints that cannot be effectively expressed in normalized, relational SQL.  Therefore the standard maintenance concerns that counsel against duplicating information may give way to the fact that such duplication allows for richer constraints.

Now, if we had been aware of the problems going in we would have chosen this structure all along.  Our design would have been:

CREATE TABLE journal_line ( -- a table, not a bare type, so it can carry keys
    entry_id bigserial primary key, -- only possible key
    je_id int not null,
    account_id int,
    amount numeric
);

After creating the journal entry table we'd:

ALTER TABLE journal_line ADD FOREIGN KEY (je_id) REFERENCES journal_entry(id);

If we have to handle purging old data we can make that key ON DELETE CASCADE.
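In that case, the declaration above would instead read:

ALTER TABLE journal_line
ADD FOREIGN KEY (je_id)
REFERENCES journal_entry(id) ON DELETE CASCADE;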

The line items would then be stored as an array of this table's row type instead.  We can then get rid of all the constraints and their supporting functions other than is_balanced.  Our debit and credit functions also reference this type.  Our trigger then looks like:

CREATE FUNCTION je_ri_management()
RETURNS TRIGGER
LANGUAGE PLPGSQL AS $$
BEGIN
    IF TG_OP ILIKE 'INSERT' THEN
         INSERT INTO journal_line (je_id, account_id, amount)
         SELECT NEW.id, account_id, amount
         FROM unnest(NEW.line_items);

         RETURN NEW;
    ELSIF TG_OP ILIKE 'UPDATE' THEN
          RAISE EXCEPTION 'Cannot update journal entry line items!';
    ELSIF TG_OP ILIKE 'DELETE' THEN
         RAISE EXCEPTION 'Cannot delete journal entries!';
    ELSE
         RAISE EXCEPTION 'Invalid TG_OP in trigger';
    END IF;
END; $$;

Approval workflows can be handled with a separate status table with its own constraints.  Deletions of old information (up to a specific snapshot) can be handled by a stored procedure which is unit tested and disables this trigger before purging data.  This system has the advantage of having several small components which are all complete and easily understood, and it is made possible because the data is exclusively append-only.
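A minimal sketch of such a purge routine, assuming snapshots are cut off by date_posted (the function name and cutoff logic are illustrative only, and the foreign key here is assumed not to cascade):

CREATE OR REPLACE FUNCTION purge_journal_entries(in_cutoff date)
RETURNS void
LANGUAGE PLPGSQL AS $$
BEGIN
    -- the audit trigger forbids deletes, so disable it for the purge
    ALTER TABLE journal_entry DISABLE TRIGGER je_breakout_for_ri;
    DELETE FROM journal_line
     WHERE je_id IN (SELECT id FROM journal_entry
                      WHERE date_posted < in_cutoff);
    DELETE FROM journal_entry WHERE date_posted < in_cutoff;
    ALTER TABLE journal_entry ENABLE TRIGGER je_breakout_for_ri;
END; $$;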

As you can see from the above examples, nested data structures greatly complicate the data model and create problems with relational math that must be addressed if the data logic is to remain meaningful.  This is a complex field, and it adds a lot of complexity to storage.  In general, nested structures are best avoided in actual data storage except where they make formerly insurmountable problems manageable.  Moreover, they complicate optimization once data gets large.  Thus, while non-atomic fields can make sense as an initial point of entry in some narrow cases, as a point of actual query they are very rarely the right approach.  It is possible that nested storage will some day have its own indexes, foreign keys, and so forth, but I cannot imagine this being a high priority, so it isn't clear that this will ever happen.  In general it usually makes the most sense to store the data in a pseudo-normalized way, with any non-1NF designs serving as the initial point of entry in a linear write model.

Nested Data Structures as Interfaces

Nested data structures as interfaces to stored procedures are a little more manageable.  The main difficulties are in application-side data construction and output parsing.  Some languages handle this more easily than others.

Upper-level construction and handling of these structures is relatively straightforward on the database side and poses none of these problems.  However, they do add complexity, and this must be managed carefully.

The biggest issue when interfacing with an application is that ROW types are usually not automatically constructed by application-level frameworks, even ones that handle arrays.  This leaves the programmer to choose between unstructured text arrays, which are fundamentally non-discoverable (and thus brittle), and arrays of tuples, which are discoverable but require a lot of additional application code to handle.  There is also a chicken-and-egg problem: frameworks will not add handling for these structures unless people are already trying to use them.
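To make the first option concrete, an application passing line items as unstructured text must build a literal like this, and hope the structure never changes:

SELECT '{"(1,100)","(2,-100)"}'::journal_line_type[];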

So my general recommendation is to use nested data types in the database sparingly, only where the benefits clearly outweigh the complexity costs.

Complexity costs are certainly lower at the interface level, and there are many more cases where these techniques are net wins there, but that does not mean they should be used routinely even there.
