One of the key features of LedgerSMB 1.3 has been the move from the custom templating solution of LedgerSMB 1.2 (inherited from SQL-Ledger) to Template Toolkit (TT). This gives us more flexibility and better cross-language support, since TT has been implemented for other languages as well. In practice, however, to preserve these benefits and avoid merging business and presentation logic too closely, we avoid some features of TT, such as embedded Perl. We also take some specific approaches to how we invoke TT, in order to avoid problems and help ensure standards compliance to the best of our ability.
The big problem one runs into is TT's default use of percent signs in its tag syntax ([% ... %]), since percent signs denote comments in LaTeX. Leaving them in leads to parsing problems when testing templates. Since we wanted to re-use the same template logic for both the UI and printed invoices, it became clear the percent signs would have to go.
We settled on opening the tag with <?lsmb and closing it with ?>. This re-uses the SGML-standard "processing instruction" tag syntax, ensuring that validation can be run on the templates themselves.
A second thing we did was to add some template-specific functions, which handle internationalization and number formatting. Each of these has two forms: a general form that uses the language or formatting preferences associated with the template, and a specific form, where the localization settings are spelled out explicitly in the template. This provides a consistent templating API for presentation of data, without mixing object behavior into the template layer itself.
The reason for these two forms is that there are cases where number formatting must be fixed in the template (such as when a check is printed), and others where user-specific settings should be used. Similarly, we usually want to specify a primary language for a template, but might want to translate strings into a second language as well (for example, for bilingual receipts).
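To make this concrete, here is a sketch of what a fragment using this syntax might look like. The function names text and format_number, and the named parameters, are hypothetical stand-ins for the template API rather than the exact names LedgerSMB uses:

<?lsmb FOREACH line IN invoice.lines ?>
<?lsmb line.description ?> & <?lsmb format_number(line.amount) ?> \\
<?lsmb END ?>
<?lsmb text('Total') ?>: <?lsmb format_number(invoice.total) ?> \\
<?lsmb text('Total', 'language' => 'ms') ?>: <?lsmb format_number(invoice.total, 'format' => '1.000,00') ?> \\

The bare calls fall back to the template's or user's settings, while the calls with explicit arguments pin the language or number format in the template itself, as one would for a bilingual receipt or a printed check.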
This gives us a general presentation API largely based on TT, but with differences which enable us to do whatever we need to do. We had to rip out the old pagebreak handling and move to LaTeX longtables instead, which is probably a win anyway.
However, one surprising use has come up: the ability to re-use the templating system in preparing database upgrade scripts which are then passed along to psql.
Saturday, December 3, 2011
MYGOSSCON Observations, and Notes on Open Source in South-East Asia
This week, I attended the Malaysian Government Open Source Software Convention (MYGOSSCON), promoting PostgreSQL and LedgerSMB to both public and private sector entities. We had a simple booth, and approximately 100 people stopped by over the course of the two days. We also talked with a wide range of other exhibitors and came away with the sense that we can expect great things for Open Source in South-East Asia.
It was immediately clear that PostgreSQL was a star technology at the conference, with a strong showing by EnterpriseDB and EnterpriseDB resellers. We were the only project specializing in the "community edition," as EnterpriseDB calls it. A lot of people stopped by to talk about PostgreSQL and share success stories, and we answered questions about the differences between Oracle and PostgreSQL, between PostgreSQL and MySQL, and between EnterpriseDB and the official release version of PostgreSQL (I refuse to call it the "community version"). I had been told there would be few attendees who knew databases, and while there were only a few database experts at the conference (which was to be expected), there were a lot of people with a solid working knowledge of database systems generally, and lots of well-thought-out, valid questions.
Indeed, people would see the other signs at our booth and keep walking... until they saw the big yellow banner with the blue, white, and black PostgreSQL logo on it, and then walk right over. We got questions I knew the answers to, questions where my answers were slightly out of date (areas of Pg vs. Oracle), and questions I had not really prepared for at all (such as Arabic support in PostgreSQL). Unfortunately, with the internet connections at the conference overloaded, I had to do the best I could and promise follow-up as necessary.
One typically goes to trade shows expecting to build awareness for one's products and services with the idea that this will grow the market later. In this case, however, we walked away with companies interested in hosting LedgerSMB, possible maintainers for other language translations, and a couple of actual, actionable sales leads. We were extremely happy with the way things went.
However, in addition to talking with civil servants and educators, an important undercurrent emerged in my discussions with purely private sector individuals (often other exhibitors): open source is growing rapidly here in South-East Asia, particularly in the public sector. The major limiting factor appears to be the supply of individuals with relevant expertise and experience. This is not to say that such people aren't available, but rather that demand significantly exceeds supply, which is common in growing markets. Nor is this just what we experienced first-hand, driven by entities trying to reduce the amount they spend (primarily to Oracle); it is part of a much larger trend among governments in the region.
As I typically try to point out when advocating open source, the decision to prioritize open source isn't just about money. Rather, the different dynamics surrounding money lead to a number of important benefits, including a larger percentage of a project's budget going into the local economy, and the added flexibility that comes from not having to worry about acquiring software licenses in order to put data to a new use. This leads to local power, both economically and in terms of decision-making. According to other consultants, this is not limited to the Malaysian government; other governments in the region are increasing their commitment to open source software as well.
All around South-East Asia, there is a sense that open source will be a major economic asset to the region and something to be cultivated. It is a good time to be in open source over here, but there has never been a better time (and probably never will be a better time) to be in open source databases. PostgreSQL, in particular, is poised to start penetrating Oracle's home market, with projects like GridSQL and Postgres-XC reaching maturity. I believe we are at the point where no corner of the market is safe any more, and where the adoption of an open source lifestyle by businesses and governments will become the norm rather than the exception.
I think that last sentence follows quite directly from open source database systems coming of age. The large, multi-application database systems (DB2, Oracle, MS SQL, Teradata) represent the last major piece of IT infrastructure for which open source solutions were inadequate. I say multi-application to differentiate from the "app-stack" approaches familiar to LAMP developers: here we are talking about heavily engineered databases designed to have a significant number of applications, each solving different problems, manipulating their data. As this piece of infrastructure begins to fall to open source, all the back-end software can be open source or in-house. At that point, one has to ask whether one should simply bite the bullet and start migrating the front-end (desktop/laptop) devices as well. Pushing Linux-based desktops in government and business would almost certainly have a cascading effect on consumer systems as well.
These are exciting times indeed.
Tuesday, November 22, 2011
LedgerSMB (and PostgreSQL) at MYGOSSCON
Metatron Technology Consulting has rented a booth at the Malaysia Government Open Source Software Convention (MYGOSSCON) 2011. The booth will be used to advocate PostgreSQL and LedgerSMB to both the public and private sectors.
Anyone going to this convention should look for us!
Saturday, November 12, 2011
The Case for Intelligent Databases
LedgerSMB, as an ERP and accounting system, has over time begun to move towards an intelligent database design based on PostgreSQL. We believe that this best matches our long-term goals for the project, especially because interoperability is a very important requirement for any ERP system. Read more to see our general case in this area and see what other projects may be able to learn from our experience.
A Short Developer's Definition of Intelligent Databases
An intelligent database is a database encapsulated behind a defined API.
A Longer Description
While most programmers use the database primarily as a place to simply store data, this reduces it to an ACID-compliant data store with some reporting capability. This approach saves the programmer from worrying about advanced database functionality, but it also prevents the database system from being used for a number of useful tasks it could really be leveraged to do.
This post explores why LedgerSMB is going in this direction, the sorts of functionality (particularly regarding PostgreSQL) that are part of this plan, what tradeoffs we see in it, and where we expect to benefit. This is not an article that discusses the details of implementation (see previous posts for that), but rather a high-level look at the how and why.
The Toolbox
The fundamental idea of encapsulating a database inside a defined API is that the application should, if at all possible, hit the API and not the underlying relations where the data is stored. The API acts as an interface between actual storage/retrieval and the application.
The major components of such an API can include views and/or stored procedures. We use stored procedures for most things, but may add views for some sorts of objects as time goes on.
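A minimal sketch of the idea, with illustrative names rather than the actual LedgerSMB schema, might look like this; the application calls customer_save and never touches the customer table directly:

CREATE TABLE customer (
    id serial PRIMARY KEY,
    name text NOT NULL,
    credit_limit numeric NOT NULL DEFAULT 0
);

-- The defined API: the application calls this, not INSERT on the table.
CREATE FUNCTION customer_save(in_name text, in_credit_limit numeric)
RETURNS integer AS
$$
    INSERT INTO customer (name, credit_limit)
    VALUES (in_name, in_credit_limit)
    RETURNING id;
$$ LANGUAGE sql;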
What do Intelligent Databases Cost?
Many current development environments seek to save the programmer from spending heavy engineering time on a database, particularly in the early stages of a project where requirements can change. The database is relegated to a simple data store with ACID compliance. Frameworks like Rails will even generate a database schema from programming data structures.
If one goes with an intelligent design for the database, one cannot simply evolve the schema with agile programming methodologies. Instead the database requires some real engineering in its own right. This takes time and effort and can slow down development compared to ORM-based code generators such as we see in Rails.
Such engineering requires taking a clear look at what data is expected to be collected and what internal structure it has, designing the database storage first and foremost around that analysis, and then building an API based on how it will be used. For simple schemas this doesn't take long, but the complexity grows with the data collected.
In the LedgerSMB project this is not the end of the world, since we are re-engineering the database incrementally: each major branch includes at least a few areas where old db structures have been pulled out and new, better-engineered ones added. These areas then have an API, and the application generally hits the API, not the relations. Old code areas may hit either.
Other projects may find that this adds significant development time beyond what is reasonable, but those costs must still be weighed against the benefits below.
What benefits do Intelligent Databases Provide?
Intelligent databases offer a number of specific benefits that LedgerSMB tries to capitalize on. The fact that these designs better match database theory is not enough; tangible benefits were required for us to reach this decision.
Some of these benefits reduce the time spent troubleshooting and debugging code, partly offsetting the additional investment mentioned above. Others provide compelling features to third-party developers who may want to integrate our software with theirs.
The single most important benefit is clarity. First, with very few exceptions, LedgerSMB tries to keep to one language per file. This means that database queries are in .sql files, and Perl is in .pl and .pm files. The modules which do query mapping are the only Perl modules that contain both Perl and SQL. In this way, if I want to know what a query does, I can go to the relevant SQL file and read it. Using features like named inputs in PL/PGSQL, even 100-line queries can be remarkably readable. The API is also well defined, allowing us to troubleshoot issues with the query writer.
Clarity also means that performance problems can more easily be reviewed and addressed without getting into messy questions of ORMs and how they generate queries. The queries that produce a specified result set from an API call can be defined and the actual function written out in SQL.
Along with clarity, we get the ability to do as much as possible with declarative constraints. This provides, we believe, greater robustness in the application and therefore better software in the long run.
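For instance, a declarative constraint like the following (again an illustrative schema, building on the sketch above) catches bad data regardless of which application wrote it:

CREATE TABLE invoice (
    id serial PRIMARY KEY,
    customer_id integer NOT NULL REFERENCES customer (id),
    amount numeric NOT NULL CHECK (amount >= 0)
);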
The second important benefit is in the area of integration. Because the API is defined, there are specified integration points that other applications can use without worrying as much about whether corner cases are properly covered. This makes integration more robust. Moreover, features like LISTEN and NOTIFY allow other applications to integrate in ways approximating message queues, and the like.
In fact, the integration benefit was the driving force behind the decision to move to an intelligent database design. We wanted to open up the application to add-ons written in other languages (Python at the time, but others could be supported without too much effort).
Addressing Separation of Concerns
We have all heard horror stories of what happens when you put everything in the database. We have also heard of developers afraid to put anything in the database for fear of going down the road of those horror stories. The secret to success here is in fact to separate concerns, and to separate them well. Many of these lines are arbitrary in their details but bright in outline, and so they require a team dedicated to making decisions carefully rather than drawing bold lines and mindlessly following them.
Of course that's a long way of saying you can't get away with sloppy engineering and have to actually put some thought into things.
Other Options
There are other options for application development. However, they all presume that the application has intimate knowledge of the internals of the database. I am somewhat puzzled that programmers who are so careful to keep their modules from understanding each other's internals are so careless about encapsulation in the database.
The main alternative is the use of an ORM, which typically involves designing the database around the ORM and the application's data model; the RDBMS thus becomes basically an extension of the object model. This limits interoperability and reduces the role the database can play with respect to other applications. That being said, there isn't necessarily a problem with using VIEWs and ORMs together to provide this sort of abstraction.
A second possibility is to use a dedicated object store, as many NoSQL engines offer, where the database really is a store for objects and where the "schema" (in scare quotes because NoSQL usually doesn't really have a schema concept) is just a direct extension of the object model itself.
The tradeoff with NoSQL is similar to building your database around an ORM, but more extreme. Typically you see even more rapid development early on, possibly combined with additional scalability in some cases, but the interoperability tradeoff is more severe, and ad hoc reporting is generally not practical outside the relational model.
Because of our focus on interoperability and our reliance on good, flexible reporting, these options were not viable for our project. They might be for others, though.
Conclusions
LedgerSMB has benefited greatly from the decision to move towards an intelligent database layout. Reasons include our focus on interoperability and our willingness to do engineering up front. Other projects may find their mileage varying.
Saturday, November 5, 2011
Elegance and Simplicity
When it comes to software engineering, I always think back to my first-year calculus teacher, Mike Lavender, and his admonishment that "in mathematics, power tools are considered inelegant where hand tools will do." Words to live by in programming and engineering.
So I'd go so far as to redefine elegance in programming and software engineering as "simplicity perfected." The simple fact is that where requirements change, a well-engineered simple solution will usually turn out to be more agile than a heavily engineered, complex one (whether built with "agile" methodologies or not).
When one takes the idea of trying to perfect simplicity in the art of programming (and software engineering), it becomes easier to recognize foot-guns and inherent problems, and tradeoffs often become more conscious and controlled.
Wednesday, November 2, 2011
10 Underrated Features of PostgreSQL
Here is a list of features which I think are generally underrated in PostgreSQL, and why. Many, though not all, of these are being used in the development version of LedgerSMB, and more will likely be used as time goes on, either by the main software or by add-ons.
These are features which I greatly miss when I sit down and work with many other RDBMS's, or which are fairly new and often underutilized.
10: Language-Specific Full Text Searches
What it is: Language-specific searches on related words.
Why I like it: This allows a full text search to match related words in a text. These searches can be backed by functional indexes (see below) as long as the system knows which language to search under. So imagine you store a text field that says "I am going to test this for the customer next week"; when you later search for "testing for customer", the result will still show up, because PostgreSQL knows that in English, "test" and "testing" can be two forms of the same word.
How I'd use it: Any application which requires full text searches can take advantage of this to provide a much richer experience.
The LedgerSMB budget module uses full text searches for the descriptions of budgets. Currently this is locale-specific. Future versions will probably allow notes to be in various languages, searched via a per-language full text search.
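A minimal sketch of the example above, using the built-in English search configuration (the notes table is hypothetical):

SELECT to_tsvector('english', 'I am going to test this for the customer next week')
       @@ to_tsquery('english', 'testing & customer');  -- returns true

-- Backed by a functional index (see #8 below) so searches stay fast:
CREATE INDEX notes_fts_idx ON notes USING gin (to_tsvector('english', note));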
9: Pgcrypto
What it is: Cryptographic extensions to PostgreSQL
Why I like it: With pgcrypto, one can design systems which provide strong protection of data in the database, making the encryption transparent to the application.
How I'd use it: If someone wanted to store, say, credit card data in the database, this could be installed using suitable forms of managed keys such that the user could unlock a key and retrieve the data. This could then be handled such that the application itself would not need to handle encryption or decryption of the data, thus allowing other applications to handle key management and other important tasks.
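A sketch using pgcrypto's symmetric-key functions (the table is hypothetical, and a real deployment would manage keys far more carefully than a literal passphrase in a query):

CREATE TABLE card_vault (
    id serial PRIMARY KEY,
    card_number bytea NOT NULL
);

-- Encrypt on the way in...
INSERT INTO card_vault (card_number)
VALUES (pgp_sym_encrypt('4111111111111111', 'secret-passphrase'));

-- ...and decrypt only when the key is presented.
SELECT pgp_sym_decrypt(card_number, 'secret-passphrase') FROM card_vault;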
8: Functional Indexes
What it is: Indexes on the outputs of immutable functions
Why I like it: Functional indexes let us index information that is functionally dependent on the information actually stored in the database, so we don't have to store derived values in a column just because we need to search on them. This is available in Oracle and, to a far lesser extent, in MS SQL and DB2. It is not available in MySQL.
How I'd use it: Functional indexes are behind language-dependent full-text indexing (and hence can make searches there a lot faster). They also have a large number of other uses.
For example, suppose I am storing a large bytea column and I want to search quickly for matching values. I could index md5(my_bytea) and then search on input = my_bytea and md5(input) = md5(my_bytea). This allows an index scan to pull out rows with matching md5's, then checks the full value against those records. In many other databases, one would have to store both the original data and the md5 checksum.
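A sketch of that technique (my_table and my_bytea are placeholder names; $1 is the searched-for value):

CREATE INDEX my_bytea_md5_idx ON my_table (md5(my_bytea));

-- The md5() comparison can use the index; the direct comparison
-- then rules out the (unlikely) possibility of a hash collision.
SELECT * FROM my_table
 WHERE md5(my_bytea) = md5($1) AND my_bytea = $1;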
Another use would be to ensure uniqueness based on function output. So for example, we could:
CREATE UNIQUE INDEX u_functional_index ON my_table (lower(my_col));
Values are then rejected if they are not unique under a case-insensitive comparison.
7: Partial Indexes
What it is: Indexes on a subset of rows.
Why I like it: Partial indexes allow the development of relatively complex data constraints. For example, one can require a unique constraint on the subset of rows matching certain criteria, since uniqueness is enforced on the index. Of the other major RDBMS's, only Microsoft SQL Server implements anything like this.
How I'd use it: Partial indexes can be used in two fundamentally different ways: speeding up common search criteria and enforcing partial unique constraints. Many are familiar with speeding up searches in this way, but partial unique indexes are where the real power lies.
So suppose ar and ap invoices share the same physical table, but ar invoices must have unique invoice numbers while ap invoices must be unique per vendor. We can use partial unique indexes to support both these constraints independently.
CREATE UNIQUE INDEX u_ar_invnumber ON invoices (invnumber) WHERE type = 'ar';
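The AP side of the pair might then look like this (assuming, for illustration, a vendor_id column on the same table):

CREATE UNIQUE INDEX u_ap_invnumber ON invoices (vendor_id, invnumber) WHERE type = 'ap';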
A second important application is adding forward-only unique constraints. These are helpful where a new unique constraint needs to be added immediately but not all existing data in the table conforms to it. As a stop-gap measure (while data is being cleaned up), one might use a partial unique index to enforce uniqueness going forward. Something like:
CREATE UNIQUE INDEX u_ar_invnumber ON invoices (invnumber) WHERE type = 'ar' AND id > 12345;
Data can be cleaned up (if possible), then the index can be dropped and recreated with a different where clause.
6: Third Party Solutions
What it is: An extensive set of third party add-ons for PostgreSQL
Why I like it: There's so much to choose from!
A few of my favorites: pgTAP is a test harness for stored procedures, building on other test-harness frameworks. It allows you to write production-safe test cases and run them without risking your data. Very, very good.
Slony-I is a well-developed single-master, asynchronous replication system for PostgreSQL. It was developed to help ensure the .org domain registrar would scale appropriately. Slony-I supports, among other things, replicating between different versions of PostgreSQL, something the native solutions do not.
Speaking of replication, Bucardo allows you to do multi-master and master-slave asynchronous replication as well.
PostGIS provides an excellent basis for GIS applications, and there are many, many other third-party, free software add-ons for PostgreSQL. Uses are largely limited by the imagination.
5: Language Handles
What it is: Define your own stored procedure languages!
Why I like it: Often makes migration more feasible than it would otherwise be, and exposes much advanced functionality to stored procedure interfaces.
How I'd use it: There are three major uses of stored procedure handles (i.e. to create stored procedures in languages other than C, SQL, and PL/PGSQL).
The first of these is migration. If one has an Oracle database which uses a lot of Java stored procedures you can often port these to PostgreSQL and PL/J with a minimum of headaches. Similarly if you have an MS SQL database with .Net stored procedures you may be able to make these work with PL/Mono.
The second is to make advanced features of a language available to your stored procedures. If you are doing a lot of text processing, PL/Perl may be helpful, but if you are doing a lot of statistical calculations, PL/R may be very helpful.
Finally (and here is the dangerous one), language handles can make it possible to do very un-database-like things from within stored procedures. So for example, you could use PL/SH to execute shell scripts from the stored procedures. However in general for these areas, external scripts and LISTEN/NOTIFY (see below) are better tools.
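As a sketch, here is a PL/Perl function putting Perl's text handling to work (on PostgreSQL 9.1 this assumes CREATE EXTENSION plperl has been run; older versions use createlang):

CREATE FUNCTION word_count(txt text) RETURNS integer AS $$
    my ($txt) = @_;
    # Perl's text facilities are available where SQL would be clumsy.
    my @words = split /\s+/, $txt;
    return scalar @words;
$$ LANGUAGE plperl;

SELECT word_count('the quick brown fox');  -- returns 4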
4: Complex Types
What it is: Tuple-based types, often used as function output types.
Why I like it: These provide an opportunity to create a very rich API within the db. PostgreSQL supports not only pure tuples but tuples which contain arrays of other tuples as their members.
How I'd use it: In LedgerSMB, we use complex types to define much of our interface to the application, and this reliance will increase over time. These types allow one to represent in SQL any well-structured representation of data from another application.
These can then be used to define inputs and outputs of stored procedures.
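A sketch of the pattern (illustrative type names, not the actual LedgerSMB interface):

CREATE TYPE journal_line AS (
    account_id integer,
    amount numeric
);

CREATE TYPE journal_entry AS (
    id integer,
    description text,
    lines journal_line[]  -- a tuple containing an array of other tuples
);

-- Stored procedures can then declare journal_entry as an argument
-- or return type, giving the API a well-structured shape.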
3: Listen/Notify
What it is: Notification of specified database events.
Why I like it: This provides a notification service within the database. Applications listen on a specified notification channel, and other functions can raise those notifications. This allows one to create message queues (similar to Oracle's Database Notification Services), but that's not all.
Because the notifications are decoupled from low-level database operations, one can raise them based on arbitrary criteria. Suppose we want to raise a notification any time the postgres user deletes data from the audit trail? We can do that. No problem. Similarly, we could raise notifications on any other criteria we choose.
How I'd use it: You can use LISTEN/NOTIFY to tell a script that information has landed in a table and is ready to be emailed to a decision-maker. For example, you could provide real-time emailing when a part's on-hand value drops below its reorder point. In this way, all manner of message queues can be created with the database at the center.
On top of that, one can send notifications about things that do not modify the database. I don't have a specific use case for this yet, but the fact that it is there means that many other things suddenly become possible.
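A sketch of the audit-trail example above (table, trigger, and channel names are illustrative):

CREATE FUNCTION notify_audit_delete() RETURNS trigger AS
$$
BEGIN
    PERFORM pg_notify('audit_alert', 'rows deleted from audit_trail');
    RETURN NULL;  -- return value is ignored for statement-level AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_delete_notice
    AFTER DELETE ON audit_trail
    FOR EACH STATEMENT EXECUTE PROCEDURE notify_audit_delete();

-- A monitoring script connects and simply runs: LISTEN audit_alert;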
2: Windowing Functions
What it is: The ability to have an aggregate function computed incrementally over a set or a portion of a set. Although most major RDBMS's these days support windowing functions (DB2, Oracle, and MS SQL all do, though MySQL does not), this is a feature which is often overlooked.
Why I like it: This allows you to specify aggregates which are run incrementally across a set. This set can be partitioned and ordered independent of any ordering on the final set.
How I'd use it: There are huge numbers of uses for this sort of thing. For example, windowing functions make it possible to select the highest-ranked rows within a set by certain criteria. They also allow one to add running totals to financial reports, or running totals within specific transactions.
No doubt there are many more uses I just haven't thought of or come across. However, once you start thinking in terms of windowing functions, the range of queries you can write expands considerably.
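For instance, a running balance per account might be written like this sketch (illustrative table and column names):

SELECT account_id, entry_date, amount,
       sum(amount) OVER (PARTITION BY account_id
                         ORDER BY entry_date, id) AS running_balance
  FROM ledger_lines
 ORDER BY account_id, entry_date, id;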
1: PL/PGSQL
What it is: A very clear, concise, well-organized language for stored procedures. Borrowing heavily in approach from Oracle's PL/SQL, this language is a mixture of Ada, PL/I, and SQL. The result, while different from what most programmers are used to, is highly readable, easy to follow, and extremely powerful.
Why I like it: PL/PGSQL adds almost everything you could want that vanilla SQL does not offer in a procedural language: conditionals, loops, and named variables all come to mind. But what it really has going for it is conciseness. Oftentimes I hear people say they are "just" doing stored procedures in PL/PGSQL. There is no "just" about it, though. This language is very well designed for what it does.
How I'd use it: In LedgerSMB, most of the SQL in the newer code lives inside stored procedures written in PL/PGSQL. We use it for most procedures with large numbers of arguments or wherever the control structures are at all helpful. In essence, we use it a LOT.
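A small sketch showing the named arguments and control structures mentioned above (illustrative names, not LedgerSMB code):

CREATE FUNCTION discount_for(in_customer_id integer)
RETURNS numeric AS
$$
DECLARE
    total numeric;
BEGIN
    SELECT sum(amount) INTO total
      FROM invoice
     WHERE customer_id = in_customer_id;

    IF total IS NULL THEN
        RETURN 0;
    ELSIF total > 10000 THEN
        RETURN 0.05;  -- bigger customers get a bigger discount
    ELSE
        RETURN 0.01;
    END IF;
END;
$$ LANGUAGE plpgsql;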
The above list is largely arbitrary. Another author might put in the fact that data definition language calls are usually transaction-safe (meaning you can roll back a transaction that dropped a table), incremental backups, or any number of other features. However the features that I have included above are generally those which allow PostgreSQL to be used as a very extensible database server. It is my view that these features, which are at their best when creating intelligent databases, are frequently under-used especially by small developers because they think of PostgreSQL as merely a data store instead of the very powerful system that it is.
What features do you think are underrated? Comment below to mention them!
Tuesday, October 18, 2011
The Design of LedgerSMB 1.3's Authentication/Authorization/Auditing System
LedgerSMB 1.3 and higher uses database roles for group/user permissions management. Every user is a database user, and permissions are managed by roles which are granted permissions on a functional requirements basis to underlying database objects (tables and functions).
In doing so, we can take advantage of a number of pieces of advanced PostgreSQL functionality to provide a much richer and more secure application environment. At the same time there are limitations with this system which must be taken into account as well.
Basic Design Requirements
The LedgerSMB Authentication, Authorization, and Auditing system was designed to offer real security (even though the older codebase inherited from SQL-Ledger did not admit of any real security controls), to offer an extensible set of authentication options, and to allow security permissions to be applied flexibly and consistently across several databases in a cluster.
The basic use cases are as follows:
- A single business will only have one database. The system shouldn't be unmanageable for such a business.
- An accountant/bookkeeper might access books for several businesses, and would need consistent access across databases.
- A midsize business with several subsidiaries might need some people to have access across databases, and others to have permission only to specific databases. Such a business might also need some sort of single sign-on technology, or even integrated authentication.
Overview of Design
The current design uses PostgreSQL roles for managing permissions. On the most basic level, every user is a database user. Roles with the nologin attribute set are used to assign permissions to the underlying relations and functions. Generally, the simplest permissions are favored over the more complex. This means that operations that correspond exactly with relational operations are granted relational permissions, while those that do not are granted functional permissions.
Some of these roles are granted to other no-login roles to serve as larger aggregations of permissions. Users may be granted roles of either sort. A set of administrative functions is provided by the software to manage user accounts, permissions, and password resets (where applicable).
Obstacle 1: Roles are global to a cluster
This is both an obstacle and a useful tool. It allows users to be consistent across databases, but it also poses a challenge for larger database clusters, because the roles which grant specific permissions end up being cluster-global as well.
We resolved this by prefixing the db-specific roles with lsmb_[database_name]__ (note the double underscore after the database name).
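A sketch of the resulting pattern (role, database, and table names are illustrative, not the actual LedgerSMB grants):

-- A cluster-global login role for the user:
CREATE ROLE alice LOGIN PASSWORD 'changeme';

-- A database-specific, no-login permissions role:
CREATE ROLE lsmb_mycompany__ar_invoice_create NOLOGIN;
GRANT INSERT ON ar_invoices TO lsmb_mycompany__ar_invoice_create;

-- Granted to the user directly, or via a larger aggregate role:
GRANT lsmb_mycompany__ar_invoice_create TO alice;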
Obstacle 2: Authentication methods
Authentication necessarily requires a few specific pieces of information to be passed to the database server, and these must be obtained from the user. The first is a username, and the second is some sort of authentication token. Any intermediary framework must then be able to pass these credentials on to the database in such a way as to properly log in. This can consist of three basic types:
The other two break down into two questions:
On the web server, for password authentication, the re-use requirement means that the only possible method for authentication is HTTP-Basic.
On the database server, client-side SSL certificates can only be used to authenticate the web server to the database server. They cannot be used to authenticate users, and so are not a possible choice for database authentication for a web app of this sort.
Security Tradeoffs
Security, it could be said, is an endless series of tradeoffs. These tradeoffs not only include usability vs. security, but also how the mitigation of different concerns is prioritized.
This approach prioritizes mitigating SQL injection and privilege escalation issues in the web application above possible flaws in the PostgreSQL authentication mechanisms and misconfiguration of the web server.
We believe that there is general benefit in pushing authentication back to methods which are better tested and have more peer review. The PostgreSQL community has been, in our experience, extremely professional regarding security, further raising our confidence. So this first trade-off is easily justifiable.
The second tradeoff is a little harder to justify at the outset. If passwords must be re-used by the web server, zero-knowledge authentication systems cannot work. This means the password must be passed in clear text to the web server. This leads to several kinds of attacks, including malicious admins logging passwords, use of the web server without SSL allowing passwords to be overheard by an eavesdropper. We address the second type of attack by requiring SSL in the default configuration of LedgerSMB.
The issues with malicious system administrators can never be completely mitigated using password authentication. If this is a consideration, it's best to look to something like Kerberos (which could be supported on 1.3 with a little bit of effort). With Kerberos support, the web application, the database, and the user can all maintain a healthy distrust of eachother.
Other Implementation Details
A few other basic things are worth mentioning here.
The first is that PostgreSQL allows passwords to expire. We make use of this to differentiate between user-set passwords (valid for a configurable length of time) and administrative set password (valid for one day). Users can reset their own passwords.
Administrators of the application can issue temporary password resets. The way this works is that the administator makes up a password for the user, and the user logs in, and is prompted to change it.
In doing so, we can take advantage of a number of pieces of advanced PostgreSQL functionality to provide a much richer and more secure application environment. At the same time there are limitations with this system which must be taken into account as well.
Basic Design Requirements
The LedgerSMB Authentication, Authorization, and Auditing system was designed to offer real security even though the older codebase inherited from SQL-Ledger did not admit of any real security controls, to offer an extensible set of authentication options, and to allow security permissions to be applied flexibly and consistently across several databases in a cluster.
The basic use cases are as follows:
- A single business will only have one database. The system shouldn't be unmanageable for such a business.
- An accountant/bookkeeper might access books for several businesses, and would need consistent access across databases.
- A midsize business with several subsidiaries might need some people to have access across databases, and others to have permission only to specific databases. Such a business might also need some sort of single sign-on technology, or even integrated authentication.
Overview of Design
The current design uses PostgreSQL roles for managing permissions. On the most basic level, every user is a database user. Roles with the NOLOGIN attribute set are used to assign permissions to underlying relations and functions. Generally, the simplest permissions are favored over the more complex. This means that operations which correspond exactly to relational operations are granted relational permissions, while those that do not are granted functional permissions.
Some of these roles are granted to other no-login roles to serve as larger aggregations of permissions. Users may be granted either set of roles. A set of administrative functions is provided by the software to manage user accounts, permissions, and password resets (if applicable).
Obstacle 1: Roles are global to a cluster
This is both an obstacle and a useful tool. It allows users to be consistent across databases, but it also poses a challenge for larger database clusters, because the roles which grant specific permissions end up being cluster-global as well.
We resolved this by prefixing the database-specific roles with lsmb_[database_name]__ (note the double underscore after the database name).
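For example, in a database named mycompany, a permission role and its grants might look something like this sketch (the role, table, and user names here are illustrative, not the exact ones LedgerSMB creates):

CREATE ROLE lsmb_mycompany__read_contacts WITH NOLOGIN;
GRANT SELECT ON entity TO lsmb_mycompany__read_contacts;
-- aggregate roles are themselves no-login roles
CREATE ROLE lsmb_mycompany__contact_manager WITH NOLOGIN;
GRANT lsmb_mycompany__read_contacts TO lsmb_mycompany__contact_manager;
-- finally, grant the aggregate role to an actual login user
GRANT lsmb_mycompany__contact_manager TO alice;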
Obstacle 2: Authentication methods
Authentication necessarily requires a few specific pieces of information to be passed to the database server, and these must be obtained from the user. The first is a username, and the second is some sort of authentication token. Any intermediary framework must then be able to pass these credentials on to the database in such a way as to properly log in. The token can be of three basic types:
- Password-based authentication, where the token is a memorized password, typed into a terminal,
- Token-based authentication, where the token is supplied from an external source, and
- Two-factor authentication, where both of the above methods are required.
The practical considerations then break down into two questions:
- What can the web server pass on to the database server?
- What methods can the database server use to authenticate the user?
On the web server, for password authentication, the re-use requirement means that the only possible method for authentication is HTTP-Basic.
On the database server, client-side SSL certificates can only be used to authenticate the web server to the database server. They cannot be used to authenticate individual users, and so are not a possible choice for database authentication for a web app of this sort.
Security Tradeoffs
Security, it could be said, is an endless series of tradeoffs. These tradeoffs include not only usability vs. security, but also how mitigations of different concerns are prioritized.
This approach prioritizes mitigating SQL injection and privilege escalation issues in the web application above possible flaws in the PostgreSQL authentication mechanisms and misconfiguration of the web server.
We believe that there is general benefit in pushing authentication back to methods which are better tested and have more peer review. The PostgreSQL community has been, in our experience, extremely professional regarding security, further raising our confidence. So this first trade-off is easily justifiable.
The second tradeoff is a little harder to justify at the outset. If passwords must be re-used by the web server, zero-knowledge authentication systems cannot work. This means the password must be passed in clear text to the web server. This opens up several kinds of attacks, including malicious administrators logging passwords, and eavesdroppers intercepting passwords when the web server is run without SSL. We address the second type of attack by requiring SSL in the default configuration of LedgerSMB.
The issues with malicious system administrators can never be completely mitigated using password authentication. If this is a concern, it's best to look to something like Kerberos (which could be supported on 1.3 with a little bit of effort). With Kerberos support, the web application, the database, and the user can all maintain a healthy distrust of each other.
Other Implementation Details
A few other basic things are worth mentioning here.
The first is that PostgreSQL allows passwords to expire. We make use of this to differentiate between user-set passwords (valid for a configurable length of time) and administratively set passwords (valid for one day). Users can reset their own passwords.
Administrators of the application can issue temporary password resets. The way this works is that the administrator makes up a temporary password for the user; the user then logs in and is prompted to change it.
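Under the hood this relies on PostgreSQL's VALID UNTIL clause for roles. A minimal sketch of an administrative reset, with a hypothetical function name rather than LedgerSMB's actual admin API:

CREATE OR REPLACE FUNCTION admin__temp_reset
(in_username text, in_password text)
RETURNS void AS
$$
BEGIN
    -- set the new password and expire it one day out
    EXECUTE 'ALTER ROLE ' || quote_ident(in_username)
         || ' PASSWORD ' || quote_literal(in_password)
         || ' VALID UNTIL '
         || quote_literal((now() + interval '1 day')::text);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;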
Thursday, October 13, 2011
SODA in LedgerSMB 1.3: A Critical Perspective
Following my Introduction to SODA post, I figured it would be a good idea to discuss how this approach has been implemented in LedgerSMB 1.3, along with a critical perspective on where it has delivered benefits, where it has fallen short of our goals, and what will be done to remedy the situation. Future follow-up posts will flesh out how we will implement this in the database, along with reference mappers in Perl and possibly other languages (PHP and Python are being considered).
Quick Review
SODA, or Service Oriented Database Architecture, is the LedgerSMB project's approach to bridging the object-oriented and relational paradigms. Unlike ORM systems, which design the programming objects first and then map these to a relational model either automatically (by creating tables) or manually (by defining mappings), SODA makes the relational design primary and then maps the relations out to interface types, which eventually will become semantically meaningful enough to define object-oriented interfaces to the underlying relational operations.
SODA is based on the idea that the database is the center of the processing environment, not merely the bottom tier of a specific application. If it is a bottom tier, it is a bottom tier that many applications may use in common, and where consistency and security may need to be enforced across applications. The database must then not only store, retrieve, and manipulate data, but must offer consistent interfaces for doing so. These interfaces should be self-documenting to the point that a code generator can pick up on them and create wrapper classes. Additionally, the database should be designed so that multiple programs can coordinate with each other through the database.
LedgerSMB is closely tied to the PostgreSQL RDBMS, which we believe provides an ideal feature set for addressing these concerns. PostgreSQL is one of the most extensible and powerful database management systems anywhere, it is open source, and it performs well under the workloads we care about.
The inspiration for SODA has come in part from the idea of RESTful web services and the emphasis on semantic clarity and re-use of existing frameworks to achieve loosely coupled, interactive systems.
PostgreSQL features currently used
As of LedgerSMB 1.3, stored procedures and, to a lesser extent, custom types play a central role in our approach. Simple stored procedures are written in SQL, while more complex ones benefit from the syntactic sugar and control structures of PL/PGSQL. For those who haven't played with PL/PGSQL, it's PostgreSQL's answer to Oracle's PL/SQL, both being essentially PL/1-style programming structures with integrated SQL query execution. The language is clear, concise, and powerful enough for virtually all database access, and it performs quite well under virtually all workloads.
Because LedgerSMB 1.3 is the first branch to use this approach, many concepts are necessarily underdeveloped. Additionally, LedgerSMB 1.3 was originally developed with PostgreSQL 8.1 in mind; we started when PostgreSQL 8.3 had just been released, and we now support 8.2 and higher. This means that many of the advanced features of composite types, such as arrays of them, are unavailable in the current version, necessitating sometimes ugly and very un-SODA-like workarounds.
Additionally, the LISTEN/NOTIFY framework is used in some cases, sometimes with queue tables, as a transactional form of inter-process communication; but because LedgerSMB is usually installed as a single application, there are currently few opportunities to make use of this within the software distribution itself.
Implementation on LedgerSMB 1.3
SODA in 1.3 is very stored-procedure-centric. Stored procedures are defined with arguments named for object properties, prefixed with in_ to avoid collisions with column names. The stored procedure names themselves are supposed to be sufficiently descriptive to allow them to be bundled together into classes; in particular, the format is the class name, followed by a double underscore, followed by the method name. In practice this is very hard to maintain, because double and single underscores are not very visually distinct from each other. At the same time, it isn't clear that there is an alternative for naming stored procedures.
On the Perl side, the magic occurs in the two modules which contain the only SQL code in new modules for LedgerSMB 1.3: LedgerSMB and LedgerSMB::DBObject. LedgerSMB.pm's method call_procedure provides low-level query generation for a named stored procedure with enumerated arguments, while LedgerSMB::DBObject's exec_method provides a higher-level mapping from object properties to stored procedure arguments. These provide ways of adding ORDER BY clauses to queries, but few other functions at the moment.
These methods return lists of hashrefs, and handling these is the application's job. One useful method is LedgerSMB->merge(), which merges the properties of a hashref into the current object.
The actual working classes in Perl inherit these methods from DBObject, which allows for extremely simple Perl wrappers around the stored procedures. A typical wrapper may contain only a few lines of code, like this:
=item get_asset_class()
Requires id to be set.
Sets all other standard properties if the record is found.
=cut
sub get_asset_class {
my ($self) = @_;
my ($ref) = $self->exec_method(funcname => 'asset_class__get');
$self->merge($ref);
}
The inputs come from the web interface and are passed straight through to the stored procedures. In other words, the stored procedures define the application, and everything else is written around them.
Benefits Realized
As mentioned in my previous post, this framework in LedgerSMB 1.3 has allowed us to separate relatively agile development on the front end from heavy engineering on the back end. This has allowed us to re-engineer a surprising subset of the application remarkably quickly for customers requiring significant scalability. In general, it is far easier to troubleshoot and correct both misbehavior and poor performance with hand-tuned SQL and PL/PGSQL stored procedures than it is to troubleshoot the queries which ORMs may generate.
Possible Improvements
There are many improvements that can be made to our current implementation. Although the framework has so far yielded many benefits, these are not as great as they could be, or as great as we will need as we get deeper into re-engineering the financial portions of the application. Many of these limitations stem from older PostgreSQL versions and can be addressed by taking advantage of newer features in 8.4 and higher. Others are problems of approach which can be solved by changing our coding conventions. This approach will likely continue to evolve over time.
SODA has on the whole been very helpful, but it has also fallen well short of what we had hoped it would do. It is by no means close to being usable for generating code in a variety of languages, and the interfaces are not as self-documenting or as manageable as we would like.
Shortcoming 1: Complex data handling
The first major shortcoming in LedgerSMB 1.3's design is that PostgreSQL, prior to 8.4, has been unable to handle arrays of complex data types. This means that complex types are relatively flat, and they lack the ability to express the sorts of data structures that object-oriented programmers expect. Starting with 8.4, however, this sort of expressiveness is built into the database back-end.
In 1.3 we employ two strategies for addressing this issue. The first is to use two-dimensional arrays of a lowest-common-denominator type (often TEXT) to simulate arrays of tuples. This, however, breaks discoverability, because the programmer (and hence the software) has to have intimate knowledge of the internal structure of the array.
The second option is to use multiple cross-indexed arrays for different fields. This poses other discoverability issues, in particular that the argument names are insufficient by themselves to tell you the relationship between the arrays. Neither of these is a very good approach, but we have not found better ones for passing large quantities of data into stored procedures on PostgreSQL versions prior to 8.4.
Starting with PostgreSQL 8.4, however, this picture changes dramatically. A tuple can contain members which are themselves arrays of tuples. Thus we get a level of expressiveness in PostgreSQL types which provides capabilities equivalent to those of structured and object-oriented programming. Looking towards LedgerSMB 1.4, we will make clear use of this feature.
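A minimal sketch of what this buys us, using hypothetical type names rather than LedgerSMB's actual schema:

-- PostgreSQL 8.4+: an array of tuples nested inside another tuple
CREATE TYPE journal_line AS (account_id int, amount numeric);
CREATE TYPE journal_entry AS (description text, lines journal_line[]);
-- a single argument can now carry a whole entry, lines and all
SELECT ROW('Office supplies',
           ARRAY[ROW(1, 100.00), ROW(2, -100.00)]::journal_line[]
       )::journal_entry;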
Shortcoming 2: Class management
The second major difficulty at present is class management. The double underscore is a poor separator between class and method name because it is not visually striking enough to readily notice when only a single underscore has been used. Unfortunately there isn't a clear replacement, and schemas have the disadvantage of being a flat namespace. Moreover, schemas are already being considered for organizing the relational aspects of the software, so re-using schemas to solve this problem may cause more harm than good.
In the long run I am looking at requiring that SODA functions accept the composite type that defines their first argument, which also allows for checking of return types that could then be handed off to constructors of other types. This area may not come into play until we begin work on LedgerSMB 2.0, though.
Shortcoming 3: Function Overloading and Interface Discovery
One obvious problem that crops up is function overloading. If you have two functions with the same name and number of arguments, but the arguments differ by type, how does the application discover which interface to use? Worse, if the argument names are the same, the 1.3 mapper will tend to grab the first (usually oldest) function and map the call to it. Our response in 1.3 was to include test cases that check for function overloading and fail if non-whitelisted functions are overloaded. This is far from optimal.
Although this problem was first brought up in relation to the fact that we use a weakly typed language (Perl), the language used has surprisingly little to do with the problem. The fundamental issue is discovery; adding type handling to Perl, or switching to a strongly typed language, does not in fact address the problem at all.
With a discoverable interface, the first thing that must be done is to define the discovery criteria. In LedgerSMB 1.x that criterion is the function name, which means that function overloading is out. In future versions we may add the type of the first argument to the discovery criteria, and in that case allow limited function overloading as long as the first argument types differ.
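To make this concrete, here is a sketch of the sort of catalog query a mapper can use to discover a function's interface, using helper functions available in PostgreSQL 8.4 and later (this is illustrative, not the exact query LedgerSMB runs):

SELECT p.proname,
       pg_get_function_arguments(p.oid) AS arguments,
       pg_get_function_result(p.oid) AS result_type
  FROM pg_proc p
  JOIN pg_namespace n ON n.oid = p.pronamespace
 WHERE n.nspname = 'public'
   AND p.proname = 'asset_class__search';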
Other Areas of Improvement
As we move towards 1.4 and beyond I expect more focus to be given to consistent interfaces for non-object-oriented environments. Views, for example, may be added as consistent interfaces, displaying much of the data that would be accessible via a stored procedure.
Additionally, we are looking at adding more type handling to Perl via Moose, in order to solve the problem of internationalization of date and number formats and to handle other type-specific operations. This work begins in 1.4 and will be ongoing.
Conclusions
While SODA has delivered many benefits to LedgerSMB, it has not yet achieved everything we'd like it to. The goal of fully discoverable interfaces in the database has not been reached, and a great deal of work remains to get there.
Tuesday, October 11, 2011
Next Steps with LedgerSMB 1.3 and 1.4
Now that LedgerSMB 1.3 is released, it's worth looking at the next steps.
The first is to correct new bugs reported by users in this initial phase. All issues deferred in 1.3.0 should be fixed in 1.3.1 unless something really urgent comes up.
The second is to get a few add-ons developed during the 1.3 cycle into a more production-worthy state. These include add-ons for budgeting, and an improved trial balance module which should perform better on big databases and also provide a number of enhanced features, such as the ability to run trial balances for a subset of accounts.
Following this, we hope to see the continued evolution of the 1.3 series through a variety of sources including community support. This will happen during the development of the 1.4 series.
Some work has already been done on 1.4, including removing dependencies on the PostgreSQL contrib modules (replacing connectby() with WITH RECURSIVE common table expressions), and the beginnings of re-engineering the financial schema to be more SODA-like. Of course this means that PostgreSQL 8.4 will be the minimum version usable there.
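As an illustration of the connectby() replacement, a recursive common table expression over a hypothetical self-referential heading table looks something like this (table and column names are illustrative):

WITH RECURSIVE heading_tree AS (
    SELECT id, parent_id, description, 1 AS level
      FROM account_heading
     WHERE parent_id IS NULL
    UNION ALL
    SELECT h.id, h.parent_id, h.description, t.level + 1
      FROM account_heading h
      JOIN heading_tree t ON h.parent_id = t.id
)
SELECT * FROM heading_tree ORDER BY level, id;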
A number of important framework changes will also be made, including depending on Moose for new Perl code and using a number of advanced PostgreSQL features such as arrays of complex types, again in keeping with the move to 8.4 as the minimum required version.
Introduction to SODA
LedgerSMB 1.3 provides a prototype for what I hope to develop in the future as a Service Oriented Database Architecture, or SODA. This is an early version, venturing largely into uncharted territory; I am not aware of these things being done elsewhere. This is one way we are pushing the envelope, and one way we will likely continue to do so.
As a note, LedgerSMB is closely tied to PostgreSQL, and as time goes on is likely to become essentially a PostgreSQL database and some sample applications written in Perl, with reference implementations in Python and PHP for further development.
What is SODA
SODA is an approach to database design which holds that the database should be sufficiently semantically rich to provide code generators with everything (on the model side of an MVC framework) they need to interoperate with the database. Objects and classes are, to the extent feasible, defined in the database and exported to applications, rather than defined in applications and mapped to relations using ORMs. In a database fully implementing SODA, it would be possible to run a code generator in a language of your choice and have it define classes and methods based on database types and stored procedures, which your application could then use immediately.
The key to SODA is to use what tools the database provides to make interfaces discoverable. In PostgreSQL these involve such things as return types and argument names. Examples will follow.
Basic Principles of SODA
- Interfaces must be discoverable, meaning stored procedures, data types, etc. can all be discovered at run time and automatically used.
- The application should be loosely tied to the database, and other applications should be able to use the database easily, from any language.
- Stored procedures should always return useful data. A procedure which inserts, for example, might return the primary key of the row inserted, or even the whole row (allowing, for example, defaults to be filled in); see the sketch after this list. Returning VOID or NULL, or even TRUE, is generally frowned on and to be avoided.
- Security is enforced primarily on the database level, with applications presumed to know what operations their users are permitted to do. If the application fails to enforce security, the database will, and the user may get an error.
- Exceptions in functions should be short and machine-readable. "No password" is fine, but "You forgot to enter a password. Please try again" is not. The application should be responsible for translating this into a useful message for the user. This is partly because translating messages into other languages in the db is problematic using stock procedural languages, and partly because applications may wish to handle exceptions themselves.
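Here is a minimal sketch of the "return useful data" principle, assuming a hypothetical note table (not LedgerSMB's actual schema):

CREATE TABLE note (
    id serial PRIMARY KEY,
    subject text NOT NULL,
    note text
);

CREATE OR REPLACE FUNCTION note__save
(in_subject text, in_note text)
RETURNS note AS
$$
DECLARE new_row note;
BEGIN
    -- return the whole row so the generated key and any
    -- defaults flow back to the caller
    INSERT INTO note (subject, note)
         VALUES (in_subject, in_note)
      RETURNING * INTO new_row;
    RETURN new_row;
END;
$$ LANGUAGE plpgsql;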
Role of RDBMS in SODA-based Applications
SODA envisions the RDBMS not as the lower tier(s) in a multi-tier application infrastructure but as the centerpiece of both the application and its development environment. SODA doesn't necessarily envision everything being in the database, far from it. Nor does SODA necessarily envision that the database takes the place of middleware, but rather the database takes on many traditional middleware roles.
Instead, SODA sees the database in the center of the environment, and assumes that many clients will in fact be connecting to the database for a variety of uses, and that the RDBMS becomes a point of collaboration between applications (queue tables and LISTEN/NOTIFY come in handy here), and hence the point where security must be enforced. Security can be enforced on the procedural or relational level.
Instead of "everything in the database" then, we have the idea that everything which is required to read and write data in a useful manner should be in the database, along with whatever is required to coordinate between applications. The applications can then re-use business logic in a consistent way. Logic which is not required for these two tasks does not belong in the database, and even some coordination frameworks may better reside elsewhere. At the same time, SODA is very stored-procedure centric.
Examples
The following is a stored procedure which provides an ability to search for fixed asset classes (groups of fixed assets depreciated together with the same method):
CREATE OR REPLACE FUNCTION asset_class__search
(in_asset_account_id int, in_dep_account_id int,
in_method int, in_label text)
RETURNS SETOF asset_class_result AS
$$
DECLARE out_var asset_class_result;
BEGIN
FOR out_var IN
SELECT ac.id, ac.asset_account_id, aa.accno, aa.description,
ad.accno, ad.description, m.method, ac.method,
ac.label
FROM asset_class ac
JOIN account aa ON (aa.id = ac.asset_account_id)
JOIN account ad ON (ad.id = ac.dep_account_id)
JOIN asset_dep_method m ON (ac.method = m.id)
WHERE
(in_asset_account_id is null
or in_asset_account_id = ac.asset_account_id)
AND (in_dep_account_id is null OR
in_dep_account_id = ac.dep_account_id)
AND (in_method is null OR in_method = ac.method)
AND (in_label IS NULL OR ac.label LIKE
'%' || in_label || '%')
ORDER BY label
LOOP
RETURN NEXT out_var;
END LOOP;
END;
$$ language plpgsql;
COMMENT ON FUNCTION asset_class__search
(in_asset_account_id int, in_dep_account_id int,
in_method int, in_label text) IS
$$ Returns a list of matching asset classes. The account id's are exact matches
as is the method, but the label is a partial match. NULL's match all.$$;
Now, in the above example the key is in the header to the function:
CREATE OR REPLACE FUNCTION asset_class__search
(in_asset_account_id int, in_dep_account_id int,
in_method int, in_label text)
From the name of the function, we know that this is for asset classes. But that's not all we know. We know exactly what information the stored procedure expects. The variable names, prefixed with in_ to avoid collisions with column names, tell the query generator what information to pull from the HTTP request object to build the search. In this case, it will pull in the asset_account_id, dep_account_id, method, and label properties and map them into the stored procedure call.
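In effect, the generated call looks something like this (a simplified illustration of what the mapper emits, not its literal output):

SELECT * FROM asset_class__search($1, $2, $3, $4);
-- $1..$4 bound from the request's asset_account_id, dep_account_id,
-- method, and label properties; absent properties are passed as NULL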
Benefits of SODA
Thus far LedgerSMB has realized a number of benefits from applying these principles to our database design. The first and most obvious is reduced development time. With SODA, we define our database methods, then write Perl wrappers around the database procedures, and then quickly write HTML templates and workflow scripts. The heavy engineering is where it belongs: in the mapper functions and the database. The rest can be surprisingly agile.
A second important benefit is the fact that we have been able to retrofit real security into a codebase which did not admit to it when we began our fork. By pushing security back into the database, we have been able to leave untrustworthy code untrusted.
Finally, we have found that it is very often far easier to tune hand-written SQL in stored procedures than it is to tune queries generated by, for example, ORMs, allowing us to address scalability problems as they arise.