PgMex Blog


PostgreSQL-Matlab connectivity

The new version of PgMex brings support for Matlab 2020a and PostgreSQL 12 along with performance improvements

We are happy to announce the new release of PgMex 1.2.0!

Major changes:
- added support for Matlab 2019a-2020a and PostgreSQL up to version 12.2

Performance improvements:
- improved stability and performance of batchParamExec

Bug fixes:
- fixed a bug in batchParamExec that occurred when values of scalar types represented in Matlab by strings (numeric, varchar, text, bpchar, name, xml, json) for several tuples are passed as a 2D char matrix with the strings stacked one above the other (this format is possible when all the strings have equal length)
- fixed a bug in batchParamExec occurring when SIsValueNull is passed while SIsNull is empty
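For reference, the "2D char matrix" format mentioned in the first bug fix is plain Matlab behavior, not anything PgMex-specific: strings of equal length can be stacked vertically into a single char matrix, one string per row. A minimal illustration (pure Matlab, no PgMex calls):

```matlab
% Strings of equal length can be stacked into a single 2D char matrix,
% one string per row; this is the value layout the fixed bug concerned.
charMat = ['abc'; 'def'; 'ghi'];
disp(size(charMat));   % a 3-by-3 char matrix
disp(charMat(2, :));   % the second row is the string 'def'
```

Note that vertical concatenation of char rows of unequal length raises an error in Matlab, which is why this stacked format is only possible when all the strings have the same length.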

Free academic licenses for PgMex

We are happy to announce that, starting August 2018, academic licenses for PgMex (a high-performance PostgreSQL client library for Matlab) will be free for full-time educational institutions (universities and colleges)! A free academic license can be obtained by submitting a form with your full name, academic institution, and a valid institutional e-mail address here: http://pgmex.alliedtesting.com/#academic-license. Please note that PgMex academic licenses are provided “as-is” and without full support, though bug reports are accepted. With your academic license we offer six months of free updates. For more information visit the PgMex website at: http://pgmex.alliedtesting.com.

Performance comparison of Postgres connectors in Matlab, Part 3: retrieving arrays

In this paper we continue the investigation of PostgreSQL connectors in Matlab started in Part I and Part II. In Part II we compared the performance of data retrieval from PostgreSQL in the case where all the fields to be retrieved have scalar types. Two connectors were compared: the first was Matlab Database Toolbox (working with PostgreSQL via a direct JDBC connection); the second was the PgMex library (connecting to PostgreSQL via the libpq library). Here we consider the retrieval of data containing values of array types. [Read More]

Performance comparison of PostgreSQL connectors in Matlab, Part II: retrieving scalar data

In Part I of this paper we started our investigation of PostgreSQL connectors in Matlab. Namely, we compared the performance of different approaches to inserting data into a PostgreSQL database. Some of those approaches are based on Matlab Database Toolbox (working with PostgreSQL via a direct JDBC connection); others are based on the PgMex library (connecting to PostgreSQL via the libpq library). Here we continue the comparison of Matlab Database Toolbox and the PgMex library, this time considering data retrieval. This part of the paper covers retrieval only in the simplest case of scalar data, of both numeric and non-numeric types. In the performance benchmarks below we use the same data that was used in the previous article for the data insertion benchmarks. As mentioned previously, this data is based on daily prices of real stocks on several exchanges. Given the nature of such financial data, it is easy to imagine a few real-world scenarios where the ability to retrieve this data in large amounts very quickly is essential. Below we reveal some latent restrictions (concerning performance as well as the volume and type of data to be processed) that prevent our development team from using Matlab Database Toolbox in such scenarios. An alternative solution, the PgMex library, was developed by our team to overcome these restrictions and to allow us to solve financial data processing problems efficiently. [Read More]

Performance comparison of PostgreSQL connectors in Matlab, Part I: inserting data

We actively develop software related to financial modelling and risk estimation. Most of the model prototypes we develop, based on data mining, machine learning, and quantitative analysis, as well as certain parts of our production code, are implemented in Matlab (thanks to JIT compilation and rich visualization capabilities). Naturally, we constantly face the need to process huge amounts of financial data. Data processing usually generates even more data that needs to be stored somewhere in a persistent and consistent manner. There are many ways to achieve that, but for us the most reasonable choice is a relational database server. [Read More]