How big is 'stanford.train_input'?

Did you try it with a small sample dataset, and if so, did that work OK?

On Tue, May 16, 2017 at 9:16 AM, Frank McQuillan <> wrote:
This does not look like a MADlib error.

There are a lot of Greenplum experts who respond to questions on this mailing list: https://groups.google.com/forum/#!forum/gpdb-users
so I would suggest you post your question there.


On Tue, May 16, 2017 at 6:44 AM, Luis Macedo <> wrote:
Try reducing the scope of your data.

It seems there was not enough memory to run the Python code...
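One way to reduce the scope, as a hedged sketch: build a random sample of the training table and run the model on that first. The sample table name and the 10% rate below are illustrative choices, not from the thread.

```sql
-- Illustrative: create a ~10% random sample of the training data
-- so the SVM run fits in the memory Greenplum allows the query.
CREATE TABLE stanford.train_input_sample AS
SELECT *
FROM stanford.train_input
WHERE random() < 0.1;
```

If the training then succeeds on the sample, the failure on the full table points at per-query memory limits rather than a bug in the SQL.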

Luis Macedo | Sr Platform Architect | Pivotal Inc 

Mobile: +55 11 97616-6438

Take care of the customers and the rest takes care of itself

2017-05-16 6:58 GMT-03:00 Dmitry Dorofeev <>:
Hello, we are getting

psql:06_svm_train.sql:11: ERROR:  plpy.SPIError: plpy.SPIError: Function "madlib.__bernoulli_vector(integer,double precision,double precision,double precision,integer)": Memory allocation failed. Typically, this indicates that Greenplum Database limits the available memory to less than what is needed for this input.  (entry db greenplum.luxms:5432 pid=12385) (plpython.c:4648)
CONTEXT:  Traceback (most recent call last):
  PL/Python function "svm_classification", line 26, in <module>
    return svm.svm(**globals())
  PL/Python function "svm_classification", line 983, in svm
  PL/Python function "svm_classification", line 1103, in _transform_w_kernel
  PL/Python function "svm_classification", line 277, in fit
PL/Python function "svm_classification"

for the following SQL:

SELECT madlib.svm_classification ('stanford.train_input',
                                  'polynomial',   --linear | gaussian
                                  'coef0=0',             -- kernel params
                                  '',             -- grouping_col
                                  'max_iter=1,validation_result=stanford.validation_result');   -- max_iter=200

MADlib 1.9.1
Greenplum DB version 4.3ORCA
CentOS Linux release 7.2.1511

Any hint on how we can tune memory settings to avoid such errors?
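For reference, a hedged sketch of the Greenplum knobs usually involved in "Memory allocation failed" errors; the values below are illustrative, not recommendations for this cluster:

```sql
-- Raise the per-statement memory budget for the current session
-- (capped by max_statement_mem; value here is illustrative).
SET statement_mem = '2000MB';

-- Segment-level limits such as gp_vmem_protect_limit are changed
-- cluster-wide via gpconfig and require a restart, e.g.:
--   gpconfig -c gp_vmem_protect_limit -v 8192
```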