

Memory pool exhausted error

Topics related to the set of Machine Learning libraries and Matrix processing algorithms

Mon Jun 22, 2015 8:23 pm

Dear Team,

When I try to run the stacked autoencoder code on the MNIST dataset, I get the error below:

Error: System error: 1301: Memory pool exhausted: pool (1216 pages) exhausted, requested 61 (in Rollup Group G85 E89) (0, 0), 1301

Can someone help me resolve this? Are there any specific requirements to run this code? Thanks in advance.

Thanks Again,
Pooja
chennapooja
 
Posts: 61
Joined: Wed Oct 08, 2014 11:49 pm

Tue Jun 23, 2015 4:57 pm

Hi Pooja,

Please make sure you are reading the data correctly. Run the stacked autoencoder program up to line 795, where it reads the data, then simply print the result (OUTPUT(input_data_tmp, NAMED('data'));) and check that the data was read correctly. Each row of the output should show features f1 through f784 plus the label field.
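For example, a minimal sketch of that check (the CHOOSEN row limit is just an illustrative addition, not part of the original example):

// Print the first 10 rows of the dataset that was just read in,
// so the f1..f784 pixel fields and the label can be inspected.
OUTPUT(CHOOSEN(input_data_tmp, 10), NAMED('data'));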

-Maryam
maryamregister
 
Posts: 7
Joined: Tue Jun 23, 2015 4:51 pm

Tue Jun 23, 2015 10:58 pm

Dear Maryam,

Thanks for the inputs.

When I print input_data_tmp, I see zeroes everywhere. Shouldn't the pixel features change for different digits? Does this mean the data is corrupted?
I also have a doubt: I sprayed only the images file "train-images.idx3-ubyte" to Thor, not the labels file "train-labels.idx3-ubyte". So can I run it after removing the label from the value_record type?
I tried executing it, but I got no output and no errors even after 2 hours. How long does execution usually take?

Thanks in advance,
Pooja.


chennapooja
 
Posts: 61
Joined: Wed Oct 08, 2014 11:49 pm

Wed Jun 24, 2015 2:00 pm

Zeros are fine; they belong to the background in the digit images.
Yes, you can remove the label from the value_record type. The stacked sparse autoencoder algorithm does not need labels. I had sprayed the MNIST data along with the labels, so I had to separate them first, but you don't need to. Please look at Stacked_SparseAutoencoder_test to see an example on a toy dataset and what you should do.
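For illustration, a minimal sketch of a layout with and without the label column (the field names and types here are assumptions, not the exact layout used by the library):

// Layout when images and labels are sprayed together:
value_record_with_label := RECORD
  UNSIGNED id;
  REAL8 f1;
  REAL8 f2;
  REAL8 f3;
  UNSIGNED1 label;
END;

// Layout when only the image file is sprayed (no label column):
value_record := RECORD
  UNSIGNED id;
  REAL8 f1;
  REAL8 f2;
  REAL8 f3;
END;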
maryamregister
 
Posts: 7
Joined: Tue Jun 23, 2015 4:51 pm

Wed Jun 24, 2015 2:09 pm

Thanks Maryam.

But I have done exactly that: I removed the labels and executed, but it takes a very long time to run. I get no errors and no output, so I aborted the execution after 800 minutes. Does it normally take this long to produce output? I also checked the other sample example with 6 input samples, and that one executes completely.
chennapooja
 
Posts: 61
Joined: Wed Oct 08, 2014 11:49 pm

Wed Jun 24, 2015 2:35 pm

What maximum iteration number are you using?
Begin with a low number and make sure the algorithm works, then increase the maximum number of iterations.
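For instance, a hedged sketch (the constant name is just illustrative, not the library's actual parameter identifier):

// Start with a single iteration so the job finishes quickly; pass this
// constant to the stacked sparse autoencoder call in place of a hard-coded
// value, and raise it once the run is known to complete.
MaxIter := 1;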
maryamregister
 
Posts: 7
Joined: Tue Jun 23, 2015 4:51 pm

Wed Jun 24, 2015 2:59 pm

It's only 2 iterations. Anyway, I will try with 1 and check it out.


Thanks a lot,
Pooja.
chennapooja
 
Posts: 61
Joined: Wed Oct 08, 2014 11:49 pm

Wed Jun 24, 2015 3:22 pm

With 2 iterations it takes almost 2 minutes.
maryamregister
 
Posts: 7
Joined: Tue Jun 23, 2015 4:51 pm

Wed Jun 24, 2015 9:27 pm

Dear Maryam,

When I aborted my previous execution and re-executed it, I am getting the following error again: "System error: 0: Graph[122], hashdistribute[126]: SLAVE #1 [10.0.1.1:20100]: FastLZExpander - corrupt data(1) 0 0, Received from node: 10.0.1.3:20100". I have checked my input_data_tmp and it is the same as before. I get the output for indepDataC, but it then fails with the above error. I am not sure why it is failing now when the same code did not give even an error before.
I get this error only at times. Can you guide me as to what the reason could be?

Thanks,
Pooja.
chennapooja
 
Posts: 61
Joined: Wed Oct 08, 2014 11:49 pm

Thu Jun 25, 2015 3:50 pm

My guess is that the problem is your data: either you have not sprayed it correctly, or something else is wrong with it.
Make a simple text file with a toy dataset, for example something as simple as the data below:
1,0,2
3,4,6
3,7,8
4,9,8

Then try to spray and read the data, and then use it with the stacked sparse autoencoder to see if it works.
Also, you can't spray "train-images.idx3-ubyte" directly. First you have to convert it to a comma-separated text file and then spray it. You can use "loadMNISTImages.m" from the Stanford deep learning files to do that.
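As an illustration, a minimal sketch of reading such a toy file after spraying it (the logical file name and field names are placeholders, not the names used in the library examples):

toy_record := RECORD
  REAL8 f1;
  REAL8 f2;
  REAL8 f3;
END;

// '~thor::toy_dataset' stands in for whatever logical name the CSV was sprayed to.
toy_data := DATASET('~thor::toy_dataset', toy_record, CSV(SEPARATOR(',')));
OUTPUT(toy_data, NAMED('toy_data'));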
maryamregister
 
Posts: 7
Joined: Tue Jun 23, 2015 4:51 pm
