Efficient UNIX Memory management for Running MapReduce Jobs.
We are trying to set up a single-node Cloudera Hadoop cluster on a Linux machine with 16 GB of RAM. We are installing version 5.4.2.
Now, when we check the statistics after the installation and run the top command, we find that only 1-2 GB is available. When we trigger a sample MapReduce job, no memory is allocated to it, so the job doesn't run.
Can you please let us know what we should do so that more memory is available?
Analysis of the top command
Cloudera services take 4-5 GB.
MySQL takes 6 GB (external database used to store the metastore).
Other services take 2-3 GB.
That accounts for roughly 13 GB out of the 16 GB.
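As a rough sketch of where to look (assuming CDH 5.x on a stock Linux box; the MySQL and YARN property names below are standard, but the values are only illustrative examples, not tuned recommendations):

free -m                         # memory held in buffers/cache is reclaimable, so real headroom is more than the "free" column suggests
ps aux --sort=-rss | head -15   # largest resident-memory processes first

# /etc/my.cnf (illustrative value): a metastore-only MySQL rarely needs a huge buffer pool
# [mysqld]
# innodb_buffer_pool_size = 1G

# yarn-site.xml (illustrative values): tell YARN how much it may hand out to containers
# <property><name>yarn.nodemanager.resource.memory-mb</name><value>4096</value></property>
# <property><name>yarn.scheduler.maximum-allocation-mb</name><value>2048</value></property>

# mapred-site.xml (illustrative values): per-task requests must fit within the limits above
# <property><name>mapreduce.map.memory.mb</name><value>1024</value></property>
# <property><name>mapreduce.reduce.memory.mb</name><value>1024</value></property>

On a 16 GB single-node setup, trimming the MySQL buffer pool and telling YARN how much memory it may actually allocate to containers often frees enough headroom for the sample job to get a container.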
Hello all.
I have a script that uses two arrays at the beginning. It saves certain values that I am extracting from the df -h command:
array1[i] and array2[i], where i goes from 0 to 9.
It then goes on and saves the values of the arrays into variables:
for i from 0 to 9, tmp=array2[i] // I am not writing the... (4 Replies)
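A minimal bash sketch of the pattern that post describes (the field choices and the 0-9 range are assumptions, since the original script is only partially quoted):

#!/bin/bash
# Collect two columns of df -h into parallel arrays (skip the header line).
# Here array1 holds the filesystem names and array2 the use%; pick the
# fields you actually need.
i=0
while read -r fs size used avail usep mount; do
    array1[i]=$fs
    array2[i]=$usep
    i=$((i + 1))
done < <(df -h | tail -n +2)

# Copy each array element into a scalar variable for further processing.
for ((i = 0; i < 10; i++)); do
    tmp=${array2[i]}
    echo "entry $i: ${array1[i]} uses $tmp"
done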
Hello all, I have a quick question. I work in a computational science laboratory, and we recently got a few Mac Pros to do molecular optimizations on. However, our normal supercomputers have queue systems, mainly PBS.
Anyway, the Macs obviously don't have PBS, but I've read about... (0 Replies)
Hi, I give an exit to the system, but it says that I have running jobs... so I do a ps and it displays two lines: one is -ksh and the other is the ps that I am issuing...
Then I run who -uH, find my pts... then do a grep... still the same...
What's wrong? (6 Replies)
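For that last question, one thing worth checking (a sketch, assuming ksh or another POSIX shell, not a diagnosis of the original poster's session): the "You have running jobs" warning refers to the shell's own background or stopped jobs, which a plain ps may not make obvious; the jobs builtin lists them directly.

jobs -l     # list this shell's stopped/background jobs with their PIDs
fg %1       # resume job 1 in the foreground and let it finish ...
kill %1     # ... or terminate it instead
exit        # with the job table empty, exit no longer warns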