ID #1087

How do I use Colama to set up a grid?

Colama allows you to set up a Compute Grid Infrastructure using OpenMPI.

 

Steps to set up the Colama compute cluster slave nodes:

  1. Set up the master image.
    • Install/Clone an Ubuntu machine available on the Colama server.
    • Start the virtual machine, grab the console and log in to the machine.
    • Fetch the install script from the Colama server and execute it:

      # Fetch the script from the server
      $ wget http://<colama-server name>/install-mpi.sh
      # Execute the script with root privileges
      $ sudo bash install-mpi.sh

    • Optional: You can provide a name for your cluster by editing the file /etc/init/infracc-mpi-cluster.conf and changing the line env="default" to env="<custom cluster>" (see the snippet after this list).
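For example, a minimal sketch of that optional rename, assuming the file installed by install-mpi.sh still contains the env="default" line ("mycluster" is just a hypothetical name):

# Replace the default cluster name with a custom one
$ sudo sed -i 's/env="default"/env="mycluster"/' /etc/init/infracc-mpi-cluster.conf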


2. Now snapshot the machine. This creates the golden image from which the slave nodes will be deployed.

Note: Please wait for the snapshot to complete. The progress can be monitored in the Jobs tab.

3. On completion of the snapshot job, go to "Library" and select "Grid Deploy" from the ops menu.

4. Enter the Grid Name, Size and comments for the grid in the form. You will also have the option to deploy the cluster in Shared Deploy mode.


 

5. After the grid is successfully deployed, you will be able to see it in the Deployments -> Labs menu. Start the grid and your slaves are up and running. You can then run a quick check from the master node, as shown below.
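For instance, a quick sanity check, assuming the deployment generates the OpenMPI hostfile at /home/mpiuser/beowulf/default/mpi_hosts (the same path used in the run command further below) and that the slaves are reachable from the master:

# List the hosts registered for the grid
$ cat /home/mpiuser/beowulf/default/mpi_hosts
# Launch a trivial command on every node to confirm they respond
$ mpirun --hostfile /home/mpiuser/beowulf/default/mpi_hosts hostname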

Notes:

  1. This will also set up the infrastructure required for PVM.
  2. This script will only run on Ubuntu machines.
  3. With this infrastructure setup, any machine can assume the role of the master node.
  4. This is an experimental setup. You can use it to verify the logical correctness of your programs.
  5. If you want more computation power, you can follow Step 1 on any standard Ubuntu physical machine and connect it to the same network as your Colama server (see the sketch after this list).
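As a rough sketch of note 5, assuming the physical Ubuntu machine can reach the Colama server over the network, the same commands from Step 1 apply on it:

# On the physical machine: fetch and run the same install script
$ wget http://<colama-server name>/install-mpi.sh
$ sudo bash install-mpi.sh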

Sample Code

 

#include <stdio.h>
#include <stdlib.h>

#include <mpi.h>

int main(int argc, char *argv[]) 
{
  const int MASTER = 0;
  const int TAG_GENERAL = 1;
	
  int numTasks;
  int rank;
  int source;
  int dest;
  int rc;
  int count;
  int dataWaitingFlag;

  char inMsg;
  char outMsg;
	
  MPI_Status Stat;

  // Initialize the MPI stack and pass 'argc' and 'argv' to each slave node
  MPI_Init(&argc,&argv);

  // Gets number of tasks/processes that this program is running on
  MPI_Comm_size(MPI_COMM_WORLD, &numTasks);

  // Gets the rank (process/task number) that this program is running on 
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // If the master node
  if (rank == MASTER) {
	
    // Send out messages to all the sub-processes
    for (dest = 1; dest < numTasks; dest++) {
      outMsg = rand() % 256;	// Generate random message to send to slave nodes

      // Send a message to the destination	
      rc = MPI_Send(&outMsg, 1, MPI_CHAR, dest, TAG_GENERAL, MPI_COMM_WORLD);			
      printf("Task %d: Sent message %d to task %d with tag %d\n",
             rank, outMsg, dest, TAG_GENERAL);
    }
		
  } 

  // Else a slave node
  else  {
    // Wait until a message is there to be received	
    do {
      MPI_Iprobe(MASTER, TAG_GENERAL, MPI_COMM_WORLD, &dataWaitingFlag, MPI_STATUS_IGNORE);
      printf("Waiting\n");
    } while (!dataWaitingFlag);

    // Get the message and put it in 'inMsg'
    rc = MPI_Recv(&inMsg, 1, MPI_CHAR, MASTER, TAG_GENERAL, MPI_COMM_WORLD, &Stat);

    // Get how big the message is and put it in 'count'
    rc = MPI_Get_count(&Stat, MPI_CHAR, &count);
    printf("Task %d: Received %d char(s) (%d) from task %d with tag %d \n",
            rank, count, inMsg, Stat.MPI_SOURCE, Stat.MPI_TAG);
		
  }

  MPI_Finalize();

  return 0;
}



Steps to run the sample code:

 

1. Copy and paste the above code into your favorite text editor and save it as mpi_test.c.

2. Run the following commands to compile and run your code:

# Compile your program
$ mpicc mpi_test.c -o mpi_test
# Run the program on 5 processes
$ mpirun -np 5 --hostfile /home/mpiuser/beowulf/default/mpi_hosts mpi_test
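If you need to inspect or hand-edit the hostfile passed to mpirun, a standard OpenMPI hostfile looks like the sketch below. The actual host names and slot counts are generated by the grid deployment, so these entries are purely illustrative:

# /home/mpiuser/beowulf/default/mpi_hosts (illustrative entries only)
node01 slots=2
node02 slots=2
node03 slots=1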

