

MPI cluster on Amazon EC2

From Xgu.ru


Description

Original: [1], [2]
Quick link: http://xgu.ru/wiki/ec2/mpi

On-Demand MPI Cluster with Python and EC2 (part 1 of 3)

Posted by Peter Skomoroch on March 17th 2007 to Cluster Computing, Python, MPI, Amazon EC2, numpy

In this post, we will build a 20 node Beowulf cluster on Amazon EC2 and run some computations using both MPI and its Python wrapper pyMPI. This tutorial will only describe how to get the cluster running and show a few example computations. I’ll save detailed benchmarking for a later write-up.

One way to build an MPI cluster on EC2 would be to customize something like Warewulf or rebundle one of the leading linux cluster distributions like Parallel Knoppix or the Rocks Cluster Distribution onto an Amazon AMI. Both of these distros have kernels which should work with EC2. To get things running quickly as a proof of concept, I implemented a “roll-your-own” style cluster based on a Fedora Core 6 AMI managed with some simple Python scripts. I’ve found this approach suitable for running occasional parallel computations on EC2 with 20 nodes and have been running a cluster off and on for several months without any major issues. If you need to run a much larger cluster or require more complex user management, I’d recommend modifying one of the standard distributions. This will save you from some maintenance headaches and give you the additional benefit of the user/developer base for those systems.

The main task I use the cluster for is distributing large matrix computations, which is a problem well suited to existing libraries based on MPI. Depending on your needs, another platform such as Hadoop, Rinda, or cow.py might make more sense. I use Hadoop for some other projects, including MapReduce style tasks with Jython, and highly recommend it. That said, lets start building the MPI cluster…

The only prerequisite we assume is that the tutorial on Amazon EC2 has been completed and all needed web service accounts, authorizations, and keypairs have been created.

The command blocks beginning with peter-skomorochs-computer:~ pskomoroch$ are run on my local laptop; commands preceded by -bash-3.1# or [lamuser@domu-12-31-33-00-03-46 ~]$ are run on EC2.

It's looking like this will be a long tutorial, so I'll break it into three parts…

Update: March 5, 2007 - I'm in the process of publishing a public AMI, and have changed a few things in the tutorial. The steps describing copying over rsa keys have been moved from this post to part 2 of the tutorial. People interested in testing an MPI cluster on EC2 can skip all the installs and just use my example AMI with your own keys as described in part 2.

Tutorial Contents: Part 1 of 3

  1. Fire Up a Base Image
        1. Amazon AWS AMI tools install
  2. Rebundle a Larger Base Image
  3. Uploading the AMI to Amazon S3
  4. Registering the Larger Base Image
  5. Modifying the Larger Image
        1. Yum Installs
        2. ACML Install
        3. Cblas Install
        4. Compile Numpy
        5. Scipy Install
        6. MPICH2 Install
        7. PyMPI install
        8. PyTables Install
        9. Configuration and Cleanup
       10. Creating a non-root user
       11. Adding the S3 Libraries
  6. Rebundle the compute node image
  7. Upload node AMI to Amazon S3
  8. Register Compute Node Image

Part 2 of 3

  1. Launching the EC2 nodes
  2. Cluster Configuration and Booting MPI
  3. Testing the MPI Cluster
  4. Changing the Cluster Size
  5. Cluster Shutdown

Part 3 of 3

  1. Basic MPI Cluster Administration on EC2 with Python
  2. Example application: Parallel Distributed Matrix Multiplication with PyMPI and Numpy
  3. Benchmarking EC2 for MPI

Part 1

Fire Up a Base Image

We will build our cluster on top of the Fedora Core 6 base image published by "marcin the cool". Navigate to your local bin directory holding the Amazon EC2 developer tools and fire up the public image:

peter-skomorochs-computer:~ pskomoroch$ ec2-run-instances ami-78b15411 -k gsg-keypair
RESERVATION     r-e264818b      027811143419    default
INSTANCE        i-2b1efa42      ami-78b15411    pending gsg-keypair     0

To check on the status of the instance run the following:

peter-skomorochs-computer:~ pskomoroch$ ec2-describe-instances i-2b1efa42
RESERVATION     r-e264818b      027811143419    default
INSTANCE        i-2b1efa42      ami-78b15411    domU-12-31-33-00-03-46.usma1.compute.amazonaws.com      running gsg-keypair     0

The status has changed from “pending” to “running”, so we are ready to ssh into the instance as root:

peter-skomorochs-computer:~ pskomoroch$ ssh -i id_rsa-gsg-keypair root@domU-12-31-33-00-03-46.usma1.compute.amazonaws.com
The authenticity of host 'domu-12-31-33-00-03-46.usma1.compute.amazonaws.com' can't be established.
RSA key fingerprint is ZZZZZZ
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'domu-12-31-33-00-03-46.usma1.compute.amazonaws.com' (RSA) to the list of known hosts.
-bash-3.1#

Here are some basic stats on the EC2 machine:

$ cat /proc/cpuinfo
processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 37
model name      : AMD Opteron(tm) Processor 250
stepping        : 1
cpu MHz         : 2405.452
cache size      : 1024 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu tsc msr pae mce cx8 apic mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx  mmxext fxsr_opt lm 3dnowext 3dnow pni lahf_lm ts fid vid ttp
bogomips        : 627.50

The first change we make will be to modify the ssh properties to avoid timeouts:

Edit /etc/ssh/sshd_config and add the following line:

ClientAliveInterval 120

This image boots up fast, but it is missing a lot of basics along with the MPI libraries and Amazon AMI packaging tools. The main partition is fairly small, so before we start our installs, we will need to rebundle a larger version.

In order to rebundle, we need the Amazon developer tools installed…

Amazon AWS AMI tools install

Install the Amazon AWS ami tools from the rpm:

 yum -y install wget nano tar bzip2 unzip zip fileutils
 yum -y install ruby
 yum -y install rsync make
 cd /usr/local/src
 wget http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.noarch.rpm
 rpm -i ec2-ami-tools.noarch.rpm

Rebundle a Larger Base Image

Copy over the pk/cert files:

peter-skomorochs-computer:~ pskomoroch$ scp -i id_rsa-gsg-keypair ~/.ec2/pk-FOOXYZ.pem ~/.ec2/cert-BARXYZ.pem root@domU-12-31-33-00-03-46.usma1.compute.amazonaws.com:/mnt/
pk-FOOXYZ.pem                                 100%  721     0.7KB/s   00:00
cert-BARXYZ.pem                               100%  689     0.7KB/s   00:00
peter-skomorochs-computer:~ pskomoroch$

Using the -s parameter, we boost the trimmed-down Fedora Core 6 image from 1.5 GB to 5.5 GB so we have room to install more packages (substitute your own cert and user option values from the Amazon tutorial).

-bash-3.1# ec2-bundle-vol -d /mnt -k /mnt/pk-FOOXYZ.pem -c /mnt/cert-BARXYZ.pem -u 99999ABC -s 5536
Copying / into the image file /mnt/image…
Excluding:

1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.015051 seconds, 69.7 MB/s
mke2fs 1.39 (29-May-2006)
warning: 256 blocks unused.

Bundling image file…
Splitting /mnt/image.tar.gz.enc…
Created image.part.00
Created image.part.01
Created image.part.02
Created image.part.03
Created image.part.04
Created image.part.05
Created image.part.06
Created image.part.07
Created image.part.08
Created image.part.09
Created image.part.10
Created image.part.11
Created image.part.12
Created image.part.13
Created image.part.14
…<snip>
Created image.part.39
Created image.part.40
Created image.part.41
Generating digests for each part…
Digests generated.
Creating bundle manifest…
ec2-bundle-vol complete.
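As a quick check on the -s arithmetic (a Python one-liner; the 10240 MB ceiling comes from the ec2-bundle-vol help text shown later in this post):

```python
# ec2-bundle-vol's -s flag takes megabytes; 5536 MB is the roughly
# "5.5 GB" image quoted above, comfortably under the 10240 MB (10 GB) cap.
size_mb = 5536
size_gb = size_mb / 1024.0
print("%.2f GB" % size_gb)   # 5536/1024 = 5.40625, prints "5.41 GB"
assert size_mb <= 10240      # ec2-bundle-vol's maximum image size
```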

Uploading the AMI to Amazon S3

This step is identical to the Amazon tutorial; use your own Amazon-assigned AWS Access Key ID (aws-access-key-id) and AWS Secret Access Key (aws-secret-access-key). I'll use the following values in the code examples:

   * Access Key ID: 1AFOOBARTEST
   * Secret Access Key: F0Bar/T3stId

bash-3.1# ec2-upload-bundle -b FC6_large_base_image -m /mnt/image.manifest.xml -a 1AFOOBARTEST -s F0Bar/T3stId

Setting bucket ACL to allow EC2 read access …
Uploading bundled AMI parts to https://s3.amazonaws.com:443/FC6_large_base_image …
Uploaded image.part.00 to https://s3.amazonaws.com:443/FC6_large_base_image/image.part.00.
Uploaded image.part.01 to https://s3.amazonaws.com:443/FC6_large_base_image/image.part.01.
…
Uploaded image.part.48 to https://s3.amazonaws.com:443/FC6_large_base_image/image.part.48.
Uploaded image.part.49 to https://s3.amazonaws.com:443/FC6_large_base_image/image.part.49.
Uploading manifest …
Uploaded manifest to https://s3.amazonaws.com:443/FC6_large_base_image/image.manifest.xml.
ec2-upload-bundle complete

The upload will take several minutes…

Registering the Larger Base Image

To register the new image with Amazon EC2, we switch back to our local machine and run the following:

peter-skomorochs-computer:~/src/amazon_ec2 pskomoroch$ ec2-register FC6_large_base_image/image.manifest.xml
IMAGE   ami-3cb85d55

Included in the output is an AMI identifier (ami-3cb85d55 in the example above), which we will use as our base for building the compute nodes.
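When scripting these steps, the id can be pulled straight out of ec2-register's output instead of being copied by hand. A small sketch, demonstrated on the output line captured above:

```python
# ec2-register prints a line of the form "IMAGE <ami-id>"; grab field 2
# so follow-up ec2-run-instances calls can use it without copy/paste.
def parse_ami_id(register_output):
    for line in register_output.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "IMAGE":
            return fields[1]
    return None

print(parse_ami_id("IMAGE   ami-3cb85d55"))  # prints "ami-3cb85d55"
```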

Modifying the Larger Image

We need to start an instance of the larger image we registered and install some needed libraries.

First, start the new image:

peter-skomorochs-computer:~ pskomoroch$ ec2-run-instances ami-3cb85d55 -k gsg-keypair
RESERVATION     r-e264818b      027811143419    default
INSTANCE        i-2z1efa32      ami-3cb85d55    pending gsg-keypair     0

Wait for a hostname so we can ssh into the instance…

peter-skomorochs-computer:~ pskomoroch$ ec2-describe-instances i-2z1efa32
RESERVATION     r-e264818b      027811143419    default
INSTANCE        i-2z1efa32      ami-3cb85d55    domU-12-31-33-00-03-57.usma1.compute.amazonaws.com      running gsg-keypair     0

ssh in as root:

peter-skomorochs-computer:~ pskomoroch$ ssh -i id_rsa-gsg-keypair root@domU-12-31-33-00-03-57.usma1.compute.amazonaws.com
The authenticity of host 'domu-12-31-33-00-03-57.usma1.compute.amazonaws.com' can't be established.
RSA key fingerprint is 23:XY:FO…
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'domu-12-31-33-00-03-57.usma1.compute.amazonaws.com' (RSA) to the list of known hosts.
-bash-3.1#
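The same parsing idea applies to ec2-describe-instances: once the INSTANCE line shows "running", its fourth field is the public hostname to ssh into. A sketch, tested against the output captured above:

```python
# Pull the public hostname out of ec2-describe-instances output once the
# instance state is "running" (while pending, the hostname field is absent).
def parse_hostname(describe_output):
    for line in describe_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "INSTANCE" and "running" in fields:
            return fields[3]
    return None

sample = ("RESERVATION r-e264818b 027811143419 default\n"
          "INSTANCE i-2z1efa32 ami-3cb85d55 "
          "domU-12-31-33-00-03-57.usma1.compute.amazonaws.com "
          "running gsg-keypair 0")
print(parse_hostname(sample))  # prints the domU-… public hostname
```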

Yum Installs

Run the following yum installs to get some needed libraries:

 yum -y install python-devel
 yum -y install gcc
 yum -y install gcc-c++
 yum -y install subversion gcc-gfortran
 yum -y install fftw-devel swig
 yum -y install compat-gcc-34 compat-gcc-34-g77 compat-gcc-34-c++ compat-libstdc++-33 compat-db compat-readline43
 yum -y install hdf5-devel
 yum -y install readline-devel
 yum -y install python-numeric python-numarray Pyrex
 yum -y install python-psyco
 yum -y install wxPython-devel zlib-devel freetype-devel tk-devel tkinter gtk2-devel pygtk2-devel libpng-devel
 yum -y install octave

ACML Install

For improved performance in matrix operations, we will want to install processor specific math libraries. Since the Amazon machines run on AMD Opteron processors, we will install ACML instead of Intel MKL.

   * Log in to the AMD developer page.
   * Download acml-3-6-0-gnu-32bit.tgz and scp the archive over to the EC2 instance.

peter-skomorochs-computer:~ pskomoroch$ scp acml-3-6-0-gnu-32bit.tgz root@domU-12-31-33-00-03-57.usma1.compute.amazonaws.com:/usr/local/src/
acml-3-6-0-gnu-32bit.tgz                      100% 9648KB  88.5KB/s   01:49

   * To install ACML, decompress the archive, run the install script, and accept the license. Note where ACML installs (in my case /opt/acml3.6.0/).
   * cd into the examples directory (/opt/acml3.6.0/gnu32/examples/) and run the tests by issuing make.

-bash-3.1# chmod +x /usr/lib/gcc/i386-redhat-linux/3.4.6/libg2c.a
-bash-3.1# ln -s /usr/lib/gcc/i386-redhat-linux/3.4.6/libg2c.a /usr/lib/libg2c.a
-bash-3.1# cd /usr/local/src/
-bash-3.1# ls
acml-3-6-0-gnu-32bit.tgz  ec2-ami-tools.noarch.rpm
-bash-3.1# tar -xzvf acml-3-6-0-gnu-32bit.tgz
contents-acml-3-6-0-gnu-32bit.tgz
install-acml-3-6-0-gnu-32bit.sh
README.32-bit
ACML-EULA.txt
-bash-3.1# bash install-acml-3-6-0-gnu-32bit.sh

Add the libraries to the default path by adding the following to /etc/profile:

 export LD_LIBRARY_PATH=/opt/acml3.6.0/gnu32/lib
 export LD_RUN_PATH=/opt/acml3.6.0/gnu32/lib
Example of running the ACML tests:

-bash-3.1# cd /opt/acml3.6.0/gnu32/examples/
-bash-3.1# make

Compiling program cdotu_c_example.c:
gcc -c -I/opt/acml3.6.0/gnu32/include -m32 cdotu_c_example.c -o cdotu_c_example.o
Linking program cdotu_c_example.exe:
gcc -m32 cdotu_c_example.o /opt/acml3.6.0/gnu32/lib/libacml.a -lg2c -lm -o cdotu_c_example.exe
Running program cdotu_c_example.exe:
(export LD_LIBRARY_PATH='/opt/acml3.6.0/gnu32/lib:/opt/acml3.6.0/gnu32/lib'; ./cdotu_c_example.exe > cdotu_c_example.res 2>&1)

ACML example: dot product of two complex vectors using cdotu
------------------------------------------------------------

Vector x: ( 1.0000, 2.0000)
          ( 2.0000, 1.0000)
          ( 1.0000, 3.0000)

Vector y: ( 3.0000, 1.0000)
          ( 1.0000, 4.0000)
          ( 1.0000, 2.0000)

r = x.y = ( -6.000, 21.000)

Compiling program cfft1d_c_example.c:
gcc -c -I/opt/acml3.6.0/gnu32/include -m32 cfft1d_c_example.c -o cfft1d_c_example.o
Linking program cfft1d_c_example.exe:
gcc -m32 cfft1d_c_example.o /opt/acml3.6.0/gnu32/lib/libacml.a -lg2c -lm -o cfft1d_c_example.exe
Running program cfft1d_c_example.exe:
(export LD_LIBRARY_PATH='/opt/acml3.6.0/gnu32/lib:/opt/acml3.6.0/gnu32/lib'; ./cfft1d_c_example.exe > cfft1d_c_example.res 2>&1)

ACML example: FFT of a complex sequence using cfft1d
----------------------------------------------------

Components of discrete Fourier transform:

         Real    Imag
  0   ( 2.4836,-0.4710)
  1   (-0.5518, 0.4968)
  2   (-0.3671, 0.0976)
  3   (-0.2877,-0.0586)
  4   (-0.2251,-0.1748)
  5   (-0.1483,-0.3084)
  6   ( 0.0198,-0.5650)

Original sequence as restored by inverse transform:

           Original            Restored
         Real    Imag        Real    Imag
  0   ( 0.3491,-0.3717)   ( 0.3491,-0.3717)
  1   ( 0.5489,-0.3567)   ( 0.5489,-0.3567)
  2   ( 0.7478,-0.3117)   ( 0.7478,-0.3117)
  3   ( 0.9446,-0.2370)   ( 0.9446,-0.2370)
  4   ( 1.1385,-0.1327)   ( 1.1385,-0.1327)
  5   ( 1.3285, 0.0007)   ( 1.3285, 0.0007)
  6   ( 1.5137, 0.1630)   ( 1.5137, 0.1630)


ACML example: solution of linear equations using sgetrf/sgetrs
--------------------------------------------------------------

Matrix A:

 1.8000   2.8800   2.0500  -0.8900
 5.2500  -2.9500  -0.9500  -3.8000
 1.5800  -2.6900  -2.9000  -1.0400
-1.1100  -0.6600  -0.5900   0.8000

Right-hand-side matrix B:

 9.5200  18.4700
24.3500   2.2500
 0.7700 -13.2800
-6.2200  -6.2100

Solution matrix X of equations A*X = B:

 1.0000   3.0000
-1.0000   2.0000
 3.0000   4.0000
-5.0000   1.0000

Testing: no example difference files were generated.
Test passed OK
-bash-3.1#

If everything checks out, the next step is to compile a version of cblas from source.

Cblas Install

See http://www.netlib.org/blas/ for more details.

   * Download the cblas source code and unzip it into /usr/local/src.

To compile, we follow George Nurser's writeup (thanks for the help on this part, George). For the 32-bit EC2 machines, we changed the compile flags in /usr/local/src/CBLAS/Makefile.LINUX to:

CFLAGS = -O3 -DADD_ -pthread -fno-strict-aliasing -m32 -msse2 -mfpmath=sse -march=opteron -fPIC
FFLAGS = -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=opteron -mmmx -msse2 -msse -m3dnow
RANLIB = ranlib
BLLIB = /opt/acml3.6.0/gnu32/lib/libacml.so
CBDIR = /usr/local/src/CBLAS

Next we link Makefile.LINUX to Makefile.in and execute "make". The resulting cblas_LINUX.a must then be copied to libcblas.a in the same directory as libacml.so:

-bash-3.1# cd /usr/local/src/CBLAS
-bash-3.1# ln -s Makefile.LINUX Makefile.in
-bash-3.1# make all
-bash-3.1# cd /usr/local/src/CBLAS/lib/LINUX
-bash-3.1# cp cblas_LINUX.a /opt/acml3.6.0/gnu32/lib/libcblas.a
-bash-3.1# cd /opt/acml3.6.0/gnu32/lib/
-bash-3.1# chmod +x libcblas.a

This directory then needs to be added to the $LD_LIBRARY_PATH and $LD_RUN_PATH before we compile numpy.

export LD_LIBRARY_PATH=/opt/acml3.6.0/gnu32/lib
export LD_RUN_PATH=/opt/acml3.6.0/gnu32/lib

Compile Numpy

Compile numpy from source:

cd /usr/local/src
svn co http://svn.scipy.org/svn/numpy/trunk/ ./numpy-trunk
cd numpy-trunk

Before building numpy with setup.py, we need to configure a site.cfg file in both the numpy-trunk directory and the distutils subdirectory. I overlooked this the first time, which resulted in a slower default numpy install that was missing the ACML-optimized lapack and blas. If the install fails, make sure that you get rid of earlier tries with:

rm -rf /usr/lib/python2.4/site-packages/numpy
rm -rf /usr/local/src/numpy-trunk/build

Again, for more details see George Nurser's writeup.

Contents of both site.cfg files for my install:

[DEFAULT]
library_dirs = /usr/local/lib
include_dirs = /usr/local/include

[blas]
blas_libs = cblas, acml
library_dirs = /opt/acml3.6.0/gnu32/lib
include_dirs = /usr/local/src/CBLAS/src

[lapack]
language = f77
lapack_libs = acml
library_dirs = /opt/acml3.6.0/gnu32/lib
include_dirs = /opt/acml3.6.0/gnu32/include
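Since the same site.cfg has to exist in two places, it is easy to let the copies drift apart; a small script can emit both from one template. This is my own sketch: the template text is exactly the contents listed above, while the destination layout (site.cfg in the trunk and in numpy/distutils/) is my reading of the checkout structure.

```python
import os

# site.cfg contents from the tutorial; both copies must stay identical.
SITE_CFG = """\
[DEFAULT]
library_dirs = /usr/local/lib
include_dirs = /usr/local/include

[blas]
blas_libs = cblas, acml
library_dirs = /opt/acml3.6.0/gnu32/lib
include_dirs = /usr/local/src/CBLAS/src

[lapack]
language = f77
lapack_libs = acml
library_dirs = /opt/acml3.6.0/gnu32/lib
include_dirs = /opt/acml3.6.0/gnu32/include
"""

def write_site_cfg(trunk_dir):
    # one copy in the trunk, one in the distutils subdirectory
    for sub in ("", os.path.join("numpy", "distutils")):
        target = os.path.join(trunk_dir, sub, "site.cfg")
        os.makedirs(os.path.dirname(target) or ".", exist_ok=True)
        with open(target, "w") as f:
            f.write(SITE_CFG)

# On the EC2 instance:
# write_site_cfg("/usr/local/src/numpy-trunk")
```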

We execute the actual compile with the following:

python setup.py build
python setup.py install
cd ../
rm -R numpy-trunk

Scipy Install

Take a look at the instructions for the lapack and blas environment as described here:


I found that no modifications from the defaults were needed; the install should pick up the libraries built in the previous steps.

Install Scipy from source:

cd /usr/local/src
svn co http://svn.scipy.org/svn/scipy/trunk/ ./scipy-trunk
cd scipy-trunk
python setup.py build
python setup.py install
cd ../
rm -R scipy-trunk

Verify numpy and scipy work and are using the correct libraries:

-bash-3.1# python
Python 2.4.4 (#1, Oct 23 2006, 13:58:00)
[GCC 4.1.1 20061011 (Red Hat 4.1.1-30)] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> import numpy, scipy
>>> numpy.show_config()
blas_info:
    libraries = ['cblas', 'acml']
    library_dirs = ['/opt/acml3.6.0/gnu32/lib']
    language = f77

lapack_info:
    libraries = ['acml']
    library_dirs = ['/opt/acml3.6.0/gnu32/lib']
    language = f77

blas_opt_info:
    libraries = ['cblas', 'acml']
    library_dirs = ['/opt/acml3.6.0/gnu32/lib']
    language = f77
    define_macros = [('NO_ATLAS_INFO', 1)]

lapack_opt_info:
    libraries = ['acml', 'cblas', 'acml']
    library_dirs = ['/opt/acml3.6.0/gnu32/lib']
    language = f77
    define_macros = [('NO_ATLAS_INFO', 1)]

>>> scipy.show_config()
blas_info:
    libraries = ['cblas', 'acml']
    library_dirs = ['/opt/acml3.6.0/gnu32/lib']
    language = f77

lapack_info:
    libraries = ['acml']
    library_dirs = ['/opt/acml3.6.0/gnu32/lib']
    language = f77

blas_opt_info:
    libraries = ['cblas', 'acml']
    library_dirs = ['/opt/acml3.6.0/gnu32/lib']
    language = f77
    define_macros = [('NO_ATLAS_INFO', 1)]

lapack_opt_info:
    libraries = ['acml', 'cblas', 'acml']
    library_dirs = ['/opt/acml3.6.0/gnu32/lib']
    language = f77
    define_macros = [('NO_ATLAS_INFO', 1)]

fftw3_info:
    libraries = ['fftw3']
    library_dirs = ['/usr/lib']
    define_macros = [('SCIPY_FFTW3_H', None)]
    include_dirs = ['/usr/include']
Now that we have numpy and scipy, we can install matplotlib:

 yum -y install python-matplotlib

We can benchmark the performance improvement from the ACML libraries using a script George Nurser provided:

EC2 image with Default Numpy:

-bash-3.1# python bench_blas2.py
Tests      x.T*y    x*y.T    A*x       A*B       A.T*x    half     2in2
Dimension: 5
Array      1.8900   0.4300   0.3900    0.4300    1.2600   1.4500   1.6000
Matrix     6.6100   2.0900   0.9100    0.9400    1.4200   3.1300   3.8100
Dimension: 50
Array     18.8300   2.1600   0.7000   12.8300    2.3100   1.7300   1.9000
Matrix    66.3900   3.9900   1.2200   13.4600    1.7500   3.4300   4.1100
Dimension: 500
Array      1.9800   5.1500   0.6600  125.9200    7.5600   0.3500   0.6700
Matrix     6.8400   5.2200   0.6700  125.9700    0.9000   0.4000   0.7300

EC2 image with Numpy built with ACML:

-bash-3.1# python bench_blas2.py
Tests      x.T*y    x*y.T    A*x       A*B       A.T*x    half     2in2
Dimension: 5
Array      2.0300   0.6500   0.3800    0.7100    1.2000   1.4400   1.5200
Matrix     6.7500   2.4100   0.8400    1.2400    1.3800   3.0300   3.5600
Dimension: 50
Array     20.4500   2.7500   0.5900   11.8300    2.2200   1.7300   1.8000
Matrix    68.2400   4.5900   1.1100   12.4200    1.7100   3.3600   3.9100
Dimension: 500
Array      2.1800   5.1900   0.5800   77.1200    7.4200   0.3300   0.6900
Matrix     6.9500   5.2800   0.5900   77.3400    0.6200   0.3800   0.7500
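bench_blas2.py itself is not reproduced here, but a rough stand-in (my own sketch, not George Nurser's script) can exercise the A*B case, which is where the ACML build shows the biggest win in the tables above (roughly 77 s versus 126 s at dimension 500):

```python
# Time repeated dense matrix products at a few sizes; with an ACML-backed
# numpy, this is the operation that speeds up most in the tables above.
import time
import numpy

for n in (5, 50, 500):
    a = numpy.random.rand(n, n)
    b = numpy.random.rand(n, n)
    start = time.time()
    for _ in range(10):
        numpy.dot(a, b)
    print("dim %3d: 10 x A*B took %.4f s" % (n, time.time() - start))
```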

MPICH2 Install

Install mpich2 from source:

cd /usr/local/src
wget http://www-unix.mcs.anl.gov/mpi/mpich2/downloads/mpich2-1.0.5.tar.gz
tar -xzvf mpich2-1.0.5.tar.gz
cd mpich2-1.0.5
./configure
make
make install

PyMPI install

Build pyMPI from source (see http://www.llnl.gov/computing/develop/python/pyMPI.pdf):

cd /usr/local/src
wget 'http://downloads.sourceforge.net/pympi/pyMPI-2.4b2.tar.gz?modtime=1122458975&big_mirror=0'
tar -xzvf pyMPI-2.4b2.tar.gz
cd pyMPI-2.4b2

The basic build and install is invoked with:

./configure --with-includes=-I/usr/local/include
make
make install

This will build a default version of pyMPI based on the python program the configure script finds in your path. It also tries to find mpcc, mpxlc, or mpicc to do the compiling and linking with the MPI libraries.
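To sanity-check the build later, here is a minimal pyMPI script. This is a sketch: on the cluster it would be launched with something like "mpirun -np 4 pyMPI hello.py", while under plain CPython (no mpi module) it falls back to a single fake rank.

```python
# hello.py: report this process's rank within the MPI world.
try:
    import mpi                      # module provided by the pyMPI interpreter
    rank, size = mpi.rank, mpi.size
except ImportError:                 # not running under pyMPI: single "node"
    rank, size = 0, 1

print("Hello from rank %d of %d" % (rank, size))
```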

PyTables Install

Install PyTables from source (requires the previous yum install of hdf5-devel):

cd /usr/local/src
wget http://downloads.sourceforge.net/pytables/pytables-1.4.tar.gz
tar -xvzf pytables-1.4.tar.gz
cd pytables-1.4/
python setup.py build_ext --inplace
python setup.py install

Configuration and Cleanup

To help reduce the image size, let's remove the compressed source files we downloaded:

-bash-3.1# rm ec2-ami-tools.noarch.rpm mpich2-1.0.5.tar.gz pyMPI-2.4b2.tar.gz acml-3-6-0-gnu-32bit.tgz contents-acml-3-6-0-gnu-32bit.tgz pytables-1.4.tar.gz

For the MPICH configuration we need to add a couple of additional files to the base install.

Create the file mpd.conf as follows (with your own password)

cd /etc
touch .mpd.conf
chmod 600 .mpd.conf
nano .mpd.conf


Next we set the ssh option "StrictHostKeyChecking" to "no". This is an ugly hack to avoid tediously confirming each compute node's host key. I'm assuming these EC2 nodes will only connect to each other; please be careful.

See the following article for why this is risky: http://www.securityfocus.com/infocus/1806

Edit the ssh_config file:

nano /etc/ssh/ssh_config

Change the following line:

 #StrictHostKeyChecking ask

to:

 StrictHostKeyChecking no

Changing this setting avoids having to manually accept each compute node later on:

The authenticity of host 'domu-12-31-34-00-00-3a.usma2.compute.amazonaws.com' can't be established.
RSA key fingerprint is 58:ae:0b:e7:a6:d8:d0:00:4f:ca:22:53:42:d5:e5:22.
Are you sure you want to continue connecting (yes/no)? yes

Creating a non-root user

We should run the MPI process as a non-root user, so we will create a “lamuser” account on the instance (in another version of this tutorial, I used LAM instead of MPICH2). Substitute your own cert, keys, and passwords.

-bash-3.1# adduser lamuser
-bash-3.1# passwd lamuser
Changing password for user lamuser.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.

Now configure the .bash_profile and .bashrc:

-bash-3.1# cd /home/lamuser/
-bash-3.1# ls
-bash-3.1# ls -a
./ ../ .bash_logout .bash_profile .bashrc
-bash-3.1# nano .bash_profile

The contents of bash_profile should be as follows (uncomment the LAM settings if you want to use LAM MPI instead of MPICH2):

-bash-3.1# more .bash_profile

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

LAMRSH="ssh -x"
export LAMRSH

# LD_LIBRARY_PATH="/usr/local/lam-7.1.2/lib/"
# export LD_LIBRARY_PATH

# PATH=/usr/local/lam-7.1.2/bin:$PATH
# MANPATH=/usr/local/lam-7.1.2/man:$MANPATH

export PATH
# export MANPATH

We need to give the lamuser the same MPI configuration we created for the root user earlier…

Create the file .mpd.conf as follows (with your own password for the secretword):

cd /home/lamuser
touch .mpd.conf
chmod 600 .mpd.conf
nano .mpd.conf


The last step is to set ownership on the directory contents to the user:

chown -R lamuser:lamuser /home/lamuser

Adding the S3 Libraries

Download the developer tools for S3 to the instance:

-bash-3.1# wget http://developer.amazonwebservices.com/connect/servlet/KbServlet/download/134-102-759/s3-example-python-library.zip
-bash-3.1# unzip s3-example-python-library.zip
Archive: s3-example-python-library.zip

  creating: s3-example-libraries/python/
 inflating: s3-example-libraries/python/README 
 inflating: s3-example-libraries/python/S3.py 
 inflating: s3-example-libraries/python/s3-driver.py 
 inflating: s3-example-libraries/python/s3-test.py

Rebundle the compute node image

We are going to make this a public AMI, so we need to clear out some data first.

Here’s the advice from the Amazon EC2 Developer Guide:

Protect Yourself

We have looked at making shared AMIs safe, secure and useable for the users who launch them, but if you publish a shared AMI you should also take steps to protect yourself against the users of your AMI. This section looks at steps you can take to do this.

We recommend against storing sensitive data or software on any AMI that you share. Users who launch a shared AMI potentially have access to rebundle it and register it as their own. Follow these guidelines to help you to avoid some easily overlooked security risks:

   * Always delete the shell history before bundling. If you attempt more than one bundle upload in the same image the shell history will contain your secret access key.
   * Bundling a running instance requires your private key and X509 certificate. Put these and other credentials in a location that will not be bundled (such as the ephemeral store).
   * Exclude the ssh authorized keys when bundling the image. The Amazon public images store the public key an instance was launched with in that instance’s ssh authorized keys file.

ssh into the modified image and clean up:

rm -f /root/.ssh/authorized_keys
rm -f /home/lamuser/.ssh/authorized_keys
rm ~/.bash_history
rm /var/log/secure
rm /var/log/lastlog

The ec2-bundle-vol command has some optional parameters we will use:

-bash-3.1# ec2-bundle-vol --help
Usage: ec2-bundle-vol PARAMETERS

   -c, --cert PATH                  The path to the user's PEM encoded RSA public key certificate file.
   -k, --privatekey PATH            The path to the user's PEM encoded RSA private key file.
   -u, --user USER                  The user's EC2 user ID (Note: AWS account number, NOT Access Key ID).

   -e, --exclude DIR1,DIR2,…        A list of absolute directory paths to exclude. E.g. "dir1,dir2,dir3". Overrides "--all".
   -a, --all                        Include all directories, including those on remotely mounted filesystems.
   -p, --prefix PREFIX              The filename prefix for bundled AMI files. E.g. "my-image". Defaults to "image".
   -s, --size MB                    The size, in MB (1024 * 1024 bytes), of the image file to create. The maximum size is 10240 MB.
   -v, --volume PATH                The absolute path to the mounted volume to create the bundle from. Defaults to "/".
   -d, --destination PATH           The directory to create the bundle in. Defaults to "/tmp".
       --ec2cert PATH               The path to the EC2 X509 public key certificate. Defaults to "/etc/aes/amiutil/cert-ec2.pem".
       --debug                      Display debug messages.
   -h, --help                       Display this help message and exit.
   -m, --manual                     Display the user manual and exit.

Execute the same bundle command we ran previously, but give the image a prefix name:

-bash-3.1# ec2-bundle-vol -d /mnt -p fc6-python-mpi-node -k /mnt/pk-FOOXYZ.pem -c /mnt/cert-BARXYZ.pem -u 99999ABC -s 5536
Copying / into the image file /mnt/image…
Excluding:

1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.015051 seconds, 69.7 MB/s
mke2fs 1.39 (29-May-2006)
warning: 256 blocks unused.

Bundling image file…
Splitting /mnt/image.tar.gz.enc…
Created fc6-python-mpi-node.part.00
Created fc6-python-mpi-node.part.01
Created fc6-python-mpi-node.part.02
Created fc6-python-mpi-node.part.03
Created fc6-python-mpi-node.part.04
Created fc6-python-mpi-node.part.05
Created fc6-python-mpi-node.part.06
Created fc6-python-mpi-node.part.07
Created fc6-python-mpi-node.part.08
Created fc6-python-mpi-node.part.09
Created fc6-python-mpi-node.part.10
Created fc6-python-mpi-node.part.11
Created fc6-python-mpi-node.part.12
Created fc6-python-mpi-node.part.13
Created fc6-python-mpi-node.part.14
…<snip>
Created fc6-python-mpi-node.part.39
Created fc6-python-mpi-node.part.40
Created fc6-python-mpi-node.part.41
Generating digests for each part…
Digests generated.
Creating bundle manifest…
ec2-bundle-vol complete.

Now remove the keys and delete the bash history:

-bash-3.1# rm /mnt/pk-*.pem /mnt/cert-*.pem

Upload the keyless node AMI to Amazon S3

bash-3.1# ec2-upload-bundle -b datawrangling-images -m /mnt/fc6-python-mpi-node.manifest.xml -a 1AFOOBARTEST -s F0Bar/T3stId

Setting bucket ACL to allow EC2 read access …
Uploading bundled AMI parts to https://s3.amazonaws.com:443/datawrangling-images …
Uploaded image.part.00 to https://s3.amazonaws.com:443/datawrangling-images/fc6-python-mpi-node.part.00.
Uploaded image.part.01 to https://s3.amazonaws.com:443/datawrangling-images/fc6-python-mpi-node.part.01.
…
Uploaded image.part.48 to https://s3.amazonaws.com:443/datawrangling-images/fc6-python-mpi-node.part.48.
Uploaded image.part.49 to https://s3.amazonaws.com:443/datawrangling-images/fc6-python-mpi-node.part.49.
Uploading manifest …
Uploaded manifest to https://s3.amazonaws.com:443/datawrangling-images/fc6-python-mpi-node.manifest.xml.
ec2-upload-bundle complete

The upload will take several minutes…

Register Compute Node Image

To register the new image with Amazon EC2, we switch back to our local machine and run the following:

peter-skomorochs-computer:~ pskomoroch$ ec2-register datawrangling-images/fc6-python-mpi-node.manifest.xml
IMAGE   ami-3e836657

Included in the output is an AMI identifier for our MPI compute node image (ami-3e836657 in the example above). In the next part of this tutorial, we will run some basic tests of MPI and pyMPI on EC2 using this image. In part 3, we will add some python scripts to automate routine cluster maintenance and show some computations which we can run with the cluster.

16 Responses to “On-Demand MPI Cluster with Python and EC2 (part 1 of 3)”

     Michael Fairchild
     March 18th, 2007 | 4:01 pm
     Awesome! Thanks for writing this up for the rest of us. I am looking forward to benchmarking some mpi jobs on ec2 and comparing them to my own beowulf.
     Do you have a version of your mpi enabled image you could make public? You have laid out all the steps to make one, but if you had a public image we could boot into that would be great.
     Thanks a lot, looking forward to parts 2 and 3 :-)
     Peter Skomoroch
     March 18th, 2007 | 6:37 pm
     I’ll try to bundle a public image this week, I just need to clean out my working directories first. I think this basic approach will be good for benchmarking MPI, but I’m looking forward to someone making an image with one of the real cluster distributions as well.
     Stu Gott
     March 26th, 2007 | 12:24 pm
     Great writeup! You might want to check out rBuilder Online. AMI images you create are automatically uploaded to Amazon's S3 and can be booted on Amazon's EC2, saving developers the trouble of deploying appliances by hand. All images created on rBuilder Online are freely available. The MPI tools you mention haven't been packaged in Conary by anybody yet, but that should be a SMOP.
     Michael Creel
     March 29th, 2007 | 6:31 am
     Pretty interesting stuff. I’ll try to get ParallelKnoppix working with this. Looks like a great way to do some sporadic embarrassingly parallel work.
     Peter Skomoroch
     March 29th, 2007 | 11:02 am
     Let me know how it goes, that would simplify things a lot. Right now I use some client side python scripts to configure the cluster based on the list of EC2 instances I start from my laptop (I will be posting that code along with an AMI later this week).
     I started off on my MPI kick with a small Parallel Knoppix cluster at home and would like to eventually have the same system on EC2. There are already some EC2 debian base images in the public AMI section so it should be possible to get up and running.
     As a relative newbie, I wanted to avoid digging into the PK build and just get something running quickly, but I think the ideal setup would be to find a way to get the PK node auto-discover working and do a network launch of the mpi cluster within a single security group on EC2. I suspect there is a bit of work in getting the iptables configuration right. EC2 uses its own custom setup instead of the standard iptables config.

     Debian iptables thread:


     Debian AMIs:

     March 29th, 2007 | 4:11 pm
     Nice post dude. Make your comments font one size larger. :)
     Michael Creel
     March 30th, 2007 | 12:11 am
     I’m on the wait list for EC2, so I don’t know when I’ll be trying this out. I suspect that this will not be hard to get working. I think that virtual clusters like this are going to be pretty important tools in the near future, or maybe they already are in private businesses.
     Data Wrangling » MPI Cluster with Python and Amazon EC2 (part 2 of 3)
     April 9th, 2007 | 2:33 pm
     […] The file contains some quick scripts I threw together using the AWS Python example code. This is the approach I’m using to bootstrap an MPI cluster until one of the major linux cluster distros is ported to run on EC2. Details on what is included in the public AMI was given in Part 1 of the tutorial, Part 3 will cover cluster operation on EC2 in more detail and show how to use Python to carry out some neat parallel computations. […]
     Mark J.
     April 17th, 2007 | 6:18 pm
     Hi Peter,
     I have a question regarding your MPI setup. I did a benchmark of a simple application on a single CPU, and found that the elapsed time (wall-clock time) of the application varied widely, by more than 40%, even though the CPU time was the same. It is my belief that the virtual machine is not guaranteed a set slice of CPU cycles by Xen. Given this, if a parallel application is doing frequent communication, during its solution between multiple instances, the overall performance could be very unpredictable. Not only that, since the user is charged based on the elapsed time for each instance, the total charges for a project are also hard to estimate.
     Do you have any insight into the above issue, or any experiences to share? Thanks.
     Peter Skomoroch
     April 17th, 2007 | 7:35 pm
     I haven’t looked into the Xen/cpu time issue, but I definitely expect latency to be an issue given the unpredictable nature of which nodes you are assigned, their proximity to each other, and the usage of bandwidth on the shared boxes. I’m planning on running some statistics this week on the distributions of job run times, hopefully it will be somewhat predictable.
     Mark J.
     April 19th, 2007 | 5:16 pm
     Another issue that would probably merit a detailed analysis is the cost structure of using EC2, in its current form, versus a fully-owned cluster. For a small consulting shop running simulations on 8 EC2 instances, it comes out to $0.80/hour, or approximately $1600/year assuming a typical 8-hour simulation per day investigating various designs etc. However, each instance is only the equivalent of a 1.7GHz Xeon (SPECfp 700). Compare that with a dual-core Intel Core2 E6700, which has a single-core SPECfp rating of 2700 and amounts to the same total compute power as the 8-instance EC2 cluster. Such a machine can be purchased outright for something like $2000.00 with 4GB of memory.
     I think for memory-bound applications, EC2 makes sense, where each VM has 1.7GB of RAM, and with 8 instances, the total RAM available becomes almost 12GB. From a transaction processing, or database-driven application point of view, EC2 may exhibit excellent cost-effectiveness. For a compute-intensive application however, it does not seem to be a very compelling argument.
     While my simplistic comparison does not account for maintenance, power, backup infrastructure, etc for the fully-owned machine, I would not expect a dramatic difference.
     Peter Skomoroch
     April 19th, 2007 | 5:42 pm
     Good point, I will have to run the numbers on that comparison, but I expect EC2 to come out on top for large clusters which are only used intermittently (unless the latency kills it). Also, we might be underestimating the power, cabling, and cooling costs - especially for larger clusters. All that aside, it looks like your estimate is pretty close, Jeff Layton at ClusterMonkey has a post from January, Kronos Pricing Redux, which gives numbers for a 4 node cluster similar to the one you describe, and he puts the price tag at $2,505.44*
     – *This is a correction; I originally quoted the 8-node, 16-core system price of $4,563.72 –
     I think the sweet-spot for EC2 will be for shoestring 2-3 person analytical or bioinformatics startups where they need to run occasional large jobs (50-100 nodes), but can’t afford to build a large permanent cluster without additional funding.
     For instance, I’d rather not spend $30K right now for a 100 core cluster to run a few large jobs a week…not to mention heating/cooling bills and construction time. If I could get comparable performance on Amazon, it would run me around $1K per month to get past the proof-of-concept stage (assuming 3 eight hour jobs per week). Once I had the capital and space, I could transition to my own large cluster.
     Mark J.
     May 4th, 2007 | 6:09 pm
     Any update on the test? Would be interesting to see if something more substantial actually performs well on EC2.
     Peter Skomoroch
     May 5th, 2007 | 6:30 am
     Mark, I’ve just wrapped up some projects this week and should have time to check this out now, I’ll update the blog when I have an analysis ready.
     LVS on AWS « Daily Curmudgeonry
     June 6th, 2007 | 6:32 am
     […] RightScale might be doing something similar. This guy describes how to run an MPI cluster on EC2. WeoCEO appears to do some load balancing on EC2. I’ll hunt around for more. Posted by projectshave Filed in Software Architecture […]
     Procrastination Kills » Library = My Best Friend
     July 2nd, 2007 | 4:41 pm
     […] Beyond the world of books, I’ve been keeping busy with a lot of road cycling (an addictive hobby, you should know), as well as continued work on my side business ventures. The tutoring service isn’t doing so great at the moment, as nobody has contacted me yet to hire me on. I think I need to start looking into other forms of advertising to get this thing rolling. My custom chalk bag store is still a work in progress, but expect to see something here in the next month or so. I’ve also begun learning my way around Amazon Web Services. I’m especially interested in the applications of the the Elastic Computing Cloud to scientific computing applications, as described at places like this. I think it has some potential to change the way academic/scientific computing is handled at a small scale. We’ll see how it goes. […]

MPI Cluster with Python and Amazon EC2 (part 2 of 3)

Posted by Peter Skomoroch on April 09th 2007 to Cluster Computing, Python, MPI, Amazon EC2, numpy

Today I posted a public AMI which can be used to run a small Beowulf cluster on Amazon EC2 and do some parallel computations with C, Fortran, or Python. If you prefer another language (Java, Ruby, etc.), just install the appropriate MPI library and rebundle the EC2 image. The following set of Python scripts automates the launch and configuration of an MPI cluster on EC2 (currently limited to 20 nodes while EC2 is in beta):

Update (7-24-07): I’ve made some important bug fixes to the scripts to address issues mentioned in the comments. See the README file for details

   * AmazonEC2_MPI_scripts_1_5.tar.gz

The file contains some quick scripts I threw together using the AWS Python example code. This is the approach I’m using to bootstrap an MPI cluster until one of the major linux cluster distros is ported to run on EC2. Details on what is included in the public AMI were covered in Part 1 of the tutorial, Part 3 will cover cluster operation on EC2 in more detail and show how to use Python to carry out some neat parallel computations.

The cluster launch process is pretty simple: once you have an Amazon EC2 account and keys, just download the Python scripts and you can be running a compute cluster in a few minutes. In a later post I will look at cluster bandwidth and performance in detail. If you only have an occasional need to run large jobs, $2/hour for a 20 node MPI cluster on EC2 is not a bad deal considering the ~$20K price of building your own comparable system.
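The $2/hour figure works out as follows (a back-of-envelope sketch; the $0.10 per small-instance-hour rate and the $20K hardware estimate are the figures assumed above, not an official price list):

```python
# Back-of-envelope cost comparison, using integer cents to avoid float noise.
nodes = 20
price_cents_per_instance_hour = 10            # assumed $0.10/hour small-instance rate
cluster_cost_cents_per_hour = nodes * price_cents_per_instance_hour   # $2.00/hour
hardware_cost_cents = 2000000                 # ~$20K for a comparable owned cluster
breakeven_hours = hardware_cost_cents // cluster_cost_cents_per_hour
print("cluster: $%.2f/hour" % (cluster_cost_cents_per_hour / 100.0))
print("break-even after %d cluster-hours" % breakeven_hours)
```

So EC2 wins as long as total usage stays under roughly 10,000 cluster-hours, ignoring power, cooling, and maintenance on the owned machine.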


  1. Get a valid Amazon EC2 account
  2. Complete the “getting started guide” tutorial on Amazon EC2 and create all needed web service accounts, authorizations, and keypairs
  3. Download and install the Amazon EC2 Python library
  4. Download the Amazon EC2 MPI cluster management scripts

[править] Part 2

[править] Launching the EC2 nodes

First, unzip the cluster management scripts and modify the configuration parameters in EC2config.py, substituting your own EC2 keys and changing the cluster size if desired:

# replace these with your AWS keys

# change this to your keypair location (see the EC2 getting started guide tutorial on using ec2-add-keypair)
KEYNAME = "gsg-keypair"
KEY_LOCATION = "/Users/pskomoroch/id_rsa-gsg-keypair"

# remove these next two lines when you've updated your credentials.
print "update %s with your AWS credentials" % sys.argv[0]
sys.exit()

MASTER_IMAGE_ID = "ami-3e836657"
IMAGE_ID = "ami-3e836657"


Launch the EC2 cluster by running the ec2-start-cluster.py script from your local machine:

peter-skomorochs-computer:~/AmazonEC2_MPI_scripts pskomoroch$ ./ec2-start-cluster.py

image ami-3e836657
master image ami-3e836657
----- starting master -----
RESERVATION r-275eb84e 027811143419 default
INSTANCE i-0ed33167 ami-3e836657 pending
----- starting workers -----
RESERVATION r-265eb84f 027811143419 default
INSTANCE i-01d33168 ami-3e836657 pending
INSTANCE i-00d33169 ami-3e836657 pending
INSTANCE i-03d3316a ami-3e836657 pending
INSTANCE i-02d3316b ami-3e836657 pending

Verify the EC2 nodes are running with ./ec2-check-instances.py:

peter-skomorochs-computer:~/AmazonEC2_MPI_scripts pskomoroch$ ./ec2-check-instances.py

----- listing instances -----
RESERVATION r-aec420c7 027811143419 default
INSTANCE i-ab41a6c2 ami-3e836657 domU-12-31-33-00-02-5A.usma1.compute.amazonaws.com running
INSTANCE i-aa41a6c3 ami-3e836657 domU-12-31-33-00-01-E3.usma1.compute.amazonaws.com running
INSTANCE i-ad41a6c4 ami-3e836657 domU-12-31-33-00-03-AA.usma1.compute.amazonaws.com running
INSTANCE i-ac41a6c5 ami-3e836657 domU-12-31-33-00-04-19.usma1.compute.amazonaws.com running
INSTANCE i-af41a6c6 ami-3e836657 domU-12-31-33-00-03-E3.usma1.compute.amazonaws.com running

[править] Cluster Configuration and Booting MPI

Run ec2-mpi-config.py to configure MPI on the nodes; this will take a minute or two depending on the number of nodes.

peter-skomorochs-computer:~/AmazonEC2_MPI_scripts pskomoroch$ ./ec2-mpi-config.py

---- MPI Cluster Details ----
Numer of nodes = 5
Instance= i-ab41a6c2 hostname= domU-12-31-33-00-02-5A.usma1.compute.amazonaws.com state= running
Instance= i-aa41a6c3 hostname= domU-12-31-33-00-01-E3.usma1.compute.amazonaws.com state= running
Instance= i-ad41a6c4 hostname= domU-12-31-33-00-03-AA.usma1.compute.amazonaws.com state= running
Instance= i-ac41a6c5 hostname= domU-12-31-33-00-04-19.usma1.compute.amazonaws.com state= running
Instance= i-af41a6c6 hostname= domU-12-31-33-00-03-E3.usma1.compute.amazonaws.com state= running

The master node is ec2-72-44-46-78.z-2.compute-1.amazonaws.com

…<snip> …

Configuration complete, ssh into the master node as lamuser and boot the cluster:

$ ssh lamuser@ec2-72-44-46-78.z-2.compute-1.amazonaws.com
> mpdboot -n 5 -f mpd.hosts
> mpdtrace
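For reference, the mpd.hosts file the config script writes out is nothing exotic: one hostname per line, which mpdboot then reads with `-f mpd.hosts`. A minimal sketch of that step (the hostnames here are placeholders, not live instances):

```python
# Write an mpd.hosts file in the form mpdboot expects: one hostname per line.
# Placeholder hostnames standing in for the EC2 DNS names of your nodes.
hostnames = [
    "domU-12-31-33-00-02-5A.usma1.compute.amazonaws.com",
    "domU-12-31-33-00-01-E3.usma1.compute.amazonaws.com",
]

with open("mpd.hosts", "w") as f:
    for host in hostnames:
        f.write(host + "\n")
```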

Login to the master node, boot the MPI cluster, and test the connectivity:

peter-skomorochs-computer:~/AmazonEC2_MPI_scripts pskomoroch$ ssh lamuser@ec2-72-44-46-78.z-2.compute-1.amazonaws.com

Sample Fedora Core 6 + MPICH2 + Numpy/PyMPI compute node image


---- Modified From Marcin’s Cool Images: Cool Fedora Core 6 Base + Updates Image v1.0 ----

see http://developer.amazonwebservices.com/connect/entry.jspa?externalID=554&categoryID=101

Like Marcin’s image, standard disclaimer applies, use as you please…

Amazon EC2 MPI Compute Node Image Copyright (c) 2006 DataWrangling. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

   * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
   * Redistributions in binary form must reproduce the above
      copyright notice, this list of conditions and the following
      disclaimer in the documentation and/or other materials provided
      with the distribution.
   * Neither the name of the DataWrangling nor the names of any
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.


The results of the mpdtrace command show that we have an MPI cluster running on 5 nodes. In the next section, we will verify that we can run some basic MPI tasks. For more detailed information on these commands (and MPI in general), see the MPICH2 documentation.

[править] Testing the MPI Cluster

Next we execute a sample C program bundled with MPICH2 which estimates pi using the cluster:

[lamuser@domU-12-31-33-00-02-5A ~]$ mpiexec -n 5 /usr/local/src/mpich2-1.0.5/examples/cpi
Process 0 of 5 is on domU-12-31-33-00-02-5A
Process 1 of 5 is on domU-12-31-33-00-01-E3
Process 2 of 5 is on domU-12-31-33-00-03-E3
Process 3 of 5 is on domU-12-31-33-00-03-AA
Process 4 of 5 is on domU-12-31-33-00-04-19
pi is approximately 3.1415926544231230, Error is 0.0000000008333298
wall clock time = 0.007539
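What cpi actually computes is a midpoint-rule integration of 4/(1+x²) over [0,1], with the subintervals divided among the ranks and the partial sums combined by MPI_Reduce. A serial Python sketch of the same numerical computation (no MPI required):

```python
import math

def estimate_pi(n):
    """Midpoint-rule estimate of pi = integral of 4/(1+x^2) over [0,1]."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = h * (i + 0.5)  # midpoint of the i-th subinterval
        total += 4.0 / (1.0 + x * x)
    return h * total

# In cpi, each of the 5 ranks would sum every 5th subinterval instead of all n.
print(abs(estimate_pi(10000) - math.pi))  # error on the order of 1e-9
```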

Test the message travel time for the ring of nodes you just created:

[lamuser@domU-12-31-33-00-02-5A ~]$ mpdringtest 100
time for 100 loops = 0.14577794075 seconds
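Those numbers imply a rough per-hop latency between the mpd daemons (this arithmetic is my own, not from the original post):

```python
# mpdringtest sends one message around the ring of mpd daemons `loops` times,
# so the total time covers loops * nodes daemon-to-daemon hops.
loops, nodes, total_seconds = 100, 5, 0.14577794075
per_loop = total_seconds / loops   # seconds per trip around the 5-node ring
per_hop = per_loop / nodes         # seconds per daemon-to-daemon hop
print("%.2f ms/loop, %.3f ms/hop" % (per_loop * 1e3, per_hop * 1e3))
```

That is, about 1.5 ms per loop, or roughly 0.3 ms per hop on this particular set of instances.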

Verify that the cluster can run a multiprocess job:

[lamuser@domU-12-31-33-00-02-5A ~]$ mpiexec -l -n 5 hostname
3: domU-12-31-33-00-03-AA
0: domU-12-31-33-00-02-5A
1: domU-12-31-33-00-01-E3
4: domU-12-31-33-00-04-19
2: domU-12-31-33-00-03-E3

[править] Testing PyMPI

Let’s verify that the PyMPI install is working with our running cluster of 5 nodes. Execute the following on the master node:

[lamuser@domU-12-31-33-00-02-5A ~]$ mpirun -np 5 pyMPI /usr/local/src/pyMPI-2.4b2/examples/fractal.py
Starting computation (groan)
process 1 done with computation!!
process 3 done with computation!!
process 4 done with computation!!
process 2 done with computation!!
process 0 done with computation!!
Header length is 54
BMP size is (400, 400)
Data length is 480000
[lamuser@domU-12-31-33-00-02-5A ~]$ ls
hosts  id_rsa.pub  mpd.hosts  output.bmp

This produced the following fractal image (output.bmp):


We will show some more examples using PyMPI in the next post.

[править] Changing the Cluster Size

If we want to modify the number of nodes in the cluster, we first need to shut down the MPI ring from the master node as follows:

[lamuser@domU-12-31-33-00-02-5A ~]$ mpdallexit
[lamuser@domU-12-31-33-00-02-5A ~]$ mpdcleanup

Once this is done, you can start additional instances of the public AMI from your local machine, then re-run the ec2-mpi-config.py script and reboot the cluster.

[править] Cluster Shutdown

Run ec2-stop-cluster.py to stop all EC2 MPI nodes. If you just want to stop the slave nodes, run ec2-stop-slaves.py.

peter-skomorochs-computer:~/AmazonEC2_MPI_scripts pskomoroch$ ./ec2-stop-cluster.py
This will stop all your EC2 MPI images, are you sure (yes/no)? yes

----- listing instances -----
RESERVATION r-aec420c7 027811143419 default
INSTANCE i-ab41a6c2 ami-3e836657 domU-12-31-33-00-02-5A.usma1.compute.amazonaws.com running
INSTANCE i-aa41a6c3 ami-3e836657 domU-12-31-33-00-01-E3.usma1.compute.amazonaws.com running
INSTANCE i-ad41a6c4 ami-3e836657 domU-12-31-33-00-03-AA.usma1.compute.amazonaws.com running
INSTANCE i-ac41a6c5 ami-3e836657 domU-12-31-33-00-04-19.usma1.compute.amazonaws.com running
INSTANCE i-af41a6c6 ami-3e836657 domU-12-31-33-00-03-E3.usma1.compute.amazonaws.com running

---- Stopping instance Id's ----
Stoping Instance Id = i-ab41a6c2
Stoping Instance Id = i-aa41a6c3
Stoping Instance Id = i-ad41a6c4
Stoping Instance Id = i-ac41a6c5
Stoping Instance Id = i-af41a6c6

Waiting for shutdown ....

----- listing new state of instances -----
RESERVATION r-aec420c7 027811143419 default
INSTANCE i-ab41a6c2 ami-3e836657 domU-12-31-33-00-02-5A.usma1.compute.amazonaws.com shutting-down
INSTANCE i-aa41a6c3 ami-3e836657 domU-12-31-33-00-01-E3.usma1.compute.amazonaws.com shutting-down
INSTANCE i-ad41a6c4 ami-3e836657 domU-12-31-33-00-03-AA.usma1.compute.amazonaws.com shutting-down
INSTANCE i-ac41a6c5 ami-3e836657 domU-12-31-33-00-04-19.usma1.compute.amazonaws.com shutting-down
INSTANCE i-af41a6c6 ami-3e836657 domU-12-31-33-00-03-E3.usma1.compute.amazonaws.com shutting-down

14 Responses to “MPI Cluster with Python and Amazon EC2 (part 2 of 3)”

     Data Wrangling » On-Demand MPI Cluster with Python and EC2 (part 1 of 3)
     April 9th, 2007 | 2:49 pm
     […] Part 2 of 3 […]
     Michael Creel
     April 26th, 2007 | 12:55 am
     Excellent stuff! I’ve gotten started with EC2 and I’ll be trying your images out soon. I doubt that I’ll be trying to make ParallelKnoppix work on EC2, because your approach is the right one, I think. PK is designed to use when the hardware is not known ahead of time. With EC2, the hardware is known, so a tailor-made image is the way to go. Your scripts allow an on-demand cluster to be created in minutes, and that’s all that PK offers, anyway. PK usually needs some remastering so that users can add their own packages. Re-bundling an EC2 image is completely analogous. I’m planning on doing just that, probably starting with your images, and doing some testing of latency on tasks that require different degrees of internode communication. Thanks for all this, it’ll make the rest an easy job.
     Michael Creel
     April 26th, 2007 | 1:14 am
     One question, do you know if something like an NFS shared home directory is possible. Using S3, possibly?
     Michael Creel
     April 26th, 2007 | 2:06 am
     A little report on my trial.
     1) ./ec2-start_cluster.py is not always successful in getting the requested number of nodes to come up. The instances sometimes have status “terminated” before anything is done with them.
     2) When the 5 nodes all come up, I still get a problem with ./ec2-mpi-config.py requesting a root password:
     michael@yosemite:~/ec2/AmazonEC2_MPI_scripts$ ./ec2-mpi-config.py
     —- MPI Cluster Details —-
     Numer of nodes = 5
     Instance= i-e39c7a8a hostname= ec2-72-44-45-138.z-2.compute-1.amazonaws.com state= running
     Instance= i-e29c7a8b hostname= ec2-72-44-45-185.z-2.compute-1.amazonaws.com state= running
     Instance= i-e59c7a8c hostname= ec2-72-44-45-186.z-2.compute-1.amazonaws.com state= running
     Instance= i-e49c7a8d hostname= ec2-72-44-45-122.z-2.compute-1.amazonaws.com state= running
     Instance= i-e79c7a8e hostname= ec2-72-44-45-60.z-2.compute-1.amazonaws.com state= running
     The master node is ec2-72-44-45-138.z-2.compute-1.amazonaws.com
     Writing out mpd.hosts file
     nslookup ec2-72-44-45-138.z-2.compute-1.amazonaws.com
     (0, ‘Server:\t\t158.109.0.1\nAddress:\t158.109.0.1#53\n\nNon-authoritative answer:\nName:\tec2-72-44-45-138.z-2.compute-1.amazonaws.com\nAddress:\n’)
     nslookup ec2-72-44-45-185.z-2.compute-1.amazonaws.com
     (0, ‘Server:\t\t158.109.0.1\nAddress:\t158.109.0.1#53\n\nNon-authoritative answer:\nName:\tec2-72-44-45-185.z-2.compute-1.amazonaws.com\nAddress:\n’)
     nslookup ec2-72-44-45-186.z-2.compute-1.amazonaws.com
     (0, ‘Server:\t\t158.109.0.1\nAddress:\t158.109.0.1#53\n\nNon-authoritative answer:\nName:\tec2-72-44-45-186.z-2.compute-1.amazonaws.com\nAddress:\n’)
     nslookup ec2-72-44-45-122.z-2.compute-1.amazonaws.com
     (0, ‘Server:\t\t158.109.0.1\nAddress:\t158.109.0.1#53\n\nNon-authoritative answer:\nName:\tec2-72-44-45-122.z-2.compute-1.amazonaws.com\nAddress:\n’)
     nslookup ec2-72-44-45-60.z-2.compute-1.amazonaws.com
     (0, ‘Server:\t\t158.109.0.1\nAddress:\t158.109.0.1#53\n\nNon-authoritative answer:\nName:\tec2-72-44-45-60.z-2.compute-1.amazonaws.com\nAddress:\n’)
     Warning: Permanently added ‘ec2-72-44-45-138.z-2.compute-1.amazonaws.com,′ (RSA) to the list of known hosts.
     id_rsa.pub 100% 1675 1.6KB/s 00:00
     root@ec2-72-44-45-138.z-2.compute-1.amazonaws.com’s password:
     This is as far as I can get at the moment. Looks like a minor problem. Cheers, M.
     Peter Skomoroch
     April 26th, 2007 | 11:10 am
     I haven’t had the scripts prompt me for a password before, are you running them from your local machine? The mpi-config script expects the keyname and keypair location to match what was used to start the instance. Take a look at your EC2config.py file and make sure the instances were all started with your own keypair (i used the gsg keypair I created on my laptop in the Amazon “getting started guide” tutorial):
     MASTER_IMAGE_ID = "ami-3e836657"
     IMAGE_ID = "ami-3e836657"
     KEYNAME = "gsg-keypair"
     KEY_LOCATION = "~/id_rsa-gsg-keypair"
     I’m working on an updated version of the scripts and EC2 image which should make things a bit cleaner. Sorry the code is ugly right now in terms of error handling…I just wanted to toss something together to get people started :)
     Michael Creel
     April 27th, 2007 | 4:32 am
     Yep, I run the mpi-config script right after creating the instances, doing just what you suggest. The fact that the instances start up at all seems to me to mean that the keypair information is ok. Do you know if anyone but you has been able to launch a cluster? Very cool stuff. I’m going to be looking into making a Debian AMI that works the same way.
     Peter Skomoroch
     April 27th, 2007 | 7:50 am
     Mike Cariaso modified my scripts to fix some path issues and got it working on a windows laptop, he might have also fixed some other errors I didn’t notice. I haven’t had a chance to try them yet, but you can download the modified scripts here:
     Ralph Giles
     June 28th, 2007 | 6:31 pm
     ===== DO NOT USE THESE SCRIPTS! =====
     This section of ec2-mpi-config.py is a bit problematic:
     os.system('cp %s ~/id_rsa.pub' % KEY_LOCATION)
     os.system('cp ~/id_rsa.pub ~/.ssh/id_rsa')
     This will clobber any existing rsa key on the initiating machine’s account, and will break normal auth on the next login if you have a different default rsa key!
     The script should instead copy the private key directly from KEY_LOCATION to the nodes.
     ===== DO NOT USE THESE SCRIPTS! =====
     Otherwise, way cool. Thanks for putting this tutorial together. We’re trying EC2 clusters out as a way to get quicker feedback from regression tests after changes to our software. Unfortunately, with the one hour granularity I don’t think it will be price competitive. We want 20-100 nodes for about 5 minutes at a time.
     Peter Skomoroch
     June 28th, 2007 | 7:45 pm
     Good catch. Thanks for pointing that out. I just lifted those passwordless ssh lines straight from an MPI tutorial.
     This might solve the clobbering as well (from http://www.maclife.com/forums/topic/61520):
     cat id_rsa.pub >> .ssh/authorized_keys
     “The above command will create the “authorized_keys” file in the “.ssh” directory if that file doesn’t already exist, and it will append the new id_rsa.pub file to it if it does already exist.”
     I’ll add that change to the scripts. Good luck with the regression cluster, I heard Oracle developers do something like that using Condor on otherwise idle desktops (see http://www.cs.wisc.edu/condor/doc/nmi-lisa2006-slides.pdf).
     Ralph Giles
     June 29th, 2007 | 12:00 pm
     Yeah, that would work better. Some more detailed comments:
           Your image has /home/lamuser/.mpd.conf owned by root. I had to chown it to lamuser before I could start mpd.
           Your script passes the public dns names for the nodes into mpd.hosts. For that to work, a hole has to be opened in the firewall for the ports the mpi daemon is using. A simpler solution is to just pass the internal dns names. Then all the traffic happens behind the firewall, which probably also improves latency. (Although my ringtest was noticeably slower than yours, averaging 2.2e-3 seconds/loop, so who knows?)
           I was surprised that when I originally ran ec2-add-keypair in the EC2 tutorial that it uploaded the public key (ok) and printed out the private key (ok I guess) but didn’t print out the public key locally (weird). Your scripts seem to assume the public key is available as id_rsa.pub on the client machine. Shouldn’t this first be copied either from /root/.ssh/authorized_keys on the master node (as installed by amazon) or retrieved through the query interface?
     Is the mutual ssh access required for more than just launching the MPI daemon? If all subsequent traffic goes through the mpi daemons, starting mpd from the client machine, or automatically from the init scripts after pulling mpd.hosts from S3 would save the whole trouble, including uploading the private key at all.
     Peter Skomoroch
     June 29th, 2007 | 1:41 pm
     More good points. I’ve been tied up with some other projects, but it sounds like enough feedback is in to make a revised version of the image and scripts. I expect the latency to vary a bit depending on the random EC2 network topology when a cluster is launched…(instances on the same box vs. over ethernet) that might explain the ringtest. The mutual ssh access was set up since we do a lot of file/data shuffling between nodes outside of MPI.
     Thanks again, looking forward to hearing how the regression test system works out.
     Peter Skomoroch
     July 24th, 2007 | 1:04 pm
     Update (7-24-07): I’ve made some important bug fixes to the scripts to address issues mentioned in the comments.
     Specific changes made:
         * fixed lamuser home directory permissions bug
         * fixed section of ec2-mpi-config.py which clobbered existing rsa keys on the client machine
         * Updated calls of AWS python EC2 library to use API version 2007-01-19
         * fixed mpdboot issue by using amazon internal DNS names in hosts files
         * scripts should now work on windows/cygwin client environments
     After I run some benchmarks, I’m hoping to find some time to add LAM and OpenMPI to the EC2 image along with NFS configuration, C3 cluster tools, Ganglia, and a benchmarking package.
     August 25th, 2007 | 11:09 pm
     What about that Part 3? :)
     Patrick Ball
     October 23rd, 2007 | 12:12 am
     the first two parts really set the stage … Part 3?

[править] Материалы по распределённым вычислениям на Xgu.ru