Showing posts from August, 2018

[MPI] Loop Statement

Loops are among the most frequently used statements in programming. This post covers how to parallelize a loop with MPI. MPI has no parallel loop construct, but a loop can be parallelized easily using the process ID. To do this, MPI_COMM_RANK assigns each process an ID by which it can be identified. Setting up the loop as follows then parallelizes it.

program main
    use mpi
    integer :: ierr   ! error signal variable. Standard value = 0
    integer :: rank   ! the process ID/NUMBER
    integer :: nprocs ! number of processes
    integer :: i      ! loop index
    call MPI_INIT(ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    do i = rank + 1, 11, nprocs
        print *, i, rank
    end do
    call MPI_FINALIZE(ierr)
end program main

On a machine with four processors, each process executes two or three of the print statements. The order is not guaranteed; compiling and running gives output like the following.

>>mpif90 mpitest.f90
>>mpiexec mpitest.exe
           1           0
           5           0
           9           0
           2           1
           3           2
           4           3
           8           3
           7           2
          11           2
           6           1
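The loop above distributes iterations cyclically (rank 0 gets 1, 5, 9, ...). As a minimal sketch of the other common choice, a block distribution where each rank handles one contiguous chunk of the index range, the fragment below computes the chunk bounds by hand; the program name and the count of 11 iterations are simply carried over from the example above.

program main_block
    use mpi
    implicit none
    integer :: ierr, rank, nprocs, i
    integer :: n, chunk, istart, iend
    call MPI_INIT(ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    n = 11                              ! same iteration count as above
    chunk = (n + nprocs - 1) / nprocs   ! ceiling(n / nprocs) iterations per rank
    istart = rank * chunk + 1
    iend = min((rank + 1) * chunk, n)   ! the last rank may get fewer iterations
    do i = istart, iend
        print *, i, rank
    end do
    call MPI_FINALIZE(ierr)
end program main_block

With four processes this assigns ranks 0 to 2 three iterations each and rank 3 two, again two or three print statements per process.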

[MPI] MPI_Scatter, MPI_Gather

Gather purpose: if an array is scattered across all processes in the group and one wants to collect each piece of the array into a specified array on a single process, the call to use is GATHER.

Scatter purpose: on the other hand, if one wants to distribute the data into n segments, where the i-th segment is sent to the i-th process in a group of n processes, use SCATTER. Think of it as the inverse of GATHER.

We will first consider the basic form of these MPI gather/scatter operations, in which the number of data items collected from or sent to processes is the same for all processes, and the data items are arranged contiguously in order of process rank. The C syntax is given below:

int MPI_Gather(const void* sbuf, int scount, MPI_Datatype stype,
               void* rbuf, int rcount, MPI_Datatype rtype,
               int root, MPI_Comm comm)
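As a rough Fortran counterpart to the C prototype above (the program name, the 2-element segment size, and the multiply-by-10 step are made up for illustration), the sketch below scatters a contiguous segment to every process, lets each process modify its own piece, and gathers the pieces back onto the root in rank order.

program scatter_gather
    use mpi
    implicit none
    integer, parameter :: root = 0
    integer :: rank, nprocs, ierr, i
    integer, allocatable :: full(:), piece(:)
    call MPI_INIT(ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    allocate(piece(2))                  ! each process receives 2 elements
    allocate(full(2*nprocs))            ! full array lives on every rank for simplicity
    if (rank == root) then
        full = [(i, i = 1, 2*nprocs)]   ! root fills 1, 2, ..., 2*nprocs
    end if
    ! distribute contiguous 2-element segments, one per process, in rank order
    call MPI_SCATTER(full, 2, MPI_INTEGER, piece, 2, MPI_INTEGER, &
                     root, MPI_COMM_WORLD, ierr)
    piece = piece * 10                  ! every rank works on its own segment
    ! collect the segments back onto root, again in rank order
    call MPI_GATHER(piece, 2, MPI_INTEGER, full, 2, MPI_INTEGER, &
                    root, MPI_COMM_WORLD, ierr)
    if (rank == root) print *, full
    call MPI_FINALIZE(ierr)
end program scatter_gather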

[MPI] Collective communication - MPI_Bcast

· One to all : MPI_Bcast, MPI_Scatter
· All to one : MPI_Reduce, MPI_Gather
· All to all : MPI_Alltoall

- MPI_Bcast : broadcasts a message from the "root" process to all other processes in the same communicator.
- C/C++ : MPI_Bcast( array, 100, MPI_INT, 3, comm);
- Fortran : call MPI_Bcast( array, 100, MPI_INTEGER, 3, comm, ierr)

- Example Source Code

program hello
    use mpi
    integer :: f(10)
    integer :: status(MPI_STATUS_SIZE)
    integer :: rank, src = 0, dest = 1, ierr, i
    call mpi_init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    ! array preparation on the src process
    if (rank == src) then
        f = 0
        do i = 1, 10
            f(i) = f(i) + i
        end do
    end if
    ! broadcast the integer array from src to every process
    call MPI_Bcast(f, 10, MPI_INTEGER, src, MPI_COMM_WORLD, ierr)
    print *, f
    call mpi_finalize(ierr)
end program hello

- Output

>>mpiexec -n 3 mpibcast.exe
           1           2           3
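The example above covers only MPI_Bcast. As a minimal, hedged sketch of the "all to one" pattern listed at the top (the variable names are invented for illustration), the fragment below sums one integer from every process onto rank 0 with MPI_Reduce.

program reduce_example
    use mpi
    implicit none
    integer :: rank, ierr
    integer :: mine, total
    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    mine = rank + 1                     ! each process contributes its own value
    ! sum the contributions of all processes into 'total' on root (rank 0)
    call MPI_REDUCE(mine, total, 1, MPI_INTEGER, MPI_SUM, 0, MPI_COMM_WORLD, ierr)
    if (rank == 0) print *, 'sum =', total
    call MPI_FINALIZE(ierr)
end program reduce_example

Run with three processes, rank 0 would report 6 (1 + 2 + 3).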