Matrix Multiplication on Multiple Processors: MPI4PY

Afzal Badshah, PhD
3 min read · May 9, 2024

When dealing with large-scale matrix multiplication, distributing the workload among multiple processors can significantly speed up the computation. This is particularly beneficial when the matrices are large and the computation involves a substantial number of operations. Visit the detailed tutorial on MPI in Python here.

In this scenario, each processor handles a portion of the matrices, performing computations independently, and then the results are combined to obtain the final result. This parallelization technique leverages the capabilities of multiple processors to expedite the overall computation time.

Code:

from mpi4py import MPI
import numpy as np

# Function to perform matrix multiplication with explicit loops
def matrix_multiply(A, B):
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            for k in range(A.shape[1]):
                C[i][j] += A[i][k] * B[k][j]
    return C

# Initialize MPI
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Master process
if rank == 0:
    # Generate matrices A and B
    # (the number of rows of A must be divisible by the number of processes)
    A = np.random.rand(2, 2)
    B = np.random.rand(2, 2)

    # Split A into row chunks, one per process
    chunk_size = A.shape[0] // size
    A_chunks = [A[i:i+chunk_size] for i in range(0, A.shape[0], chunk_size)]

    # Send one chunk of A and the whole of B to each worker process
    for i in range(1, size):
        comm.send(A_chunks[i], dest=i, tag=1)
        comm.send(B, dest=i, tag=2)

    # Calculate the master's own part of the multiplication
    C_parts = [matrix_multiply(A_chunks[0], B)]

    # Collect results from worker processes, in rank order
    for i in range(1, size):
        C_parts.append(comm.recv(source=i, tag=3))

    # Stack the row blocks to form the full product
    C = np.vstack(C_parts)

    # Print the resulting matrix
    print("Resulting matrix C:")
    print(C)

# Worker processes
else:
    # Receive matrix chunks from master
    A_chunk = comm.recv(source=0, tag=1)
    B = comm.recv(source=0, tag=2)

    # Perform multiplication
    C_partial = matrix_multiply(A_chunk, B)

    # Send back the result to master
    comm.send(C_partial, dest=0, tag=3)
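To try the listing, save it as a script, for example matrix_mpi.py (an illustrative filename), and launch it with an MPI launcher. With the 2x2 matrices above, the row count must divide evenly among the processes, so two processes is the natural choice:

mpiexec -n 2 python matrix_mpi.py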

Explanation

Import MPI Module and Initialize MPI Environment

from mpi4py import MPI

This line imports the MPI module from the mpi4py package, enabling the use of MPI functionalities.

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

These lines initialize the MPI environment. MPI.COMM_WORLD is the default communicator, which contains all processes launched in the MPI job. comm.Get_rank() returns the rank (identifier) of the current process within the communicator, and comm.Get_size() returns the total number of processes.
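As a minimal standalone sketch of these three calls (separate from the matrix example; the filename hello_mpi.py is illustrative), each process simply reports who it is:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
print(f"Hello from process {rank} of {size}")

Launching it with mpiexec -n 4 python hello_mpi.py prints four lines, one per process, in no guaranteed order.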

Function to Perform Matrix Multiplication

def matrix_multiply(A, B):
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            for k in range(A.shape[1]):
                C[i][j] += A[i][k] * B[k][j]
    return C

The function matrix_multiply takes two matrices A and B as input and returns their product C. It initializes a zero matrix C with A.shape[0] rows and B.shape[1] columns, then fills in each element using three nested loops over the rows of A, the columns of B, and the shared inner dimension.
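The explicit triple loop is written for clarity; NumPy's built-in matrix product computes the same result far faster. A quick sanity check, assuming matrix_multiply from the listing above is in scope:

import numpy as np

A = np.random.rand(3, 2)
B = np.random.rand(2, 4)
# The loop-based result matches NumPy's optimized matrix product
assert np.allclose(matrix_multiply(A, B), A @ B)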

Master Process

if rank == 0:
    A = np.random.rand(2, 2)
    B = np.random.rand(2, 2)

In the master process (rank 0), random matrices A and B of size 2x2 are generated.

chunk_size = A.shape[0] // size
A_chunks = [A[i:i+chunk_size] for i in range(0, A.shape[0], chunk_size)]

Matrix A is split into row chunks based on the total number of processes (size). Each chunk holds chunk_size rows, and the list A_chunks contains one chunk per process.
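A small standalone illustration of the split, using a 4x4 matrix and a pretend size of 2 (values chosen only for this example):

import numpy as np

A = np.arange(16).reshape(4, 4)
size = 2                          # pretend two processes
chunk_size = A.shape[0] // size   # 2 rows per chunk
A_chunks = [A[i:i+chunk_size] for i in range(0, A.shape[0], chunk_size)]
print(A_chunks[0])   # rows 0 and 1
print(A_chunks[1])   # rows 2 and 3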

for i in range(1, size):
    comm.send(A_chunks[i], dest=i, tag=1)
    comm.send(B, dest=i, tag=2)

Chunks of matrix A are sent to the worker processes using comm.send(): each worker receives its own row chunk of A along with the entire matrix B, while the master keeps chunk 0 for itself.
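A side note: the lowercase send()/recv() methods pickle arbitrary Python objects, which is convenient but adds overhead. For large NumPy arrays, mpi4py also provides the uppercase Send()/Recv() methods, which transfer the raw buffer directly; a minimal sketch:

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
if comm.Get_rank() == 0:
    data = np.random.rand(1000)
    comm.Send(data, dest=1, tag=7)    # sends the raw buffer, no pickling
elif comm.Get_rank() == 1:
    buf = np.empty(1000)              # receiver must pre-allocate the buffer
    comm.Recv(buf, source=0, tag=7)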

C_parts = [matrix_multiply(A_chunks[0], B)]

The master process calculates its own partial result from the first chunk of matrix A and stores it as the first entry in the list of row blocks, C_parts.

for i in range(1, size):
    C_parts.append(comm.recv(source=i, tag=3))
C = np.vstack(C_parts)

The master process receives a partial result from each worker using comm.recv() and appends it to C_parts in rank order. Since each partial result is a block of rows of C, the blocks are combined with np.vstack() rather than summed, yielding the final matrix C.
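To see why the blocks are stacked rather than added, consider two single-row partial results (toy values):

import numpy as np

top = np.array([[1., 2.]])      # rows of C computed by rank 0
bottom = np.array([[3., 4.]])   # rows of C computed by rank 1
C = np.vstack([top, bottom])    # full 2x2 result, rows in rank order
print(C)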

print("Resulting matrix C:")
print(C_partial)

Finally, the resulting matrix C is printed.

Worker Processes

else:
    A_chunk = comm.recv(source=0, tag=1)
    B = comm.recv(source=0, tag=2)

In the worker processes (rank != 0), each process receives its chunk of matrix A and the entire matrix B from the master process using comm.recv().

C_partial = matrix_multiply(A_chunk, B)

Each worker process performs matrix multiplication using its received chunk of matrix A and matrix B.

comm.send(C_partial, dest=0, tag=3)

The resulting partial matrix C_partial is sent back to the master process using comm.send().
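As a closing aside, the same row decomposition can be expressed more compactly with mpi4py's collective operations scatter, bcast and gather. The following is a sketch of that alternative, not part of the listing above:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    A = np.random.rand(4, 4)   # rows must be divisible by size
    B = np.random.rand(4, 4)
    chunk = A.shape[0] // size
    A_chunks = [A[i:i+chunk] for i in range(0, A.shape[0], chunk)]
else:
    A_chunks = None
    B = None

A_chunk = comm.scatter(A_chunks, root=0)    # each rank gets one row block
B = comm.bcast(B, root=0)                   # every rank gets all of B
C_parts = comm.gather(A_chunk @ B, root=0)  # master collects the row blocks

if rank == 0:
    print(np.vstack(C_parts))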

Material

Download the example programs (code) covering MPI4Py.
