
Distributed sparse matrix SPMV (the SPMV operation on a distributed sparse matrix)

2015-04-30
For a distributed vector there is no communication between the partial vectors held by the different procs; each part of the vector is independent. A distributed sparse matrix (dsm), however, must communicate with neighboring procs across the boundary of the sub-matrix that each proc holds. The communication information for the dsm is given in mpi_matrix; a rough sketch of that information is shown below, followed by the two steps of the SPMV operation itself.
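As a point of reference, the communication information amounts to, per proc, a list of neighbor ranks plus the send/receive lengths and the local indices of the values to be packed. A minimal sketch of such a structure (the field names mirror the identifiers used in the snippets below; the struct itself is an illustrative assumption, not the actual mpi_matrix layout):

// Hypothetical grouping of the halo-exchange metadata used by the code below.
struct HaloInfo {
  int           num_neighbors;   // number of neighbor procs sharing a boundary
  int *         neighbors;       // MPI ranks of those neighbors
  local_int_t * receiveLength;   // how many ghost values to receive from each neighbor
  local_int_t * sendLength;      // how many boundary values to send to each neighbor
  local_int_t   totalToBeSent;   // sum of sendLength[]
  local_int_t * elementsToSend;  // local indices of the values to copy into sendBuffer
  double *      sendBuffer;      // packed boundary values, sent contiguously per neighbor
};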

1 Communication to obtain the +ghost (halo) values: send/receive of the boundary elements of x that the off-process columns of the local matrix reference

// post non-blocking receives for the external (ghost) values first
int MPI_MY_TAG = 99;                                   // any consistent message tag
MPI_Request * request = new MPI_Request[num_neighbors];
for (int i = 0; i < num_neighbors; i++) {
  local_int_t n_recv = receiveLength[i];
  MPI_Irecv(x_external, n_recv, MPI_DOUBLE, neighbors[i], MPI_MY_TAG, MPI_COMM_WORLD, request + i);
  x_external += n_recv;
}

/* fill up send buffer with the boundary values the neighbors need */
for (local_int_t i = 0; i < totalToBeSent; i++) sendBuffer[i] = xv[elementsToSend[i]];

/* send to each neighbor */
for (int i = 0; i < num_neighbors; i++) {
  local_int_t n_send = sendLength[i];
  MPI_Send(sendBuffer, n_send, MPI_DOUBLE, neighbors[i], MPI_MY_TAG, MPI_COMM_WORLD);
  sendBuffer += n_send;
}

// complete the receives before the ghost values are used
MPI_Waitall(num_neighbors, request, MPI_STATUSES_IGNORE);
delete [] request;
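The receive loop can write straight into x because (an assumption about the layout, consistent with how x_external and the column indices are used here) the external/ghost values are stored contiguously right after the locally owned entries of x:

// presumed setup before the exchange: ghost values live right after the local entries of x
double * xv = x.values;
double * x_external = xv + A.localNumberOfRows;

After MPI_Waitall returns, xv[0 .. localNumberOfRows-1] holds the local values and the entries after that hold the ghost values, which the local SPMV below reaches through the column indices in mtxIndL.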

2 Perform the SPMV locally on each proc

const double * const xv = x.values;            // input vector: local values followed by ghost values
double * const yv = y.values;                  // output vector: one value per local row
const local_int_t nrow = A.localNumberOfRows;

for (local_int_t i = 0; i < nrow; i++) {
  double sum = 0.0;
  const double * const cur_vals = A.matrixValues[i];
  // column indices are built in SetupHalo; some of them point beyond the local
  // rows, into the external (ghost) values received from neighbor procs
  const local_int_t * const cur_inds = A.mtxIndL[i];
  const int cur_nnz = A.nonzerosInRow[i];

  for (int j = 0; j < cur_nnz; j++)
    sum += cur_vals[j] * xv[cur_inds[j]];
  yv[i] = sum;
}
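To see both steps end to end, here is a self-contained toy sketch. Everything in it (the tridiagonal [-1, 2, -1] stencil, the block-row distribution, all names) is an illustrative assumption and not the code above: each rank exchanges a single boundary value of x with each neighbor, then performs the local SPMV using the received ghost values.

// toy_spmv.cpp -- hypothetical 1-D block-row distributed tridiagonal SPMV with a halo exchange.
// Build/run (example): mpicxx toy_spmv.cpp -o toy_spmv && mpirun -np 4 ./toy_spmv
#include <mpi.h>
#include <vector>
#include <cstdio>

int main(int argc, char ** argv) {
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  const int n = 4;                                   // local rows per proc
  // x layout: [ n local values | left ghost | right ghost ]
  std::vector<double> x(n + 2), y(n);
  for (int i = 0; i < n; i++) x[i] = rank * n + i;   // value = global row index
  double * x_left  = &x[n];                          // ghost from rank-1 (its last local value)
  double * x_right = &x[n + 1];                      // ghost from rank+1 (its first local value)

  // 1) halo exchange: post receives, send boundary values, wait
  const int TAG = 99;
  MPI_Request req[2];
  int nreq = 0;
  if (rank > 0)
    MPI_Irecv(x_left, 1, MPI_DOUBLE, rank - 1, TAG, MPI_COMM_WORLD, &req[nreq++]);
  if (rank < size - 1)
    MPI_Irecv(x_right, 1, MPI_DOUBLE, rank + 1, TAG, MPI_COMM_WORLD, &req[nreq++]);
  if (rank > 0)
    MPI_Send(&x[0], 1, MPI_DOUBLE, rank - 1, TAG, MPI_COMM_WORLD);      // my first local value
  if (rank < size - 1)
    MPI_Send(&x[n - 1], 1, MPI_DOUBLE, rank + 1, TAG, MPI_COMM_WORLD);  // my last local value
  MPI_Waitall(nreq, req, MPI_STATUSES_IGNORE);

  // 2) local SPMV: y[i] = -x[i-1] + 2*x[i] - x[i+1], using the ghosts at the block ends
  for (int i = 0; i < n; i++) {
    double left  = (i > 0)     ? x[i - 1] : (rank > 0        ? *x_left  : 0.0);
    double right = (i < n - 1) ? x[i + 1] : (rank < size - 1 ? *x_right : 0.0);
    y[i] = -left + 2.0 * x[i] - right;
  }

  for (int i = 0; i < n; i++)
    std::printf("rank %d: y[%d] = %g\n", rank, i, y[i]);
  MPI_Finalize();
  return 0;
}

The structure is the same as in the two snippets above, just with the metadata (neighbors, lengths, send indices) reduced to at most one value per side.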