Distributed sparse matrix SPMV
2015-04-30 05:50
A distributed vector needs no communication between the pieces held on different procs: each piece is independent. A distributed sparse matrix (dsm), in contrast, must communicate with neighboring procs along the boundaries of each proc's submatrix. The dsm's communication parameters are set up in mpi_matrix. Concretely, the SPMV operation has two steps:
1. Communication, to obtain the local part plus the ghost values: send/receive of the boundary entries
/* post receives from each neighbor first, into the external (ghost) part of x */
MPI_Request* request = new MPI_Request[num_neighbors];
for(int i=0; i<num_neighbors; i++)
{
  local_int_t n_recv = receiveLength[i];
  MPI_Irecv(x_external, n_recv, MPI_DOUBLE, neighbors[i], MPI_MY_TAG, MPI_COMM_WORLD, request+i); // MPI_MY_TAG: any tag value agreed on by both sides
  x_external += n_recv;
}

/* fill up send buffer */
for(local_int_t i=0; i<totalToBeSent; i++) sendBuffer[i] = xv[elementsToSend[i]];

/* send to each neighbor */
for(int i=0; i<num_neighbors; i++)
{
  local_int_t n_send = sendLength[i];
  MPI_Send(sendBuffer, n_send, MPI_DOUBLE, neighbors[i], MPI_MY_TAG, MPI_COMM_WORLD);
  sendBuffer += n_send;
}

/* wait for all receives to complete before using the ghost values */
MPI_Status status;
for(int i=0; i<num_neighbors; i++)
  MPI_Wait(request+i, &status);
delete [] request;
2. Perform the SPMV on the local proc
const double* const xv = x.values;
double* const yv = y.values;
const local_int_t nrow = A.localNumberOfRows;

for(local_int_t i=0; i<nrow; i++)
{
  double sum = 0.0;
  const double* const cur_vals = A.matrixValues[i];
  const local_int_t* const cur_inds = A.mtxIndL[i]; // rewritten in SetupHalo: some indices point outside the current proc's rows, into the received ghost region
  const int cur_nnz = A.nonzerosInRow[i];

  for(int j=0; j<cur_nnz; j++)
    sum += cur_vals[j] * xv[cur_inds[j]];
  yv[i] = sum;
}