
5. Matrix calculations

Introduction

TIPL also includes a generic matrix operation (GMO) library, a C++ template library for generic matrix operations. The library follows a generic programming paradigm similar to that of the STL algorithms, which accept any iterator as the input type. Unlike most other matrix libraries, GMO makes very few assumptions about the input data structure. The major benefit is that it works seamlessly with all kinds of random access iterators, whether the data lives in an array allocated on the stack or the heap, in a std::vector, or in other contiguous storage, and the output can even go to non-random-access iterators such as std::ostream_iterator. If your matrix data is stored in a custom class or memory space, you may find GMO useful. This avoids copying data to and from different storage types, potentially achieving higher performance.
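For instance, the sketch below multiplies a matrix held in a user-defined container by a stack array and writes the result into a std::vector, using the tipl::matrix::product call and tipl::shape<2> arguments demonstrated later on this page (my_storage and multiply_without_copying are illustrative names, not part of TIPL):

#include <array>
#include <vector>
// plus the TIPL headers that provide tipl::matrix::product and tipl::shape

struct my_storage                  // any user-defined class exposing row-major storage
{
    std::array<double,9> values;
};

void multiply_without_copying()
{
    my_storage A{{1,2,3,
                  4,5,6,
                  7,8,9}};
    double B[9] = {9,8,7,
                   6,5,4,
                   3,2,1};
    std::vector<double> C(9);      // the result goes straight into a std::vector

    // C = A*B, operating directly on the existing storage; no matrix class needed
    tipl::matrix::product(A.values.begin(),B,C.begin(),
                          tipl::shape<2>(3,3),tipl::shape<2>(3,3));
}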

The computational efficiency (using matrix multiplication as a test) is slightly better than that of uBLAS, as shown in the following figure.
The test was conducted using the following code:
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <boost/numeric/ublas/matrix.hpp>
// plus the TIPL headers that provide tipl::matrix::product

using namespace boost::numeric;
ublas::matrix<double> mA_,mB_,mC_;
for (size_t m_size = 5;m_size <= 500;m_size += 5)
{
    size_t sum_gmo = 0;
    size_t sum_ublas = 0;
    // keep the total amount of work roughly constant across matrix sizes
    size_t test_iteration = 250000000/m_size/m_size/m_size;
    mA_.resize(m_size,m_size);
    mB_.resize(m_size,m_size);
    mC_.resize(m_size,m_size);

    {
        // fill the input matrices with random values
        for (size_t j = 0;j < mA_.data().size();++j)
            mA_.data()[j] = std::rand();

        for (size_t j = 0;j < mB_.data().size();++j)
            mB_.data()[j] = std::rand();

        std::clock_t now_time;
        // GMO computation: operate directly on the uBLAS storage
        now_time = std::clock();
        for (size_t i = 0;i < test_iteration;++i)
            tipl::matrix::product(mA_.data().begin(),mB_.data().begin(),mC_.data().begin(),
                                   tipl::dyndim(m_size,m_size),tipl::dyndim(m_size,m_size));

        sum_gmo += std::clock()-now_time;

        // uBLAS computation
        now_time = std::clock();
        for (size_t i = 0;i < test_iteration;++i)
            mC_ = ublas::prod(mA_,mB_);

        sum_ublas += std::clock()-now_time;
    }

    size_t total_lop = m_size*m_size;
    std::cout << "GMO: " << ((double)total_lop)/((double)sum_gmo)
              << " uBLAS: " << ((double)total_lop)/((double)sum_ublas) << std::endl;
}

Why reinvent the wheel? Because the current wheel is not good enough...

I have tested several C++ libraries, including uBLAS, TNT, and MTL. They work great and have been widely used in all kinds of fields, but they fail to fulfill an important generic requirement: accepting different types of matrix storage. I would like to use iterators or array pointers directly, without a lot of memory allocation or copying, so that users do not have to copy their data into a dedicated matrix class to perform a matrix operation.

#include <vector>
#include <fstream>
#include <iterator>

void fun(void)
{
    // 3-by-3 matrix
    double A[9] = {1,2,3,
                   4,5,6,
                   7,8,9};

    // 3-by-4 matrix
    float B[12] = {1,2,3,4,
                   5,6,7,8,
                   9,10,11,12};

    std::vector<double> result(12); // used as the output

    std::ofstream out("matrix_multiply_result.txt"); // used as the output

    // perform result = A*B
    tipl::matrix::product(A,B,result.begin(), tipl::shape<2>(3,3), tipl::shape<2>(3,4));

    // perform out = A*B
    tipl::matrix::product(A,B,std::ostream_iterator<double>(out, " "), tipl::shape<2>(3,3), tipl::shape<2>(3,4));
}

Design Paradigm

Paradigm 1.
The algorithms are iterator-based and accept any kind of random access iterator. The only assumption is that the elements of a matrix are stored in row-major order, for example:

// 3-by-3 matrix
float B[9] = {1,2,3,
              4,5,6,
              7,8,9};

For some matrix operations, the supported iterator category can be relaxed to bidirectional iterators, providing even more generic operations.

Paradigm 2.
The dimensions of the matrix are passed as parameters, and the algorithm is optimized when the dimensions are known at compile time.

For example, if the user supplies a matrix whose size is known at compile time, the algorithm provides a specialized function that achieves better efficiency.

// since the matrix size is known at compile time, the operation can be optimized
product(input_iterator A,input_iterator B,output_iterator C, dim<2,2>(), dim<2,2>())
{
    ...operations specialized for a 2-by-2 matrix
}

// matrix size is dynamic, so a more generic matrix multiplication is needed here
product(input_iterator A,input_iterator B,output_iterator C, ...)
{
    ...more generic operation for the dynamic case
}
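The sketch below illustrates this dispatch idea in self-contained form, assuming nothing about TIPL itself: fixed_dim, dyn_dim, and my_product are illustrative names only. The type of the dimension argument selects, at compile time, either a fully unrolled 2-by-2 kernel or a generic triple loop.

#include <cstddef>

template<int R,int C> struct fixed_dim {};       // size known at compile time
struct dyn_dim { std::size_t rows, cols; };      // size known only at run time

// specialized 2-by-2 multiplication: fully unrolled, no loop overhead
template<typename In1,typename In2,typename Out>
void my_product(In1 A,In2 B,Out C,fixed_dim<2,2>,fixed_dim<2,2>)
{
    C[0] = A[0]*B[0] + A[1]*B[2];  C[1] = A[0]*B[1] + A[1]*B[3];
    C[2] = A[2]*B[0] + A[3]*B[2];  C[3] = A[2]*B[1] + A[3]*B[3];
}

// generic multiplication when the sizes are only known at run time
template<typename In1,typename In2,typename Out>
void my_product(In1 A,In2 B,Out C,dyn_dim da,dyn_dim db)
{
    for (std::size_t i = 0;i < da.rows;++i)
        for (std::size_t j = 0;j < db.cols;++j)
        {
            double sum = 0;
            for (std::size_t k = 0;k < da.cols;++k)
                sum += A[i*da.cols+k]*B[k*db.cols+j];
            C[i*db.cols+j] = sum;
        }
}

A call such as my_product(A,B,C,fixed_dim<2,2>(),fixed_dim<2,2>()) therefore resolves to the unrolled overload with no run-time branching.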
     
Paradigm 3. All functions are template-based; there is no need to pre-compile any cpp file.
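In practice this means it is enough to include the library headers in any translation unit; the include path below is an assumption and may differ depending on how TIPL was installed:

#include "TIPL/tipl.hpp"   // assumed header path; adjust to your setup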


Example

Use iterator as the output

#include <fstream>
#include <iterator>

double A[9] = {1,2,3,
               4,5,6,
               7,8,9};
float B[9] = {1,2,3,
              4,5,6,
              7,8,9};

std::ofstream out("matrix_multiply_result.txt");

// write the entries of A*B directly to the output file
tipl::matrix::product(A,B,std::ostream_iterator<double>(out, " "), tipl::shape<2>(3,3), tipl::shape<2>(3,3));
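For these inputs, the resulting file should contain the nine entries of A*B in row-major order: 30 36 42 66 81 96 102 126 150.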

LU decomposition

double A[]={8, 1, 3,
            7, 0, 2,
            12, 3, 2};
size_t pivot[3]; // pivot indices recording the row interchanges
tipl::matrix::lu_decomposition(A,pivot,tipl::shape<2>(3,3));
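After the call, A is typically overwritten with the L and U factors, and pivot records the row interchanges performed during partial pivoting; the factorization can then be reused, for example, to solve linear systems or compute determinants.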


Matrix Inverse


double A[]={12, 8,
            9, 4};
tipl::matrix::inverse(A,tipl::shape<2>(2,2));
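Since the call takes no separate output, the inverse replaces A. For this input, det(A) = 12*4 - 8*9 = -24, so the expected in-place result is {-1/6, 1/3, 3/8, -1/2} in row-major order.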

Eigendecomposition

The input and output matrices are both row-major.

double A[]={8, 1, 3,
            7, 0, 2,
            12, 3, 2};
double V[9],d[3];

// V receives the eigenvectors (row-major) and d the eigenvalues
tipl::matrix::eigen_decomposition_sym(A,V,d,tipl::shape<2>(3,3));

tipl::matrix::eigenvalue(A,V,d,tipl::shape<2>(3,3));


SVD


* A must be an n-by-m matrix with n <= m

1. Singular values only

double A[]={8, 1, 3, 4,
            7, 0, 2, 5,
            12, 3, 2, 6};
double s[3];
tipl::matrix::svd(A,s,tipl::shape<2>(3,4));
        
2. Singular values and singular vectors

double A[]={8, 1, 3, 4,
            7, 0, 2, 5,
            12, 3, 2, 6};
double U[9];
double s[3];
tipl::matrix::svd(A,U,s,tipl::shape<2>(3,4));
// A stores the 3 right singular vectors and U stores the 3 left singular vectors

