
feat: extend intrinsic matmul #951

Open · wants to merge 11 commits into base: master
Conversation

wassup05
Contributor

This PR attempts to address #931

An interface stdlib_matmul is created and extended to handle products of 3 to 5 matrices (works for integer, real, and complex types).

API

A = stdlib_matmul(B, C, D, E, F)
A = stdlib_matmul(B, C, D, E)
A = stdlib_matmul(B, C, D)

The algorithm for optimal parenthesization is as given in "Introduction to Algorithms" by Cormen et al. (4th ed., Chapter 14, Section 14.2).
NumPy's linalg.multi_dot also uses the same algorithm.
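For reference, the CLRS recurrence can be sketched in a few lines; this is an illustrative Python version (function and variable names are mine, not the PR's Fortran code):

```python
# Matrix-chain-order DP from CLRS (4th ed., Section 14.2), as also used by
# numpy.linalg.multi_dot. p holds the chain dimensions: matrix i is
# p[i] x p[i+1], so a chain of n matrices needs n+1 entries in p.
def matmul_chain_order(p):
    n = len(p) - 1                      # number of matrices
    m = [[0] * n for _ in range(n)]     # m[i][j]: min scalar mults for chain i..j
    s = [[0] * n for _ in range(n)]     # s[i][j]: optimal split point k
    for length in range(2, n + 1):      # chain length
        for i in range(n - length + 1):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):
                cost = m[i][k] + m[k + 1][j] + p[i] * p[k + 1] * p[j + 1]
                if cost < m[i][j]:
                    m[i][j] = cost
                    s[i][j] = k
    return m, s
```

For example, with dimensions [10, 100, 5, 50] (a 10x100, a 100x5, and a 5x50 matrix), the optimal split is (AB)C at a cost of 7500 scalar multiplications, versus 75000 for A(BC).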

Although @jalvesz suggested using gemm for multiplying the component matrices, this uses the intrinsic matmul for now; I can make that change if deemed appropriate once the overall implementation has been given the green light.

I have added a very basic example to play around with, and I will add the detailed docs, specs, and tests once everybody approves of the overall implementation.

I am not really happy with some parts of the code, such as computing the sizes of all the matrices individually. If anyone has suggestions about that, or a cleaner way of implementing some of the other pieces (perhaps some fypp magic), please do let me know.

Notes

  • I think another interesting enhancement would be: if the first array is 1-D, treat it as a row vector, and if the last array is 1-D, treat it as a column vector, just as NumPy does.
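For illustration, this is roughly what numpy.linalg.multi_dot does with 1-D first and last arguments (a small sketch of the NumPy behavior being referenced, not part of the PR):

```python
import numpy as np

# numpy.linalg.multi_dot treats a leading 1-D argument as a row vector and a
# trailing 1-D argument as a column vector, then removes the added axis again.
v = np.ones(3)                          # 1-D, length 3
A = np.arange(9.0).reshape(3, 3)
w = np.ones(3)

out = np.linalg.multi_dot([v, A, w])    # row vector @ A @ column vector -> scalar
ref = v @ A @ w
assert np.isclose(out, ref)
```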

Comment on lines 120 to 126
f = matmul(a, stdlib_matmul(b, c, d, e))
case (2)
f = matmul(matmul(a, b), stdlib_matmul(c, d, e))
case (3)
f = matmul(stdlib_matmul(a, b, c), matmul(d, e))
case (4)
f = matmul(stdlib_matmul(a, b, c, d), e)
Contributor Author

I just noticed this part is quite inefficient, because it calls stdlib_matmul with 3 or 4 arguments, which in turn recomputes the whole s(i,j) table, a costly operation (O(n^3))... Something cleaner is required here instead of the alternating three-level nesting of select statements.
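One way around the recomputation: build the s table once at the top level and evaluate the chain recursively from it, so no nested call ever reruns the DP. A hedged Python sketch (the helper names are mine, not the PR's):

```python
import numpy as np

def chain_order(p):
    # standard CLRS DP; returns only the split table s
    n = len(p) - 1
    m = [[0] * n for _ in range(n)]
    s = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):
                cost = m[i][k] + m[k + 1][j] + p[i] * p[k + 1] * p[j + 1]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return s

def multiply_chain(mats, s, i, j):
    # evaluate mats[i..j] using the precomputed split points in s;
    # s is built once and never rebuilt in the recursive calls
    if i == j:
        return mats[i]
    k = s[i][j]
    return multiply_chain(mats, s, i, k) @ multiply_chain(mats, s, k + 1, j)

def stdlib_matmul_sketch(*mats):
    p = [mats[0].shape[0]] + [m.shape[1] for m in mats]
    s = chain_order(p)
    return multiply_chain(mats, s, 0, len(mats) - 1)
```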

Member

Thank you for this PR @wassup05. Wherever possible, I believe it would be useful to call the gemm backend from BLAS/LAPACK. This allows better performance when one can link against an external optimized BLAS/LAPACK library.

Contributor Author

Thank you @perazz for the review. I tried doing that, and it seems gemm is not defined for integer-type arrays... Is that so, or am I missing something? I also noticed that integrating with gemm introduces a lot of verbosity; is that expected?

@perazz
Member

perazz commented Mar 17, 2025

Yes, I am not sure such an API would see wide usage for integer kinds, so as far as I'm concerned it could deal with real and complex matrices only. Regarding the gemm backend: yes, I believe it is important, because it is a staple of the whole linear algebra library that it provides a clean, high-level API which the user can build against external high-performance BLAS/LAPACK libraries. Relying on gfortran to map matmul onto an external library is not supported by the stdlib build process.

@jalvesz
Contributor

jalvesz commented Mar 14, 2025

Here a few ideas for consideration:

  1. Regarding the use of gemm or plain matmul: both gfortran and Intel (ifort/ifx) can replace the intrinsic matmul with gemm via compiler flags (worth reading https://stackoverflow.com/questions/31494176/will-fortrans-matmul-make-use-of-mkl-if-i-include-the-library and https://community.intel.com/t5/Intel-Fortran-Compiler/qopt-matmul-with-mkl-sequential/td-p/1003110), so it might not be that harmful after all to implement using matmul.
  2. Regarding the signature of the key functions, you could consider:
pure function matmul_chain_order(p) result(s)
    !> p holds the chain dimensions: matrix i is p(i) x p(i+1),
    !> so a chain of n matrices needs n+1 entries in p
    integer, intent(in) :: p(:)
    integer :: s(1:size(p)-2, 2:size(p)-1)
    integer :: m(1:size(p)-1, 1:size(p)-1)
    integer :: l, i, j, k, q, n
    n = size(p) - 1 ! number of matrices in the chain
    ...
end function matmul_chain_order

pure module function stdlib_matmul (m1, m2, m3, m4, m5) result(e)
    real, intent(in) :: m1(:,:), m2(:,:)
    real, intent(in), optional :: m3(:,:), m4(:,:), m5(:,:) !> from the 3rd matrix onwards they can all be optional
    real, allocatable :: e(:,:)
    integer :: p(6), i, num_present
    integer, allocatable :: s(:,:)

    ! p(i) is the row count of the i-th matrix; the entry after the last
    ! present matrix holds its column count, closing the dimension chain
    p(1) = size(m1, 1)
    p(2) = size(m2, 1)
    p(3) = size(m2, 2)
    num_present = 2
    if (present(m3)) then
        p(3) = size(m3, 1)
        p(4) = size(m3, 2)
        num_present = num_present + 1
    end if
    if (present(m4)) then
        p(4) = size(m4, 1)
        p(5) = size(m4, 2)
        num_present = num_present + 1
    end if
    if (present(m5)) then
        p(5) = size(m5, 1)
        p(6) = size(m5, 2)
        num_present = num_present + 1
    end if

    s = matmul_chain_order(p(1:num_present+1))

    ...
end function stdlib_matmul

For the procedure computing the orders you only need the p array; the number of matrices can be obtained from it. One might argue that this induces extra time; I think it will most likely be imperceptible.

For the main procedure, you could consider using optional arguments from the 3rd matrix onwards and then managing the rest within the same procedure, without recursion.

Regarding the example, I think it would be much more interesting to set up a non-trivial one, where the actual optimal ordering differs from the naive sequence.

Just some food-for-thought.

@wassup05
Contributor Author

wassup05 commented Mar 14, 2025

Thank you very much @jalvesz for your detailed remarks, they are quite helpful.
I have refactored the algorithm to only accept p, as n can very well be calculated from that.

Regarding the signature I am a bit confused: if we don't use recursion, we would have to handle 14 cases (the 4th Catalan number) when the number of matrices is 5, which would become messy quickly. For 3 matrices it is much more efficient to compare the two ordering costs directly and dispatch accordingly (NumPy claims this is roughly 15× more efficient than computing a full s matrix for it). The 4-matrix case is also somewhat fine, since there are only 5 orderings, but I am not sure how to handle the 5-matrix case without enumerating all the cases explicitly... (Or maybe we could just limit this to 4 matrices?) Maybe I am missing something here.
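For the 3-matrix case the direct cost comparison looks like this (an illustrative helper of my own, not the PR's code):

```python
def best_order_3(p0, p1, p2, p3):
    # A is p0 x p1, B is p1 x p2, C is p2 x p3; compare the scalar
    # multiplication counts of the only two possible parenthesizations
    cost_ab_c = p0 * p1 * p2 + p0 * p2 * p3   # (A B) C
    cost_a_bc = p1 * p2 * p3 + p0 * p1 * p3   # A (B C)
    return "(AB)C" if cost_ab_c <= cost_a_bc else "A(BC)"
```

Two multiplications and one comparison, with no table to build, which is why the dedicated 3-matrix path is so much cheaper than the general DP.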

And yes I will add some non-trivial examples for sure.

Looking forward to hearing your thoughts.

@jalvesz
Contributor

jalvesz commented Mar 14, 2025

Indeed! Another idea:

Make the 3- and 4-matrix versions internal procedures, passing as an extra argument the slice of the ordering table corresponding to those matrices. In the one public procedure (2 mandatory arguments + 3 optionals), you compute the ordering just once and call the internal versions for 3 or 4 matrices depending on the case.

@wassup05
Contributor Author

Thank you @jalvesz, I have implemented them accordingly and added some better examples, although I am not sure how to test the correctness of the multiplication with large matrices, or how to compare the time taken by the native matmul vs. this one...

@jalvesz
Contributor

jalvesz commented Mar 17, 2025

In terms of correctness, I would say the simplest approach is to write down a couple of analytical cases for which the exact solution is known, plus a couple of cases with random matrices for which the optimal ordering differs from the trivial one, and compare stdlib_matmul against trivial sequential calls to the intrinsic matmul. For integer arrays the error should be zero; for real/complex arrays the error tolerance should be somewhere around 100*epsilon(0._wp). The test could use the Frobenius norm, error = mnorm(A - A_ref), where A results from the current proposal and A_ref from the intrinsic matmul with the naive sequence.
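A sketch of such a test in Python, using numpy.linalg.multi_dot as a stand-in for stdlib_matmul, random matrices whose optimal ordering differs from the naive one, and a relative Frobenius-norm tolerance (deliberately looser here than the suggested ~100*epsilon, to keep the sketch robust):

```python
import numpy as np

# Compare the optimally ordered product against the naive left-to-right
# sequence via the Frobenius norm of the difference.
rng = np.random.default_rng(42)
mats = [rng.standard_normal((40, 200)),
        rng.standard_normal((200, 5)),    # bottleneck dimension: the optimal
        rng.standard_normal((5, 200)),    # ordering differs from the naive one
        rng.standard_normal((200, 40))]

a = np.linalg.multi_dot(mats)
a_ref = mats[0] @ mats[1] @ mats[2] @ mats[3]

err = np.linalg.norm(a - a_ref)           # Frobenius norm by default
assert err <= 1e-12 * np.linalg.norm(a_ref)
```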

In terms of performance, it might not be necessary to add this to the PR, but a separate small program showcasing two scenarios might be useful:

  1. Many repeated calls to stdlib_matmul vs the naive matmul for small matrices
  2. A few calls to stdlib_matmul vs the naive matmul for medium/large matrices
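Scenario 2 could look something like this (numpy.linalg.multi_dot standing in for stdlib_matmul; the sizes are illustrative assumptions, chosen so the optimal ordering beats the naive one):

```python
import timeit
import numpy as np

# A chain with a thin inner matrix: evaluated left to right the cost is
# dominated by a 300x300 @ 300x300 product, while the optimal ordering
# keeps every intermediate small.
rng = np.random.default_rng(0)
A = rng.standard_normal((300, 300))
B = rng.standard_normal((300, 10))
C = rng.standard_normal((10, 300))
D = rng.standard_normal((300, 300))

t_naive = timeit.timeit(lambda: A @ B @ C @ D, number=20)
t_opt = timeit.timeit(lambda: np.linalg.multi_dot([A, B, C, D]), number=20)
print(f"naive: {t_naive:.4f}s  optimal ordering: {t_opt:.4f}s")
```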

@loiseaujc maybe you have some use cases worth testing?

@loiseaujc

Oh boy! I had not seen this PR. Super happy with it. I'll take some time this afternoon to look at the code and give you some feedback. As for the test cases, anything involving multiplications with an already-factored low-rank matrix would do; as far as I'm concerned, those are the typical cases I encounter where such an interface would prove syntactically very useful. I'll see, once I get to the lab, whether I have a sufficiently simple yet somewhat realistic set of data we could use for testing the performance.

@wassup05
Contributor Author

I have replaced all the matmul calls with calls to gemm... The code has become a bit complicated and verbose... If I have missed anything, or there is a shorter way of doing the same thing, please let me know.
Also, the interface now handles only real and complex values.

@jalvesz @perazz
