Performance of Linear Algebra operations of AbstractTriangular Sparse Matrices #28451
These cases would also be easy to detect if we could make scalar indexing fail during tests, as mentioned in JuliaLang/LinearAlgebra.jl#545.
I agree, if the objective is to avoid inefficient calls. Nevertheless, I would rather not be forced to use ugly workarounds for those cases, but just use, for example, … Please wait a day or so for my next PR to resolve that.
The figures have improved drastically with the change in the last commit:

[benchmark results not captured in this copy]

The most recent version improved the runtimes once more:

[benchmark results not captured in this copy]
Commit list from the fixing PR:

* added sparse multiplication and division for triangular matrices. Fix #28451
* merge with master
* merge with master 2
* fixed symtridiagonal + bidiagonal
* improved find diagonal part
* refactored to purge name space of SparseArrays
* additional test cases and bug fix
* specializing some structured matrix operations
* added constructors for Triangular(::Diagonal). Removed redundant code from binops of special.jl so that broadcasting takes over. Cleaned up some of the tests for special.jl
* fix whitespace
* actually fixed whitespace
* fixed a typo in Diagonal*Bi/Tridiag. Changed the multiplication methods to more explicit constructors so that matrices with BigFloat don't error
* fixed bidiag+/-diag speed regression
* fixed +/- regressions for the other structured matrix types (bidiag, tridiag, symtridiag, diag)
* Revert "merged with master". This reverts commit 3a58908, reversing changes made to 0facd1d.
* Removing the speedups for sparse matrix multiplication and division. These should go in another PR so this one can be merged more quickly. Revert "added sparse multiplication and division for triangular matrices. Fix #28451". This reverts commit 11c1d1d.
* Revert "additional test cases and bug fix". This reverts commit 21592db.
* reverting sparse changes
* removing extra whitespace and comments
* fixing BiTriSym*BiTriSym sparse eltype
* fixing the cases where we have two structured matrices and the resulting diagonals are of different types. This still fails when the representation is a range and we get a step size of 0
* Fixes the issue where we try to add structured matrices and one has an eltype <: AbstractArray. See PR 27289
* remove adjoint and transpose methods that I never changed
* fixing tridiagonal constructor to save time/memory
* fixing bidiag * diag return type
* adding multiplication to binops tests
I was surprised by the benchmark results for left multiplication/division with some wrapped sparse matrix types.
In the example, all times could be as low as 1.7 ms, because the algorithms for multiplication and division are rather similar for triangular matrices.
In fact, I observed 1.6 s for some multiplications (a factor of 1000) and 500 ms for other multiplications and divisions (a factor of 300).
Only 6 of the 24 cases had the expected 1-2 ms (partially as a consequence of #28242).
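The slow paths can be reproduced with a sketch like the following; the size and density here are my assumptions, not the exact setup behind the reported times:

```julia
# Sketch reproducing the kind of benchmark discussed above; n and the
# density are assumptions, not the exact values behind the reported times.
using LinearAlgebra, SparseArrays

n = 1000
S = sprand(n, n, 0.01) + I        # sparse parent with a nonzero diagonal
U = UpperTriangular(S)            # wrapped sparse triangular matrix
b = rand(n)

# Both operations hit generic AbstractTriangular fallbacks that do not
# exploit the sparse storage of the parent matrix:
y = U * b
x = U \ b
```

Timing these (e.g. with `@time`) for the lower/upper, unit, and transposed wrapper variants gives the kind of case matrix referred to above.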
I want to investigate the multiplication and division methods in order to avoid the fallback methods for `LinearAlgebra.AbstractTriangular` in the sparse case. It is possible that new multiplication algorithms, analogous to the triangular solvers, will have to be coded.
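As a sketch of what such an algorithm could look like (hypothetical function name, not the code from any eventual PR), a column-oriented multiply that touches only the stored entries of the parent sparse matrix, analogous in structure to the sparse triangular solvers:

```julia
# Minimal sketch of a sparse-aware triangular multiply. The name
# sparse_upper_mul is hypothetical, for illustration only.
using LinearAlgebra, SparseArrays

function sparse_upper_mul(U::UpperTriangular{<:Any,<:SparseMatrixCSC},
                          b::AbstractVector)
    A = parent(U)
    n = size(A, 2)
    y = zeros(promote_type(eltype(A), eltype(b)), n)
    rows = rowvals(A)
    vals = nonzeros(A)
    for j in 1:n
        bj = b[j]
        iszero(bj) && continue
        # walk only the stored entries of column j
        for k in nzrange(A, j)
            i = rows[k]
            i <= j || continue    # use only the upper-triangular part
            y[i] += vals[k] * bj
        end
    end
    return y
end
```

This runs in time proportional to the number of stored entries, rather than the O(n^2) scalar-indexing loops of the generic fallback.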
The types `Unit...Triangular` also deserve special handling for `*` and `\`.
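For the unit-diagonal wrappers, the stored diagonal of the parent matrix must be ignored and replaced by an implicit one; a minimal sketch of the `*` case (again with a hypothetical name):

```julia
# Sketch of multiplication for a unit-diagonal triangular wrapper over a
# sparse parent. sparse_unit_upper_mul is a hypothetical name.
using LinearAlgebra, SparseArrays

function sparse_unit_upper_mul(U::UnitUpperTriangular{<:Any,<:SparseMatrixCSC},
                               b::AbstractVector)
    A = parent(U)
    n = size(A, 2)
    T = promote_type(eltype(A), eltype(b))
    y = T.(b)                     # implicit unit diagonal contributes b itself
    rows = rowvals(A)
    vals = nonzeros(A)
    for j in 1:n
        bj = b[j]
        iszero(bj) && continue
        for k in nzrange(A, j)
            i = rows[k]
            i < j || continue     # strictly upper part; skip stored diagonal
            y[i] += vals[k] * bj
        end
    end
    return y
end
```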