Decoding general linear codes with Groebner bases #11413
Comments
Decoding with Groebner bases
comment:1
Attachment: report.pdf.gz
Here is my implementation. Please review!
Attachment: decoding_GB.patch.gz
Reviewer: Burcin Erocal |
comment:2
Thanks for the patch Niels. A little more work is needed to get this merged. Here are a few observations based on reading your code without any familiarity with the algorithm:
comment:3
Thanks for reviewing my patch, Burcin! Here is what I have done with your comments:
The changes are in the second patch, so please apply patch 1 first and then patch 2.
Attachment: decoding_GB_2.patch.gz
comment:9
Patches don't merge. Please rebase.
comment:11
I had a look at this patch and the algorithm behind it to see if it would be interesting to polish it off and merge it. My conclusion is that I'm not overly excited about the algorithm, and I report my observations for others.

It's all about speed, since Sage already ships with two "maximum likelihood" decoders, "syndrome" and "nearest neighbor", with complexity roughly q^(n-k) and q^k respectively. The ticket's decoding algorithm needs to solve a Gröbner basis problem for each decoding, and hence its theoretical complexity is difficult to judge. We are therefore left with comparing how well it does in practice.

Both the original article by Bulygin and Pellikaan and Niels, who wrote the code, compare only against syndrome decoding, and they do so only with codes where n - k is large, which is exactly where syndrome decoding performs worst. Indeed, low-rate codes are the selling point of the article. Comparing with nearest neighbor instead, I get the following with Sage 8.0:
We can then guess that correcting in a [120, 30] code with "nearest neighbor" would take roughly 2000 seconds. Note that this is decoding completely random vectors, which is much, much worse than what Bulygin-Pellikaan can cope with (for a [120, 20] code, for example, this often corrects 30-35 errors). However, the Bulygin-Pellikaan method does appear to be faster when the number of errors is small.

Best,
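For reference, a minimal sketch of the kind of timing run discussed above, using Sage's generic decoder framework; the constructor codes.random_linear_code and the decoder name "NearestNeighbor" are assumptions about current Sage, not something taken from this ticket, and no timings are claimed here:

```python
# Sketch: time nearest-neighbour decoding of a completely random received
# word in a random [120, 30] binary code (expect this to be slow, on the
# order of the 2000 s estimated above).
from sage.all import GF, codes, random_vector
import time

C = codes.random_linear_code(GF(2), 120, 30)   # random [120, 30] code over GF(2)
D = C.decoder("NearestNeighbor")               # maximum-likelihood decoder
r = random_vector(GF(2), 120)                  # a completely random received word

t0 = time.time()
c = D.decode_to_code(r)                        # closest codeword to r
print("nearest neighbour decoding took %.1f s" % (time.time() - t0))
```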
comment:12
Note that after #20138 this decoder should also be compared against information-set decoding, which actually looks much more like the proposed Gröbner basis decoder than the aforementioned ML decoders. At the time of writing, the information-set decoder is much, much faster than the proposed decoder:
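A minimal sketch of invoking the information-set decoder from #20138; the decoder name "InformationSet", its number_errors argument, and the StaticErrorRateChannel helper are assumptions about Sage's generic coding API rather than part of this ticket:

```python
# Sketch: information-set decoding of a noisy word in a random [120, 30] code.
from sage.all import GF, codes, channels

C = codes.random_linear_code(GF(2), 120, 30)
Chan = channels.StaticErrorRateChannel(C.ambient_space(), 10)  # add 10 errors
sent = C.random_element()
r = Chan.transmit(sent)

D = C.decoder("InformationSet", 10)    # decoder set up for 10 errors
print(D.decode_to_code(r) == sent)     # True with high probability
```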
I have implemented a decoding method for general linear codes in Sage. The method decodes up to half the true minimum distance using Groebner bases and was introduced by Bulygin and Pellikaan.
I was expecting the method to be faster than syndrome decoding, but it appears to be roughly equally fast. It may still be worth having this method around in Sage, since Groebner basis computation may become faster in the future. I have attached a report with my findings.
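As a small illustration of the decoding radius mentioned above, here is a sketch (not part of the attached patch) computing t = floor((d - 1) / 2) for a concrete code:

```python
# Sketch: "decodes up to half the true minimum distance" means
# t = floor((d - 1) / 2) errors are guaranteed to be correctable.
from sage.all import GF, codes

C = codes.HammingCode(GF(2), 3)   # the [7, 4, 3] binary Hamming code
d = C.minimum_distance()          # true minimum distance, here 3
t = (d - 1) // 2                  # guaranteed decoding radius, here 1
print(C, "corrects up to", t, "errors")
```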
Apply:
CC: @sagetrac-sbulygin @kwankyu
Component: coding theory
Keywords: general decoding groebner basis
Author: Niels Duif
Reviewer: Burcin Erocal
Issue created by migration from https://trac.sagemath.org/ticket/11413