
readdlm test failure on 64-bit Fedora with LLVM 3.4 #6757

Closed
nalimilan opened this issue May 5, 2014 · 76 comments
Labels: upstream (the issue is with an upstream dependency, e.g. LLVM)

@nalimilan
Member

I see this failure when running the tests for my RPM package in 64-bit mode on a Fedora build machine. This is with git master as of today. Funnily enough, the test passes on my machine; could it have been fixed in the last few hours?

cd test
make all
Warning: git information unavailable; versioning information limited
    JULIA test/all
    From worker 2:       * linalg1
    From worker 4:       * linalg3
    From worker 3:       * linalg2
    From worker 4:       * core
    From worker 4:       * keywordargs
    From worker 4:       * numbers
    From worker 4:       * strings
    From worker 4:       * collections
    From worker 4:       * hashing
    From worker 4:       * remote
    From worker 4:       * iobuffer
    From worker 4:       * arrayops
    From worker 2:       * simdloop
    From worker 2:       * blas
    From worker 2:       * fft
    From worker 2:       * dsp
    From worker 2:       * sparse
    From worker 2:       * bitarray
    From worker 4:       * random
    From worker 4:       * math
    From worker 4:       * functional
    From worker 4:       * bigint
    From worker 4:       * sorting
    From worker 4:       * statistics
    From worker 2:       * spawn
    From worker 2:         [stdio passthrough ok]
    From worker 3:       * priorityqueue
    From worker 3:       * arpack
    From worker 4:       * file
    From worker 2:       * suitesparse
    From worker 2:       * version
    From worker 2:       * resolve
    From worker 3:       * pollfd
    From worker 3:       * mpfr
    From worker 4:       * broadcast
    From worker 3:       * complex
    From worker 2:       * socket
    From worker 3:       * floatapprox
    From worker 3:       * regex
    From worker 2:       * readdlm
    From worker 3:       * float16
    From worker 3:       * combinatorics
exception on 2: ERROR: BoundsError()
while loading readdlm.jl, in expression starting on line 4
ERROR: BoundsError()
while loading readdlm.jl, in expression starting on line 4
while loading /builddir/build/BUILD/julia-master/test/runtests.jl, in expression starting on line 46
make: *** [all] Error 1
@ViralBShah
Member

Cc @tanmaykm

@tanmaykm
Member

tanmaykm commented May 6, 2014

Strange... Is the perf/kernel/imdb-1.tsv file present, and can it be found from where the tests are running? Sorry, I don't have access to a Fedora machine; could you run the failing test from the REPL?

@nalimilan
Member Author

@tanmaykm As I said, I'm not able to reproduce the bug on my machine, only on the Fedora build VM. Could you suggest a few commands I could add so that the needed debug info is printed when building the package?

@tanmaykm
Member

tanmaykm commented May 6, 2014

Sure.

Start the REPL in the test folder:

cd test
julia

And run the following in the REPL:

f = joinpath("perf", "kernel", "imdb-1.tsv")
isfile(f)
dlm_data = readdlm(joinpath("perf", "kernel", "imdb-1.tsv"), '\t')

@nalimilan
Member Author

@tanmaykm I can't reproduce the bug at the REPL. It would be nice if you could think of a few things which could fail, so that I put a series of debugging statements in my RPM package, and get the logs after the build runs. I'll test that the file exists, but I can't see any reason why it could be missing, so I'd like to test a few other possibilities if you have ideas (each build takes some time to start).

@tanmaykm
Member

tanmaykm commented May 6, 2014

A complete traceback of the exception would help identify the source line where the BoundsError occurred, and then we can add more debug statements. So, if the file exists, just calling readdlm on it should print the exception stack.
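For reference, here is a rough sketch of how to get that traceback from the REPL (assuming 0.3-era names such as STDERR, catch_backtrace and Base.show_backtrace, and that the REPL is started from the test directory):

    f = joinpath("perf", "kernel", "imdb-1.tsv")
    try
        readdlm(f, '\t')
    catch err
        # print the error followed by the full backtrace of the failure
        showerror(STDERR, err)
        Base.show_backtrace(STDERR, catch_backtrace())
        println(STDERR)
    end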

@nalimilan
Member Author

OK, got it. I think this is related to the fact that I build the RPM package against LLVM 3.4. Unfortunately, that's the only available version in Fedora 20.

Eventually I simplified the command to this:

../julia -e 'readdlm("perf/kernel/imdb-1.tsv", '\''t'\'')'
ERROR: BoundsError()

And I'm able to reproduce this locally when using LLVM 3.4. Do you think it's worth trying to fix? Without support for this version I'm not sure I'll be able to package Julia for Fedora; I still need to investigate, since in the long term this may prove problematic.

@nalimilan nalimilan changed the title readdlm test failure on 64-bit Fedora readdlm test failure on 64-bit Fedora with LLVM 3.4 May 6, 2014
@ViralBShah
Member

I use LLVM 3.4 on Mac and haven't seen this failure.

@nalimilan
Member Author

That's weird. Going back to 3.3 (where possible; it's a little hackish on Fedora 20 now) clearly fixed the problem.

@nalimilan
Member Author

Any ideas about how I could debug the problem further?

@ihnorton
Member

I can reproduce this on Ubuntu 13.10. Here's a backtrace (using #6737):

julia> readdlm("test/perf/kernel/imdb-1.tsv", '\t')
ERROR: BoundsError()
 in julia_setindex!;17662 at ./array.jl:306
 in julia_colval;17661 at ./datafmt.jl:315
 in julia_store_cell;17660 at ./datafmt.jl:193
 in julia_dlm_fill;17645 at ./datafmt.jl:294
 in julia_dlm_fill;17645 at ./datafmt.jl:307
 in julia_readdlm_string;17635 at ./datafmt.jl:269
 in julia___readdlm_auto#124__;17620 at ./datafmt.jl:58
 in julia___readdlm#122__;17619 at ./datafmt.jl:49
 in julia___readdlm#121__;17618 at ./datafmt.jl:47
 in julia_eval_user_input;17600 at ./REPL.jl:53

@nalimilan
Member Author

Cool! :-)

@tanmaykm
Member

I was able to replicate this on an Ubuntu 13.10 machine as well.

I also observed that in the dlm_fill method, the values in the dims tuple mysteriously change sometime after the call to DLMStore at line 293 (dh = DLMStore(T, dims, has_header, sbuff, auto, eol)).

Adding debug statements in the method shifts the problem around. Looks like some kind of memory corruption?

@ViralBShah
Member

Was that in llvm 3.4 or 3.3?

@tanmaykm
Member

llvm 3.4

@ihnorton
Member

Also happens in 3.5.

@JeffBezanson JeffBezanson removed this from the 0.3 milestone May 14, 2014
@JeffBezanson
Member

Removing the 0.3 milestone, as 0.3 only uses LLVM 3.3. We can reprioritize if this turns out to happen with LLVM 3.3 as well.

@nalimilan
Member Author

Yet it's going to prevent me from packaging 0.3 in Fedora. Not saying it should be a blocker, but it's still relatively annoying.

@JeffBezanson
Member

Is that because Fedora will only support LLVM 3.4? They really shouldn't do that. Different versions of LLVM are in general quite incompatible.

@nalimilan
Member Author

Yeah, that's something I'm going to discuss with them. The Fedora LLVM maintainers recently updated to 3.4, and for now I seem to be forced to follow that change.

@ViralBShah
Member

That is a bummer if we can't be in the next Fedora release.

@nalimilan
Member Author

I've just asked LLVM Fedora maintainers about this problem: https://bugzilla.redhat.com/show_bug.cgi?id=1098534

@ViralBShah
Member

If the fix does not require major work, it would be really nice if we could fix this issue for LLVM 3.4 in time for the 0.3 release.

@JeffBezanson
Member

Yeah, that and all the other bugs we don't yet know about.

@ViralBShah
Member

I am sure there are others. Let's see what the Fedora LLVM maintainers say. Otherwise, IIUC, we could be in the fedora-updates repository.

@nalimilan
Member Author

Yeah, there may well be other bugs hidden somewhere... At least tests should catch the most important ones.

@ViralBShah fedora-updates is the normal place where new packages appear, and being there is really not a problem. The question is rather whether Julia can be included at all. But we'll likely figure out a solution.

@StefanKarpinski
Member

I completely understand you not wanting to maintain LLVM 3.3 for Debian, but declaring a version that is less than a year old completely dead is kind of strange. We may just have to declare that LLVM is part of the Julia source and not even pretend that linking against it as a shared library provided by the distro is going to work.

@svillemot
Contributor

@StefanKarpinski The decision that LLVM 3.3 is dead was taken by the LLVM developers, not by Debian. Embedding a copy of LLVM 3.3 in Julia will not solve the issue, at least for Debian, for reasons explained above.

@svillemot
Contributor

I understand that the constraints of a distro like Debian are different from yours. At a personal level, I cannot do much about the decision to remove LLVM 3.3 (I tried to argue with the Debian maintainer), and I cannot embed a copy of LLVM 3.3 in the Julia package because that goes completely against the spirit of a distro and could create problems with other parties (such as the security team). My current plan is to ship Julia 0.3 with LLVM 3.5. But if that is something you definitely don't want to happen because it could give a bad image of Julia, then the only option left is to not have Julia in the next Debian release. That would be sad, but it's not the end of the world either.

@tkelman
Contributor

tkelman commented Aug 9, 2014

Having a buggy version of Julia in the distribution is probably better, and easier to fix later, than no version at all, though that's up to Stefan, Jeff, Viral, etc. to decide. It sounds like this maintenance policy question really needs to be raised on the LLVM list, as more and more downstream projects depend on specific older versions that distributions then need to include, and this is starting to interact poorly with LLVM's fast, break-everything development style (which has other benefits, but gets in the way here). I know Rust has its own set of problems with distributions and bootstrapping, but what happens when Rust hits 1.0, for example? They're probably going to need to support a fixed version of LLVM for a while.

@JeffBezanson
Member

I wasn't under the impression that Debian liked to ship buggy versions of things, but apparently they prefer that to allowing us to link to, you know, our dependencies.

We could stand to get a better sense of how well Julia 0.3 works with LLVM 3.5. We know there is a failing test, but we should try commenting out that test and see whether at least everything else passes.
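One low-tech way to do that, sketched under the assumption that test/runtests.jl of that era builds a testnames array before dispatching the tests to the workers, is to drop readdlm from the list before the dispatch:

    # hypothetical one-line edit near the top of test/runtests.jl
    testnames = filter(t -> t != "readdlm", testnames)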

@nalimilan
Member Author

I'm currently discussing the same issue for Fedora. I may be allowed to use LLVM 3.3 for the next release, but it's not certain yet.
https://bugzilla.redhat.com/show_bug.cgi?id=1098534
https://bugzilla.redhat.com/show_bug.cgi?id=1109390

As more out-of-tree compilers like Julia and Rust rely on LLVM, it seems the LLVM developers really should review their release policy.

@svillemot
Contributor

@JeffBezanson On my 64-bit machine, I can confirm that readdlm is the only failing test with LLVM 3.5.

@JeffBezanson
Member

That's nice. Keno and several others have been going above & beyond to try to keep Julia working reasonably well with LLVM 3.5.

@ihnorton
Member

ihnorton commented Aug 9, 2014

3.5 still has the old JIT; we could switch that back on with 3.5, although that setup is untested, whereas several of us have been using MCJIT regularly.

@tknopp
Contributor

tknopp commented Aug 9, 2014

Has anybody tested the performance of the old JIT vs. MCJIT? I remember reading that the old JIT was faster.

Is it really an issue to have a package with a statically linked LLVM 3.3? How do we do it with libuv? Isn't our version patched so that we cannot use upstream?

@Keno
Member

Keno commented Aug 9, 2014

I'll have another go at the readdlm thing next week. Hopefully I can figure it out in time for the 3.5 release.

@svillemot
Contributor

@tknopp Indeed, libuv is included as an embedded copy in the Debian package. But first, this is supposed to be a temporary situation (see JuliaLang/libuv#2). And second, libuv is a much smaller codebase than LLVM, which means it is much more manageable to maintain a forked embedded copy of libuv than one of LLVM.

@Keno
Member

Keno commented Aug 17, 2014

I've started to run some experiments on which IR passes cause this in order to try to identify the IR pattern that gets miscompiled. First results:

julia> for i = 1:length(julia_passes)
       x = run_test(julia_passes[1:i])
       @show (i,x)
       end
(i,x) => (1,true)
(i,x) => (2,true)
(i,x) => (3,false)
(i,x) => (4,true)
(i,x) => (5,true)
(i,x) => (6,true)
(i,x) => (7,true)
julia: /home/kfischer/julia-master/deps/llvm-svn/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldELF.cpp:308: void llvm::RuntimeDyldELF::resolveX86_64Relocation(const llvm::SectionEntry&, uint64_t, uint64_t, uint32_t, int64_t, uint64_t): Assertion `RealOffset <= (2147483647) && RealOffset >= (-2147483647-1)' failed.

Encouraging, isn't it? ;)

EDIT: true indicates the test ran successfully, false indicates it hit the BoundsError problem. The assertion failure probably indicates a failure inside LLVM itself.
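In other words, the driver just walks prefixes of the pass list and records which ones reproduce the bug. A generic sketch of that idea (run_test and julia_passes here are stand-ins for the actual harness, which is linked further down in this thread):

    # Collect the prefix lengths i for which enabling only julia_passes[1:i]
    # reproduces the BoundsError; run_test returns true on success.
    function failing_prefixes(julia_passes, run_test)
        bad = Int[]
        for i in 1:length(julia_passes)
            run_test(julia_passes[1:i]) || push!(bad, i)
        end
        return bad
    end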

@Keno
Member

Keno commented Aug 17, 2014

And in reverse:

e => BoundsError()
(length(julia_passes) - i,x) => (20,false)
e => BoundsError()
(length(julia_passes) - i,x) => (19,false)
e => BoundsError()
(length(julia_passes) - i,x) => (18,false)
e => BoundsError()
(length(julia_passes) - i,x) => (17,false)
e => BoundsError()
(length(julia_passes) - i,x) => (16,false)
e => BoundsError()
(length(julia_passes) - i,x) => (15,false)
e => BoundsError()
(length(julia_passes) - i,x) => (14,false)
e => BoundsError()
(length(julia_passes) - i,x) => (13,false)
e => BoundsError()
(length(julia_passes) - i,x) => (12,false)
e => BoundsError()
(length(julia_passes) - i,x) => (11,false)
e => BoundsError()
(length(julia_passes) - i,x) => (10,false)
(length(julia_passes) - i,x) => (9,true)

@Keno
Member

Keno commented Aug 17, 2014

Seems to be a problem with LLVM's stack slot coloring pass:

julia> run_test()
e => BoundsError()
false

julia> pass_llvm_command_line("-disable-ssc")

julia> run_test()
true

@Keno
Member

Keno commented Aug 17, 2014

Specifically, stack slot sharing:

julia> run_test()
e => BoundsError()
false

julia> pass_llvm_command_line("-no-stack-slot-sharing")

julia> run_test()
true

Feels like I'm getting closer :)

@Keno
Member

Keno commented Aug 18, 2014

Here's a debug dump from the stack coloring pass:

********** Stack Slot Coloring **********
********** Function: julia_dlm_fill19994
Spill slot intervals:
SS#6 [96r,544B:0)[704B,800B:0)[960B,2592B:0)[2752B,2816B:0)[2976B,3056B:0)[3216B,3296B:0)[3456B,3536B:0)[3696B,4112B:0)[4272B,4368B:0)[4528B,5328B:0)[5888B,6864r:0)  0@x
SS#7 [744r,800B:0)[960B,2016r:0)[5888B,7552r:0)  0@x
SS#8 [736r,800B:0)[960B,1920r:0)[5888B,7456r:0)  0@x
SS#9 [728r,800B:0)[960B,1904r:0)[5888B,7440r:0)  0@x
SS#10 [712r,800B:0)[960B,1632r:0)[5888B,7168r:0)  0@x
SS#11 [2256r,2592B:0)[2752B,2816B:0)[2976B,3056B:0)[3216B,3296B:0)[3456B,3536B:0)[3696B,4112B:0)[4272B,4368B:0)[4528B,5344r:0)  0@x
SS#12 [720r,800B:0)[960B,1776r:0)[5888B,7312r:0)  0@x
SS#13 [3792r,4112B:0)[4272B,4368B:0)[4528B,5072r:0)  0@x

Color spill slot intervals:
Assigning fi#13 to fi#6
Assigning fi#11 to fi#7
Assigning fi#6 to fi#8
Assigning fi#10 to fi#6
Assigning fi#7 to fi#7
Assigning fi#8 to fi#9
Assigning fi#9 to fi#10
Assigning fi#12 to fi#11

Spill slots after coloring:
SS#6 [96r,544B:0)[704B,800B:0)[960B,2592B:0)[2752B,2816B:0)[2976B,3056B:0)[3216B,3296B:0)[3456B,3536B:0)[3696B,4112B:0)[4272B,4368B:0)[4528B,5328B:0)[5888B,6864r:0)  0@x
SS#7 [744r,800B:0)[960B,2016r:0)[5888B,7552r:0)  0@x
SS#8 [736r,800B:0)[960B,1920r:0)[5888B,7456r:0)  0@x
SS#11 [2256r,2592B:0)[2752B,2816B:0)[2976B,3056B:0)[3216B,3296B:0)[3456B,3536B:0)[3696B,4112B:0)[4272B,4368B:0)[4528B,5344r:0)  0@x
SS#10 [712r,800B:0)[960B,1632r:0)[5888B,7168r:0)  0@x
SS#9 [728r,800B:0)[960B,1904r:0)[5888B,7440r:0)  0@x
SS#13 [3792r,4112B:0)[4272B,4368B:0)[4528B,5072r:0)  0@x
SS#12 [720r,800B:0)[960B,1776r:0)[5888B,7312r:0)  0@x

Removing unused stack object fi#12
Removing unused stack object fi#13
********** Stack Slot Coloring **********
********** Function: julia_dlm_fill19994
Spill slot intervals:
SS#6 [96r,544B:0)[704B,800B:0)[960B,2592B:0)[2752B,2816B:0)[2976B,3056B:0)[3216B,3296B:0)[3456B,3536B:0)[3696B,4112B:0)[4272B,4368B:0)[4528B,5328B:0)[5888B,6864r:0)  0@x
SS#7 [744r,800B:0)[960B,2016r:0)[5888B,7552r:0)  0@x
SS#8 [736r,800B:0)[960B,1920r:0)[5888B,7456r:0)  0@x
SS#9 [728r,800B:0)[960B,1904r:0)[5888B,7440r:0)  0@x
SS#10 [712r,800B:0)[960B,1632r:0)[5888B,7168r:0)  0@x
SS#11 [2256r,2592B:0)[2752B,2816B:0)[2976B,3056B:0)[3216B,3296B:0)[3456B,3536B:0)[3696B,4112B:0)[4272B,4368B:0)[4528B,5344r:0)  0@x
SS#12 [720r,800B:0)[960B,1776r:0)[5888B,7312r:0)  0@x
SS#13 [3792r,4112B:0)[4272B,4368B:0)[4528B,5072r:0)  0@x

Color spill slot intervals:
Assigning fi#13 to fi#6
Assigning fi#11 to fi#7
Assigning fi#6 to fi#8
Assigning fi#10 to fi#9
Assigning fi#7 to fi#10
Assigning fi#8 to fi#11
Assigning fi#9 to fi#12
Assigning fi#12 to fi#13

Spill slots after coloring:
SS#6 [96r,544B:0)[704B,800B:0)[960B,2592B:0)[2752B,2816B:0)[2976B,3056B:0)[3216B,3296B:0)[3456B,3536B:0)[3696B,4112B:0)[4272B,4368B:0)[4528B,5328B:0)[5888B,6864r:0)  0@x
SS#7 [744r,800B:0)[960B,2016r:0)[5888B,7552r:0)  0@x
SS#8 [736r,800B:0)[960B,1920r:0)[5888B,7456r:0)  0@x
SS#9 [728r,800B:0)[960B,1904r:0)[5888B,7440r:0)  0@x
SS#13 [3792r,4112B:0)[4272B,4368B:0)[4528B,5072r:0)  0@x
SS#11 [2256r,2592B:0)[2752B,2816B:0)[2976B,3056B:0)[3216B,3296B:0)[3456B,3536B:0)[3696B,4112B:0)[4272B,4368B:0)[4528B,5344r:0)  0@x
SS#10 [712r,800B:0)[960B,1632r:0)[5888B,7168r:0)  0@x
SS#12 [720r,800B:0)[960B,1776r:0)[5888B,7312r:0)  0@x

The second dump is with sharing disabled. As you can see, the first one deletes two stack slots. Of course, this isn't necessarily a problem with the stack coloring pass itself; it might also be a problem with the analysis pass it uses.

@Keno
Member

Keno commented Aug 18, 2014

@ihnorton Why were you thinking the tuple is passed on the stack? From looking at the generated code, the callee is expecting it in an XMM register.

@Keno
Member

Keno commented Aug 18, 2014

As far as I can tell, the stack coloring transformation is correct, so I wonder what's going on.

@Keno Keno closed this as completed in 6a1cc96 Aug 18, 2014
Keno added a commit that referenced this issue Aug 18, 2014
This issue is quite subtle, and originally I had assumed it was a bug in LLVM, because it heavily depends on which optimizations are enabled, but it turns out to be a missing annotation in our IR. Before I describe the underlying issue: there is a similar description in the commit by @isanbard from when this was originally found (in Clang, I presume) at [1].

The dlm_fill function has an interesting pattern: when it can't parse the given data with the requested type, it tries again by calling itself with a more generic type. This, and the call to store_cell, go through dynamic dispatch, so the dims tuple that is passed in unboxed needs to be reboxed for the dynamic dispatch. The problem seen in the original issue stemmed from the fact that the dims tuple, which was originally passed in as (2,2), suddenly became (1,2) for the second call. This happened by the following mechanism:

dims=(2,2) gets passed in via XMM0 and is almost immediately spilled to the stack. Sometime before the generic call to store_cell (and potentially earlier calls), dims is reloaded from the stack slot in order to extract its components and box them individually for the generic call. At this point LLVM thinks there is no remaining use for the original stack slot in which dims was stored and proceeds to reuse it (the optimization pass responsible for this is StackSlotColoring). We then get to store_cell, which throws an exception that longjmps us back to almost the beginning of dlm_fill, where dims was still expected to be in the stack slot. However, said stack slot has been overwritten, which corrupts the dims tuple on the second go-around.

The solution to this on the LLVM side is shown in [1] below, i.e. simply disabling the StackSlotColoring pass. However, this check did not fire on our IR, because we weren't marking the call site as returns_twice. For better or for worse, LLVM requires most attributes to be duplicated on the call site as well as on the callee (where we had already set this attribute).
[1] llvm-mirror/llvm@5edfbdc
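
To make the control-flow shape concrete, here is a heavily simplified, hypothetical Julia sketch of the retry-on-failure pattern described above (the real code lives in base/datafmt.jl; the point is just that dims must still be valid after the exception unwinds back into dlm_fill, and its spill slot is exactly what got reused):

    # Hypothetical sketch (0.3-era syntax): on a conversion failure the routine
    # calls itself again with a wider element type, so `dims` has to remain
    # valid across the exception unwind.
    function fill_cells{T}(::Type{T}, cells, dims)
        out = Array(T, dims...)
        try
            for (i, v) in enumerate(cells)
                out[i] = convert(T, v)      # may throw for mixed-type input
            end
            return out
        catch
            T === Any && rethrow()          # nothing more generic to fall back to
            return fill_cells(Any, cells, dims)
        end
    end

    fill_cells(Int, Any[1, 2, "x", 4], (2, 2))   # retries with Any and succeeds
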
@ihnorton
Member

Wow, nice sleuthing!

(I missed your question last night...)

Why were you thinking the tuple is passed on the stack?

limping through the disassembly, more or less.

@Keno
Member

Keno commented Aug 18, 2014

Also, for posterity and general enjoyment, here's the code I wrote to track this down: https://gist.github.com/Keno/10a0c57dbb266e46df84

@JeffBezanson
Member

I see you use .jl as the extension for your C++ source files now :)

The application of julia metaprogramming to C++ here is absolutely mind-boggling.

@timholy
Member

timholy commented Aug 18, 2014

Whoa. When you go bug-hunting, you carry some heavy weaponry! Poor things don't stand a chance.

@Keno
Member

Keno commented Aug 18, 2014

The application of julia metaprogramming to C++ here is absolutely mind-boggling.

Yeah, I love the

    for p in passes
        add(FPM, @eval @cxx $p())
    end

@StefanKarpinski
Member

This is nuts. @Keno, you've outdone yourself.
