implement Float16 parsing in readdlm #16411
Comments
Is there a workaround to load the matrix as Float16?
I think the problem here is that we don't have a native way of parsing Float16:

julia> parse(Float16, "0")
ERROR: MethodError: no method matching tryparse(::Type{Float16}, ::String)
Closest candidates are:
  tryparse{T<:Integer}(::Type{T<:Integer}, ::AbstractString, ::Int64)
  tryparse{T<:Integer}(::Type{T<:Integer}, ::AbstractString)
  tryparse(::Type{Float64}, ::String)
  ...
 in parse(::Type{Float16}, ::String) at ./parse.jl:159
 in eval(::Module, ::Any) at ./boot.jl:226

You could try reading it in as Float64 and converting to Float16 afterwards. Alternatively, you could use the CSV.jl package:

julia> csv = CSV.csv("/Users/jacobquinn/Downloads/test_float16.csv"; delim=' ', types=[Float16,Float16,Float16])
Data.Table:
2x3 Data.Schema:
col1, col2, col3
Float16, Float16, Float16
Column Data:
Float16[0.0,3.0]
Float16[1.0,2.0]
Float16[2.0,1.0]
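A sketch of the first workaround, assuming a space-delimited file like the one above (test_float16.csv is just a placeholder name):

# Read with a type readdlm already knows how to parse, then narrow to Float16.
# Note: this still allocates the Float64 matrix temporarily.
A64 = readdlm("test_float16.csv", Float64)
A16 = map(Float16, A64)   # Matrix{Float16}; or convert(Matrix{Float16}, A64)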
Float16 is not technically a "computational type" (according to IEEE), but we've gradually moved in the direction of just supporting all operations on it. If someone wants to implement Float16 parsing, that would be cool.
readdlm(..., Float16) cannot parse integers (but readdlm(..., Float64) can).
@StefanKarpinski Does that mean computations with Float16 may be slower than Float32/Float64?
Yes, since they are not implemented in any hardware that I'm aware of. We do Float16 operations by converting to Float32 and back.
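For illustration, that widen-compute-narrow pattern looks roughly like the following (a sketch only; half_add is a made-up name, not the actual Base definition):

# Widen both operands to Float32, do the arithmetic there, then round back to Float16.
half_add(x::Float16, y::Float16) = Float16(Float32(x) + Float32(y))

half_add(Float16(1.5), Float16(2.25))   # Float16(3.75)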
NVIDIA GPUs do Float16 operations, and so does the Xeon Phi (as "half"s). In fact, on the Xeon Phi it works in SIMD vectorized operations to do double the number of calculations at a time compared to Float32. Just pointing that out because it means supporting Float16 can be crucial to targeting accelerator cards.
Ah, I didn't know this was a common type to be supported on GPUs. All the more reason to treat it as a first-class computational type.
When you say parsing, do you mean something like parse(Float16, str), or Float16 literals in the language syntax?
I think the specific case we've been discussing here is implementing parse(Float16, str) / tryparse(Float16, str).
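A minimal sketch of what that could look like, assuming one simply reuses the existing Float32 parser and narrows the result (parse_float16 is a made-up name; a real implementation would need to consider double rounding and hook into tryparse so readdlm can use it):

# Parse the text as Float32 first, then narrow to Float16.
parse_float16(s::AbstractString) = Float16(parse(Float32, s))

parse_float16("0")       # Float16(0.0)
parse_float16("2.5e-3")  # roughly Float16(0.0025)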
OK, thanks. Of course, if Float16 were to be treated as a "first class computational type"... but I understand there is reluctance to add more to the language parser.
See also #15731
Related: JuliaLang/LinearAlgebra.jl#330
?
This has been fixed in a2a6d18
It does seem to be fixed, but GitHub (and I) can't find that commit?
sorry, that was a tree hash. now corrected |
Thanks, I think this can be closed then. |
Try this. Create a text file containing a small numeric matrix (a reconstruction is sketched below). Now try to load this file into Julia using readdlm(filename, Float16). It will give an error. However, readdlm(filename, Float64) works fine.

This is problematic: in my particular case, I have a large matrix of small float numbers, some of which are written as exact integers, and I don't want to load it as Float64, to save memory.

I am using Julia v0.4.5.
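For reference, a minimal reproduction sketch (the file contents are an assumption inferred from the CSV.jl output earlier in the thread and may not match the original file exactly):

# using DelimitedFiles   # needed on Julia >= 0.7; readdlm is in Base on 0.4/0.5
open("test_float16.csv", "w") do io
    write(io, "0 1 2\n3 2 1\n")
end

readdlm("test_float16.csv", Float16)   # errors: Float16 cannot be parsed
readdlm("test_float16.csv", Float64)   # works, returns a 2x3 Array{Float64,2}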