vectorized float() deprecations #22966

Closed · 4 commits
2 changes: 1 addition & 1 deletion base/bitarray.jl
@@ -347,7 +347,7 @@ similar(B::BitArray, dims::Dims) = BitArray(dims...)

similar(B::BitArray, T::Type{Bool}, dims::Dims) = BitArray(dims)
# changing type to a non-Bool returns an Array
-# (this triggers conversions like float(bitvector) etc.)
+# (this triggers conversions like float.(bitvector) etc.)
similar(B::BitArray, T::Type, dims::Dims) = Array{T}(dims)

function fill!(B::BitArray, x)
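
For context on the comment change above, a small illustrative sketch (not part of the diff; values are hypothetical, semantics as in Julia 0.6/0.7): converting a BitArray to a non-Bool element type, e.g. via float.(bitvector), yields an ordinary Array rather than another BitArray.

b = trues(3)     # 3-element BitVector
f = float.(b)    # element type is no longer Bool...
typeof(f)        # ...so the result is a Vector{Float64}, not a BitVector
f                # [1.0, 1.0, 1.0]
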
4 changes: 4 additions & 0 deletions base/deprecated.jl
@@ -1593,6 +1593,10 @@ end

@deprecate momenttype(::Type{T}) where {T} typeof((zero(T)*zero(T) + zero(T)*zero(T))/2) false

+# PR #22966
+@dep_vectorize_1arg Number float
+@dep_vectorize_1arg AbstractString float

# END 0.7 deprecations

# BEGIN 1.0 deprecations
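
In practice, these two @dep_vectorize_1arg lines keep the old vectorized calls working through 0.7 while steering callers to the dot syntax. A rough before/after sketch for users (illustrative only, not taken from the PR):

float([1, 2, 3])          # deprecated: still returns [1.0, 2.0, 3.0], but with a depwarn
float.([1, 2, 3])         # replacement via broadcast: [1.0, 2.0, 3.0]

float(["1.5", "2.5"])     # deprecated vectorized parse of strings
float.(["1.5", "2.5"])    # replacement: [1.5, 2.5]
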
22 changes: 7 additions & 15 deletions base/float.jl
@@ -850,21 +850,13 @@ truncmask(x, mask) = x

## Array operations on floating point numbers ##

-float(A::AbstractArray{<:AbstractFloat}) = A
-
-function float(A::AbstractArray{T}) where T
-    if !isleaftype(T)
-        error("`float` not defined on abstractly-typed arrays; please convert to a more specific type")
-    end
-    convert(AbstractArray{typeof(float(zero(T)))}, A)
-end
-
-float(r::StepRange) = float(r.start):float(r.step):float(last(r))
-float(r::UnitRange) = float(r.start):float(last(r))
-float(r::StepRangeLen) = StepRangeLen(float(r.ref), float(r.step), length(r), r.offset)
-function float(r::LinSpace)
-    LinSpace(float(r.start), float(r.stop), length(r))
-end
+# Optimizations for broadcasting float()
+broadcast(::typeof(float), A::AbstractArray{<:AbstractFloat}) = A
c42f (Member, Author) commented:

So, @andreasnoack do you mean we should or should not have this optimization?

At the moment I left it in here to duplicate the old behavior of avoiding the copy. A counterargument might be that people usually expect broadcast to make a copy of the data, and that optimizing it away in particular cases is too confusing?

Member commented:

Ah, sorry. I missed that "adding a few broadcast specializations" was exactly about this issue. I'm not sure what my opinion is. Always getting a copy is easy to reason about but avoiding an unnecessary copy is also a good thing.

c42f (Member, Author) commented on Jul 26, 2017:

No problem, I'm not sure what's better here either. At the moment I went for the faster version, as it allows broadcast to be a drop-in replacement for vectorized float().

Do we have any precedent for not returning a copy of the data in broadcast()? I had a quick look through Base, and all I found was that broadcast(::typeof(xor), B::BitArray, x::Bool) takes the opposite stance and calls copy() when it wouldn't otherwise be required.

+# Ranges
+broadcast(::typeof(float), r::StepRange) = float(r.start):float(r.step):float(last(r))
+broadcast(::typeof(float), r::UnitRange) = float(r.start):float(last(r))
+broadcast(::typeof(float), r::StepRangeLen) = StepRangeLen(float(r.ref), float(r.step), length(r), r.offset)
+broadcast(::typeof(float), r::LinSpace) = LinSpace(float(r.start), float(r.stop), length(r))

# big, broadcast over arrays
# TODO: do the definitions below primarily pertaining to integers belong in float.jl?
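
To make the copy-vs-alias question from the thread above concrete, here is a minimal sketch of how these specializations behave (illustrative; A and r are hypothetical values, and the aliasing in the first case is exactly the optimization under discussion):

A = rand(3)         # already an Array{Float64}
float.(A) === A     # true with the AbstractFloat specialization above: no copy is made

r = 1:2:9           # StepRange{Int,Int}
float.(r)           # 1.0:2.0:9.0, still a range rather than a collected Vector

Without the first specialization, float.(A) would allocate a fresh array, which is the "broadcast always copies" behaviour discussed above.
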
4 changes: 2 additions & 2 deletions base/linalg/bitarray.jl
@@ -98,8 +98,8 @@ end

## norm and rank

-svd(A::BitMatrix) = svd(float(A))
-qr(A::BitMatrix) = qr(float(A))
+svd(A::BitMatrix) = svd(float.(A))
+qr(A::BitMatrix) = qr(float.(A))

## kron

6 changes: 3 additions & 3 deletions base/linalg/dense.jl
@@ -408,7 +408,7 @@ julia> expm(A)
```
"""
expm(A::StridedMatrix{<:BlasFloat}) = expm!(copy(A))
-expm(A::StridedMatrix{<:Integer}) = expm!(float(A))
+expm(A::StridedMatrix{<:Integer}) = expm!(float.(A))
expm(x::Number) = exp(x)

## Destructive matrix exponential using algorithm from Higham, 2008,
@@ -926,7 +926,7 @@ function sylvester(A::StridedMatrix{T},B::StridedMatrix{T},C::StridedMatrix{T})
Y, scale = LAPACK.trsyl!('N','N', RA, RB, D)
scale!(QA*A_mul_Bc(Y,QB), inv(scale))
end
-sylvester(A::StridedMatrix{T}, B::StridedMatrix{T}, C::StridedMatrix{T}) where {T<:Integer} = sylvester(float(A), float(B), float(C))
+sylvester(A::StridedMatrix{T}, B::StridedMatrix{T}, C::StridedMatrix{T}) where {T<:Integer} = sylvester(float.(A), float.(B), float.(C))

sylvester(a::Union{Real,Complex}, b::Union{Real,Complex}, c::Union{Real,Complex}) = -c / (a + b)

@@ -946,5 +946,5 @@ function lyap(A::StridedMatrix{T}, C::StridedMatrix{T}) where {T<:BlasFloat}
Y, scale = LAPACK.trsyl!('N', T <: Complex ? 'C' : 'T', R, R, D)
scale!(Q*A_mul_Bc(Y,Q), inv(scale))
end
-lyap(A::StridedMatrix{T}, C::StridedMatrix{T}) where {T<:Integer} = lyap(float(A), float(C))
+lyap(A::StridedMatrix{T}, C::StridedMatrix{T}) where {T<:Integer} = lyap(float.(A), float.(C))
lyap(a::T, c::T) where {T<:Number} = -c/(2a)
2 changes: 0 additions & 2 deletions base/parse.jl
@@ -213,8 +213,6 @@ end

float(x::AbstractString) = parse(Float64,x)

-float(a::AbstractArray{<:AbstractString}) = map!(float, similar(a,typeof(float(0))), a)

## interface to parser ##

function parse(str::AbstractString, pos::Int; greedy::Bool=true, raise::Bool=true)
2 changes: 0 additions & 2 deletions base/sparse/sparsematrix.jl
@@ -417,8 +417,6 @@ julia> full(A)
"""
full

-float(S::SparseMatrixCSC) = SparseMatrixCSC(S.m, S.n, copy(S.colptr), copy(S.rowval), float.(S.nzval))

complex(S::SparseMatrixCSC) = SparseMatrixCSC(S.m, S.n, copy(S.colptr), copy(S.rowval), complex(copy(S.nzval)))

# Construct a sparse vector
4 changes: 0 additions & 4 deletions base/sparse/sparsevector.jl
@@ -852,10 +852,6 @@ function reinterpret(::Type{T}, x::AbstractSparseVector{Tv}) where {T,Tv}
SparseVector(length(x), copy(nonzeroinds(x)), reinterpret(T, nonzeros(x)))
end

-float(x::AbstractSparseVector{<:AbstractFloat}) = x
-float(x::AbstractSparseVector) =
-    SparseVector(length(x), copy(nonzeroinds(x)), float(nonzeros(x)))

complex(x::AbstractSparseVector{<:Complex}) = x
complex(x::AbstractSparseVector) =
SparseVector(length(x), copy(nonzeroinds(x)), complex(nonzeros(x)))
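
A note on the two sparse removals above (this one and the one in sparsematrix.jl): generic sparse broadcast is expected to take over and preserve the stored-entry structure, so dedicated float methods are no longer needed. A rough sketch of the expected behaviour (illustrative, not taken from the PR):

x = sparsevec([2, 5], [3, 7], 8)            # SparseVector{Int} with two stored entries
y = float.(x)                               # expected: SparseVector{Float64}
nnz(y) == nnz(x)                            # same stored-entry pattern

S = sparse([1, 2], [2, 3], [10, 20], 3, 3)  # 3x3 SparseMatrixCSC{Int}
float.(S)                                   # expected: SparseMatrixCSC{Float64}, still sparse
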
2 changes: 1 addition & 1 deletion base/sparse/umfpack.jl
@@ -155,7 +155,7 @@ lufact(A::Union{SparseMatrixCSC{T},SparseMatrixCSC{Complex{T}}}) where {T<:Abstr
"Try lufact(convert(SparseMatrixCSC{Float64/Complex128,Int}, A)) for ",
"sparse floating point LU using UMFPACK or lufact(Array(A)) for generic ",
"dense LU.")))
-lufact(A::SparseMatrixCSC) = lufact(float(A))
+lufact(A::SparseMatrixCSC) = lufact(float.(A))


size(F::UmfpackLU) = (F.m, F.n)
2 changes: 1 addition & 1 deletion test/linalg/lu.jl
@@ -198,7 +198,7 @@ Hinv = Rational{BigInt}[(-1)^(i+j)*(i+j-1)*binomial(nHilbert+i-1,nHilbert-j)*
for i = big(1):nHilbert,j=big(1):nHilbert]
@test inv(H) == Hinv
setprecision(2^10) do
-@test norm(Array{Float64}(inv(float(H)) - float(Hinv))) < 1e-100
+@test norm(Array{Float64}(inv(float.(H)) - float.(Hinv))) < 1e-100
end

# Test balancing in eigenvector calculations
2 changes: 1 addition & 1 deletion test/ranges.jl
@@ -747,7 +747,7 @@ test_linspace_identity(linspace(1f0, 1f0, 1), linspace(-1f0, -1f0, 1))
# PR 12200 and related
for _r in (1:2:100, 1:100, 1f0:2f0:100f0, 1.0:2.0:100.0,
linspace(1, 100, 10), linspace(1f0, 100f0, 10))
-float_r = float(_r)
+float_r = float.(_r)
big_r = big.(_r)
@test typeof(big_r).name === typeof(_r).name
if eltype(_r) <: AbstractFloat
4 changes: 2 additions & 2 deletions test/reduce.jl
@@ -83,7 +83,7 @@
@test sum([3.0]) === 3.0

z = reshape(1:16, (2,2,2,2))
-fz = float(z)
+fz = float.(z)
@test sum(z) === 136
@test sum(fz) === 136.0

@@ -95,7 +95,7 @@ a = sum(sin, z)
@test a ≈ sum(sin.(fz))

z = [-4, -3, 2, 5]
-fz = float(z)
+fz = float.(z)
a = randn(32) # need >16 elements to trigger BLAS code path
b = complex.(randn(32), randn(32))

6 changes: 3 additions & 3 deletions test/sorting.jl
@@ -230,7 +230,7 @@ for n in [0:10; 100; 101; 1000; 1001]
for rev in [false,true]
# insertion sort (stable) as reference
pi = sortperm(v, alg=InsertionSort, rev=rev)
-@test pi == sortperm(float(v), alg=InsertionSort, rev=rev)
+@test pi == sortperm(float.(v), alg=InsertionSort, rev=rev)
@test isperm(pi)
si = v[pi]
@test [sum(si .== x) for x in r] == h
@@ -245,7 +245,7 @@
# stable algorithms
for alg in [MergeSort]
p = sortperm(v, alg=alg, rev=rev)
-@test p == sortperm(float(v), alg=alg, rev=rev)
+@test p == sortperm(float.(v), alg=alg, rev=rev)
@test p == pi
s = copy(v)
permute!(s, p)
@@ -257,7 +257,7 @@
# unstable algorithms
for alg in [QuickSort, PartialQuickSort(n)]
p = sortperm(v, alg=alg, rev=rev)
-@test p == sortperm(float(v), alg=alg, rev=rev)
+@test p == sortperm(float.(v), alg=alg, rev=rev)
@test isperm(p)
@test v[p] == si
s = copy(v)
10 changes: 5 additions & 5 deletions test/sparse/cholmod.jl
@@ -467,12 +467,12 @@ for elty in (Float64, Complex{Float64})
@test CHOLMOD.Sparse(CHOLMOD.Dense(A1Sparse)) == A1Sparse
end

-Af = float([4 12 -16; 12 37 -43; -16 -43 98])
+Af = float.([4 12 -16; 12 37 -43; -16 -43 98])
As = sparse(Af)
-Lf = float([2 0 0; 6 1 0; -8 5 3])
-LDf = float([4 0 0; 3 1 0; -4 5 9]) # D is stored along the diagonal
-L_f = float([1 0 0; 3 1 0; -4 5 1]) # L by itself in LDLt of Af
-D_f = float([4 0 0; 0 1 0; 0 0 9])
+Lf = float.([2 0 0; 6 1 0; -8 5 3])
+LDf = float.([4 0 0; 3 1 0; -4 5 9]) # D is stored along the diagonal
+L_f = float.([1 0 0; 3 1 0; -4 5 1]) # L by itself in LDLt of Af
+D_f = float.([4 0 0; 0 1 0; 0 0 9])

# cholfact, no permutation
Fs = cholfact(As, perm=[1:3;])
4 changes: 2 additions & 2 deletions test/sparse/sparse.jl
@@ -1225,9 +1225,9 @@ end

@testset "float" begin
A = sprand(Bool, 5,5,0.0)
-@test eltype(float(A)) == Float64 # issue #11658
+@test eltype(float.(A)) == Float64 # issue #11658
A = sprand(Bool, 5,5,0.2)
-@test float(A) == float(Array(A))
+@test float.(A) == float.(Array(A))
end

@testset "sparsevec" begin
4 changes: 2 additions & 2 deletions test/sparse/sparsevector.jl
@@ -285,8 +285,8 @@ let a = SparseVector(8, [2, 5, 6], Int32[12, 35, 72])
@test exact_equal(au, SparseVector(8, [2, 5, 6], UInt32[12, 35, 72]))

# float
-af = float(a)
-@test float(af) == af
+af = float.(a)
+@test float.(af) == af
@test isa(af, SparseVector{Float64,Int})
@test exact_equal(af, SparseVector(8, [2, 5, 6], [12., 35., 72.]))
@test sparsevec(transpose(transpose(af))) == af
12 changes: 6 additions & 6 deletions test/sparse/umfpack.jl
@@ -27,12 +27,12 @@

b = [8., 45., -3., 3., 19.]
x = lua\b
-@test x ≈ float([1:5;])
+@test x ≈ float.([1:5;])

@test A*x ≈ b
z = complex.(b,zeros(b))
x = Base.SparseArrays.A_ldiv_B!(lua, z)
-@test x ≈ float([1:5;])
+@test x ≈ float.([1:5;])
@test z === x
y = similar(z)
A_ldiv_B!(y, lua, complex.(b,zeros(b)))
@@ -42,24 +42,24 @@

b = [8., 20., 13., 6., 17.]
x = lua'\b
-@test x ≈ float([1:5;])
+@test x ≈ float.([1:5;])

@test A'*x ≈ b
z = complex.(b,zeros(b))
x = Base.SparseArrays.Ac_ldiv_B!(lua, z)
-@test x ≈ float([1:5;])
+@test x ≈ float.([1:5;])
@test x === z
y = similar(x)
Base.SparseArrays.Ac_ldiv_B!(y, lua, complex.(b,zeros(b)))
@test y ≈ x

@test A'*x ≈ b
x = lua.'\b
-@test x ≈ float([1:5;])
+@test x ≈ float.([1:5;])

@test A.'*x ≈ b
x = Base.SparseArrays.At_ldiv_B!(lua,complex.(b,zeros(b)))
-@test x ≈ float([1:5;])
+@test x ≈ float.([1:5;])
y = similar(x)
Base.SparseArrays.At_ldiv_B!(y, lua,complex.(b,zeros(b)))
@test y ≈ x