nextfloat(::BigFloat, n) does not exist #31276
Comments
The most basic approach to implementing this would be to just iterate the single-argument `nextfloat`, without any special cases (there may be none)
the above matches iterating the function in my few tests |
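For concreteness, a minimal sketch of that iterate-the-single-argument approach; the helper names `nextfloat_iterated`/`prevfloat_iterated` are illustrative only, not proposed API:

```julia
# Sketch only: step n times using the existing single-argument nextfloat/prevfloat.
function nextfloat_iterated(x::BigFloat, n::Integer)
    n < 0 && return prevfloat_iterated(x, -n)
    for _ in 1:n
        x = nextfloat(x)
    end
    return x
end

function prevfloat_iterated(x::BigFloat, n::Integer)
    n < 0 && return nextfloat_iterated(x, -n)
    for _ in 1:n
        x = prevfloat(x)
    end
    return x
end
```

Each step allocates a fresh `BigFloat`, which is one reason the mutating MPFR route comes up below.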
I would worry about crossing an eps-size boundary. From larger eps to smaller eps the increment could end up being too big. In the other direction I think it could also end up being too small. |
That is correct.
|
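For illustration with `Float64`, where the same boundary effect is easy to see: the spacing between adjacent floats doubles at each power of two, so an increment computed on one side of the boundary is wrong on the other side.

```julia
julia> eps(prevfloat(2.0))   # spacing just below 2.0
2.220446049250313e-16

julia> eps(2.0)              # spacing at and above 2.0 is twice as large
4.440892098500626e-16
```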
for reference (from Base/mpfr.jl)
|
We could have |
That seems like a good idea. I would be in favour of exposing more of the mutating MPFR API, but clearly document that they should only be used on values that are created within the scope of the function. |
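A minimal sketch of the kind of mutating helper being discussed, built on MPFR's `mpfr_nextabove`/`mpfr_nextbelow`; the name `nextfloat!` and the exact form are assumptions, not the code referenced above:

```julia
# Sketch only: mutates its argument, so per the caveat above it should only be
# applied to a BigFloat created inside the calling function.
function nextfloat!(x::BigFloat, n::Integer)
    if n >= 0
        for _ in 1:n
            ccall((:mpfr_nextabove, :libmpfr), Cvoid, (Ref{BigFloat},), x)
        end
    else
        for _ in 1:-n
            ccall((:mpfr_nextbelow, :libmpfr), Cvoid, (Ref{BigFloat},), x)
        end
    end
    return x
end
```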
This duplicates a BigFloat, but it forms the duplicate using the global precision for BigFloat even when the argument has a different precision. |
My thought was |
Instead of using the MPFR library, can't we solve this with a native implementation? There is already a GSOC project |
It would be outstanding for that project to reimplement both GMP Ints and MPFR Floats in Julia. |
I am interested in solving this issue. So, for now, should I try to write native code or use the MPFR library as you mentioned above? |
Great! Use the library. This is a small part of a large undertaking otherwise. It's best to solve it simply. |
I tried the following code in the Julia REPL. According to these results, `deepcopy` preserves the precision of a `BigFloat`:

```julia
julia> x = BigFloat(1, 32)   # BigFloat of precision 32
1.0

julia> x.prec                # precision of x is 32
32

julia> deepcopy(x).prec      # deepcopy of x has precision 32
32

julia> BigFloat(1).prec      # default precision is 256
256
```
|
Yes, it does. The implementation above that is specialized for BigFloats also preserves the precision, and it allocates less and runs faster.
You are not required to use the specialized implementation; do you have a reason to avoid it? |
No. While experimenting, I found this and thought that specialized implementation might be redundant. But I had not benchmarked the code. I will definitely include the specialized implementation now. |
```julia
julia> x = BigFloat(1, 100)
1.0

julia> x.prec
100

julia> nextfloat(x).prec
256
```
The precision of `nextfloat(x)` does not match the precision of `x`. |
The precision should be maintained. The source code above does this. Please use it when developing your PR. |
Do we encourage extending `deepcopy_internal`? Note that you no longer need `setprecision` for this:

```julia
z = BigFloat(; precision = precision(x))
ccall((:mpfr_set, :libmpfr), Int32, (Ref{BigFloat}, Ref{BigFloat}, Int32), z, x, 0)
```
|
In this specific instance, where the PR is into Base, I don't see this as an external extension of `deepcopy_internal`. |
According to the docs, `deepcopy_internal()` is only used by `deepcopy`. Benchmark results (obtained without using `setprecision()`):

```julia
julia> @btime Base.deepcopy_internal(x, IdDict())
  88.254 ns (4 allocations: 464 bytes)

julia> @btime customized_deepcopy(x)
  30.218 ns (2 allocations: 96 bytes)
```
|
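(`customized_deepcopy` is not defined in this excerpt; presumably it is the `mpfr_set`-based copy sketched earlier, something along these lines:)

```julia
# Assumed definition, consistent with the mpfr_set-based copy discussed above.
function customized_deepcopy(x::BigFloat)
    z = BigFloat(; precision = precision(x))
    ccall((:mpfr_set, :libmpfr), Int32, (Ref{BigFloat}, Ref{BigFloat}, Int32), z, x, 0)
    return z
end
```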
I guess defining Or maybe the |
Since they have to share core, it's goof-proofier to define `deepcopy_internal`. More substantively, having some type be the exception is unhelpful to a language that works at having no barriers to its fluid design. |
Do all specializations of |
Should I wait for the final solution for |
I tried simulating the issue with the given functions and I got the following results:

```julia
function memoized_deepcopy(x::BigFloat, n::Int)
    for i = 1:n
        deepcopy(x) == x   # current implementation of deepcopy, which has memoization built in
    end
end

function proposed_deepcopy(x::BigFloat)
    z = BigFloat(; precision = x.prec)
    ccall((:mpfr_set, :libmpfr), Int32, (Ref{BigFloat}, Ref{BigFloat}, Int32), z, x, 0)
    return z
end

function non_memoized_deepcopy(x::BigFloat, n::Int)
    for i = 1:n
        proposed_deepcopy(x) == x
    end
end
```

```julia
julia> using BenchmarkTools

julia> @btime memoized_deepcopy(BigFloat(1), 100)
  9.839 μs (402 allocations: 46.98 KiB)

julia> @btime non_memoized_deepcopy(BigFloat(1), 100)
  3.152 μs (202 allocations: 11.05 KiB)

julia> @btime memoized_deepcopy(BigFloat(1), 10000000)
  1.262 s (40000002 allocations: 4.47 GiB)

julia> @btime non_memoized_deepcopy(BigFloat(1), 10000000)
  361.788 ms (20000002 allocations: 1.04 GiB)
```
|
@JeffreySarnoff I think it is to deal with cases like:

```julia
julia> x = fill(zero(BigFloat), 100);

julia> x[1].d == x[2].d
true

julia> y = deepcopy(x);

julia> y[1].d == y[2].d
true
```
|
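For illustration, a sketch (not the actual Base method) of how a `BigFloat` specialization could keep that aliasing via the `IdDict` while copying at the argument's own precision:

```julia
# Sketch: look up / record the copy in the IdDict so aliasing is preserved,
# then copy the value with mpfr_set at the argument's precision.
function Base.deepcopy_internal(x::BigFloat, stackdict::IdDict)
    haskey(stackdict, x) && return stackdict[x]::BigFloat
    z = BigFloat(; precision = precision(x))
    ccall((:mpfr_set, :libmpfr), Int32, (Ref{BigFloat}, Ref{BigFloat}, Int32), z, x, 0)
    stackdict[x] = z
    return z
end
```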
By those timings, the non-memoized versions are noticeably snappier. Apparently, most of the cost is the reference-tracking dictionary. Is it really that much of a drag? |
I would suggest avoiding using `deepcopy` for this. |
That is a prickly 🌵 thing, Simon. |
So, let's use our own unattached function name for the BigFloat duplication logic you have sketched.
|
Should we also introduce a `duplicate(x::BigInt)` function, because a similar speedup can be achieved there too? We would need to include these points in the documentation.
|
"Memoization" is a bit of a misnomer here --- the purpose of the dictionary is not to avoid redundant computation, it's to ensure the new object has the exact same reference structure as the original. So it's expected to be more expensive. However, of course this is not necessary just to deepcopy one BigFloat. So that's an optimization we could try to do in |
Would that mean two classes of deepcopy-able things, those that replicate best without the support dictionary and all the others? |
@narendrakpatel That is not a good approach. |
a simpler option might be to add a keyword argument to the existing `BigFloat` constructor (lines 183 to 192 in 1fc4380)
|
So |
As a matter of personal programming policy, every type I create includes |
I think that the infra-deepcopy code will move to let values of some types be copied without involving a reference dict, and to provide reference-dict-enabled copying for types that have not demonstrated safety or have not assured logical consistency in duplication absent a reference dict. The `deepcopy_internal` specializations will continue to provide the buffer they were designed to provide, and will improve as the information and understanding required to do that better improve [my present take]. The manner of supporting duplication absent a reference dict, for types like `BigFloat`, is sketched in the straw proposal above:

- Provide a `BigFloat` duplication constructor.
- Use that constructor to duplicate BigFloat values where needed.
- Follow @simonbyrne in implementing the constructor with the keyword `precision`.
- Follow @StefanKarpinski in implementing the rest.

I'll keep editing the code above to reflect new refinements from y'all. |
@narendrakpatel please formulate some tests as you review the code above. |
I noticed that this works:

```julia
function BigFloat(x::BigFloat, r::MPFRRoundingMode=ROUNDING_MODE[];
                  precision::Integer=DEFAULT_PRECISION[], new::Bool=false)
    precision == MPFR.precision(x) && !new && return x
    z = BigFloat(; precision = new ? MPFR.precision(x) : precision)
    ccall((:mpfr_set, :libmpfr), Int32, (Ref{BigFloat}, Ref{BigFloat}, MPFRRoundingMode),
          z, x, r)
    return z
end
```

It works perfectly. But this way is a bit slower than the one mentioned in the code above. The only way that I could think of to get the performance benefits of the above function is to define a new method like |
Please try this revision: one may request a BigFloat that is new and of a specified precision that differs from the precision of the BigFloat arg. |
Consider the case when the user wants to create a `BigFloat` from `x` at the default precision:

```julia
julia> x = BigFloat(12.123, 100)       # x with precision 100
12.122999999999999332089828385506

julia> a = BigFloat(x)                 # x at the default (higher) precision
12.1229999999999993320898283855058252811431884765625

julia> (a == x, a === x)
(true, false)

julia> a = BigFloat(x, new=true)       # deepcopy of x
12.122999999999999332089828385506

julia> (a == x, a === x)
(true, false)

julia> a = BigFloat(x, precision(x))   # copy of x
12.122999999999999332089828385506

julia> (a == x, a === x)
(true, true)
```

I believe this is a bit cleaner syntax and makes more sense semantically. What are your views? |
Working from solution to implementation is sound. FYI, the constructor already takes `precision` as a keyword argument. Of course, we may establish a convention; I'd encourage that which best serves Julia, as it allows us to develop things that just work. And the other ways to assign precision would be available by following what obtains most properly therefrom and smoothly therewith. |
a pleasing look:

```julia
# either
#   go with the initial setting for BigFloat precision as the default _256 bits_
#   (which occurs through doing nothing)
# or
setprecision(BigFloat, digits=D, base=B)
#   establish a default precision, a contextually uniform value, a backstop.
#   if `base` is given and `digits` is unspecified, `digits` is inferred
#   to best match the current default (which is kept as a number of bits),
#   so do nothing or just keep the given base and the number of digits
#   in that base which corresponds to the current default precision in bits.
#   if `digits` is given and `base` is unspecified, `digits` is a bit count.

𝑏𝑓 = BigFloat(pi)
precision(𝑏𝑓) == <current default for BigFloats, a bit count>
# ----
𝑏𝑓₁ = BigFloat(𝑏𝑓, precision=100)
𝑏𝑓₂ = BigFloat(𝑏𝑓₁)                        # !revised!
# 𝑏𝑓₂ === 𝑏𝑓₁ && 𝑏𝑓₂ == 𝑏𝑓₁ && precisions match
# ThisType(x::ThisType) === x               # correction
𝑏𝑓₂ = BigFloat(𝑏𝑓₁, new=true)
# 𝑏𝑓₂ !== 𝑏𝑓₁ && 𝑏𝑓₂ == 𝑏𝑓₁ && precisions match
𝑏𝑓₂ = BigFloat(𝑏𝑓₁, new=false)
# 𝑏𝑓₂ === 𝑏𝑓₁ && 𝑏𝑓₂ == 𝑏𝑓₁ && precisions match
# ----
𝑏𝑓₂ = BigFloat(𝑏𝑓₁, precision=128)
# 𝑏𝑓₂ !== 𝑏𝑓₁ && 𝑏𝑓₂ == 𝑏𝑓₁ && precisions differ
𝑏𝑓₂ = BigFloat(𝑏𝑓₁, precision=128, new=true)
# 𝑏𝑓₂ !== 𝑏𝑓₁ && 𝑏𝑓₂ == 𝑏𝑓₁ && precisions differ
𝑏𝑓₂ = BigFloat(𝑏𝑓₁, precision=128, new=false)
# 𝑏𝑓₂ !== 𝑏𝑓₁ && 𝑏𝑓₂ == 𝑏𝑓₁ && precisions differ
# new=false is ignored because precisions differ;
# a warning about ignoring `new=false` may be given
```
|
I came up with this implementation:

```julia
function BigFloat(x::BigFloat, r::MPFRRoundingMode=ROUNDING_MODE[];
                  precision::Integer=DEFAULT_PRECISION[], new::Bool=true)
    precision == DEFAULT_PRECISION[] && !new && return x
    z = BigFloat(; precision = precision == DEFAULT_PRECISION[] ? MPFR.precision(x) : precision)
    ccall((:mpfr_set, :libmpfr), Int32, (Ref{BigFloat}, Ref{BigFloat}, MPFRRoundingMode),
          z, x, r)
    return z
end
```

Although this implementation has the following bug: if the user explicitly passes a `precision` equal to `DEFAULT_PRECISION[]`, the copy keeps the precision of `x` instead of the requested precision:

```julia
julia> x = BigFloat(pi, 100)
3.1415926535897932384626433832793

julia> a = BigFloat(x, precision=256)
3.1415926535897932384626433832793

julia> (a !== x, a == x, a.prec != x.prec)
(true, true, false)
```

Approach: replace `DEFAULT_PRECISION[]` with `0` as the default for the `precision` keyword. Note that a BigFloat cannot have 0 precision, thus no function would ever be calling this with `precision = 0`. |
We are grateful for your critical attention. As the specifics of the approach we adopt are likely to be followed elsewhere, and there are other stakeholders already joined to this issue, it makes sense to allow them time to see this and consider it -- so let's give the others at least a day and a half to weigh in before you take this to a PR (at which point, commentary and suggestions will appear in the PR's thread). At the very least, let me reread this from a more caffeinated perspective ☕ upon rejoining this new day 🌄. In any event, it is reasonable for you to move into PR preparation this evening or sometime tomorrow. I'll help you with that if you run into process questions; and there is the Slack channel #my-first-pr with more help and advice. |
Elevating the integer zero to carry nonnumerical information when appearing as a precision is not an approach that plays well with Julia. A "float value" that is represented with a significand of 0 bits would be something akin to the empty integer -- not knowing what that is, I prefer to compute using entities I grasp. Walking away from C/Unix styles [no disrespect, C paved much] becomes refreshing. We avoid sentinelizing generic values, using call signatures that may incorporate singleton structs or fixedpoint value-types to guide multidispatch. |
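As an illustration of that point only (not a concrete proposal), one could make "no precision requested" its own state instead of a numeric sentinel, e.g. with `nothing`; the name `copy_bigfloat` is hypothetical:

```julia
import Base.MPFR: MPFRRoundingMode, ROUNDING_MODE  # Base internals, as used elsewhere in this thread

# Sketch: `nothing` (rather than 0 or the default precision) means "keep the argument's precision".
function copy_bigfloat(x::BigFloat, r::MPFRRoundingMode = ROUNDING_MODE[];
                       precision::Union{Nothing,Integer} = nothing, new::Bool = false)
    precision === nothing && !new && return x
    z = BigFloat(; precision = precision === nothing ? Base.MPFR.precision(x) : precision)
    ccall((:mpfr_set, :libmpfr), Int32, (Ref{BigFloat}, Ref{BigFloat}, MPFRRoundingMode),
          z, x, r)
    return z
end
```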
On rereading, I revised my "pleasing look" to respect `ThisType(x::ThisType) === x`. The pleasing look entails a function (a) |
@narendrakpatel go ahead .. let's carry this forward as a PR |
For most floating point types, `nextfloat(x, n)` and `prevfloat(x, n)` are defined to be the nth next or nth previous floating point value. BigFloats support neither.
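To illustrate the gap (output from a Julia version contemporary with this issue; exact error text may differ):

```julia
julia> nextfloat(1.0, 3)       # works for Float64
1.0000000000000007

julia> nextfloat(big"1.0", 3)  # the missing method this issue asks for
ERROR: MethodError: no method matching nextfloat(::BigFloat, ::Int64)
```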