
Commit 5dae366

Edit Complexity Notation in #352 (#601)
1 parent 3338b23 commit 5dae366

1 file changed: +24 -6 lines changed

contents/notation/notation.md

@@ -21,10 +21,28 @@ In addition, there are many different notations depending on who you ask, but fo

````diff
 Big $$O$$ assumes the worst, which is often the most useful description of an algorithm.
 On the other hand, $$\Omega$$ assumes the best and $$\Theta$$ is used when the best and worst cases are the same.
 
+It *may* seem strange that an algorithm can run in different amounts of time, so let me explain with an example:
+```julia
+function constant(a::UInt64, b::UInt64)
+    println(b)
+    for i = 0:18446744073709551615
+        if(a < b)
+            b = b - a
+            println(b)
+        end
+    end
+end
+```
+If we work out all three notations in terms of `b`, this function is $$\Omega(1)$$ and $$O(b)$$, which are obviously not the same.
+The best-case runtime is $$1$$ `println` statement, when `a >= b`; the worst-case runtime is $$b$$ `println` statements, when `a = 1`.
+With that explained, let's move on.
````
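To sanity-check those best- and worst-case counts, here is a small sketch in Python rather than the archive's Julia, purely so it can run standalone. It counts the `println` calls instead of printing, and it cuts the enormous fixed-length loop short once `a < b` can no longer hold, since no further prints are possible after that point (the name `constant_prints` is mine, not from the archive):

```python
def constant_prints(a: int, b: int) -> int:
    """Count how many println calls the constant(a, b) function would make."""
    prints = 1  # the unconditional println(b) at the top of the function
    # The Julia loop runs 2^64 times, but b only changes while a < b,
    # so once the condition fails nothing else is ever printed.
    while a < b:
        b = b - a
        prints += 1
    return prints

print(constant_prints(5, 3))   # best case, a >= b: 1 print
print(constant_prints(1, 10))  # worst case, a = 1: b = 10 prints
```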
````diff
+
 Of the three, Big $$O$$ is used the most, and is used in conversation to mean that the algorithm will take "on the order of" $$n$$ operations.
 Unfortunately, at this point, these notations might be a little vague.
 In fact, it was incredibly vague for me for a long time, and it wasn't until I saw the notations in action that it all started to make sense, so that's what this section is about: providing concrete examples to better understand computational complexity notation.
 
+###### In the algorithms below, let us consider the *slowest* statement to be `println`, and let us always count every `println` call in the function.
+
 ## Constant Time
 
 Let's write some code that reads in an array of length `n` and runs with constant time:
````
@@ -137,9 +155,10 @@ Here is a simple example of a function with exponential runtime:

````diff
 # Here, n is the number of iterations
 function exponential(value::Int64, n::Int64)
     println(value)
-    value += 1
-    exponential(value, n-1)
-    exponential(value, n-1)
+    if(n >= 0)
+        value += 1
+        exponential(value, n-1)
+        exponential(value, n-1)
+    end
 end
 ```
````
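Counting the `println` calls makes the exponential growth explicit: each call prints once and then spawns two more calls until `n` drops below 0, so the count obeys $$T(n) = 1 + 2T(n-1)$$ with $$T(-1) = 1$$, which works out to $$2^{n+2} - 1$$. Here is a quick Python sketch mirroring that recursion (the name `exponential_prints` is mine, not from the archive):

```python
def exponential_prints(n: int) -> int:
    """Count println calls made by the recursive exponential function."""
    prints = 1  # println(value) runs once per call
    if n >= 0:
        # two recursive calls, mirroring the Julia code
        prints += exponential_prints(n - 1)
        prints += exponential_prints(n - 1)
    return prints

for n in range(4):
    print(n, exponential_prints(n))  # 3, 7, 15, 31: doubling each time
```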

@@ -152,16 +171,15 @@ Instead of taking a value and computing more and more values each time, a good e

````diff
 # Here, cutoff is an arbitrary variable to know when to stop recursing
 function logarithmic(a::Array{Float64}, cutoff::Int64)
     if (length(a) > cutoff)
-        logarithmic(a[1:length(a)/2], cutoff)
         logarithmic(a[div(length(a), 2)+1:end], cutoff)
     end
     println(length(a))
 end
 ```
 To be honest, it is not obvious that the provided `logarithmic` function should operate in $$\Theta(\log(n))$$ time, where $$n$$ is the size of `a`.
 That said, I encourage you to think about an array of size 8.
-First, we split it in half, creating 2 arrays of 4 elements each.
-If we split these new arrays, we have 4 arrays of 2, and if we split these by two we have 8 arrays of 1 element each.
+First, we split it in half and recurse on only one of the halves, leaving an array of 4 elements.
+Splitting and recursing again leaves an array of 2 elements, and doing so once more leaves an array with a single element.
 This is as far as we can go, and we ended up dividing the array 3 times to get to this point.
 $$3 = \log_2(8)$$, so this function runs with a logarithmic number of operations.
````
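The size-8 walkthrough can also be checked by counting `println` calls: each call prints exactly once after recursing on roughly half the array, so an array of length 8 with a cutoff of 1 produces 4 prints, one per division plus the final one. A small Python sketch of that count (the name `logarithmic_prints` is mine, not from the archive):

```python
def logarithmic_prints(n: int, cutoff: int) -> int:
    """Count println calls made by logarithmic() on an array of length n."""
    prints = 0
    if n > cutoff:
        # recurse on the upper half, which has n - n//2 elements
        prints += logarithmic_prints(n - n // 2, cutoff)
    return prints + 1  # the println(length(a)) at the end of every call

print(logarithmic_prints(8, 1))     # 4 prints: 3 divisions plus 1
print(logarithmic_prints(1024, 1))  # 11 prints: log2(1024) plus 1
```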
