
Explore using gob or other encoder for hash input bytes #2383

Open
rainest opened this issue Mar 31, 2022 · 1 comment
Labels
area/perf (Performance Related Issues), nice-to-have

Comments

@rainest
Contributor

rainest commented Mar 31, 2022

Some flame graph analysis shows that we spend a lot of time doing JSON operations. Most of these are unavoidable because the Kong admin API speaks JSON, but we also spend a non-trivial amount of time encoding a hash input at https://github.com/Kong/kubernetes-ingress-controller/blob/v2.2.1/internal/dataplane/deckgen/deckgen.go#L21

[flame graph screenshot attached]

There's no reason to use JSON here, as the hash is only used internally. As long as the input properly represents our targetConfig struct as []byte, any encoding works. gob may be a viable, more efficient alternative.
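For illustration, a minimal sketch of what a gob-based hash could look like, using a stand-in targetConfig and a hypothetical gobHash helper rather than the real deckgen types:

package deckgen

import (
	"bytes"
	"crypto/sha256"
	"encoding/gob"
	"fmt"
)

// targetConfig is a placeholder standing in for the real decK target
// configuration struct; the actual fields differ.
type targetConfig struct {
	Services []string
	Routes   []string
}

// gobHash encodes the config with encoding/gob and hashes the resulting bytes.
func gobHash(cfg targetConfig) (string, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(cfg); err != nil {
		return "", fmt.Errorf("encoding config: %w", err)
	}
	sum := sha256.Sum256(buf.Bytes())
	return fmt.Sprintf("%x", sum), nil
}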

While the change itself is simple, we would want a unit test in place to confirm that hash generation produces the same value for identical configs and different values for different configs, and to run a one-off performance benchmark with a range of targetConfig sizes.
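A determinism test along those lines, reusing the hypothetical gobHash and targetConfig from the sketch above (worth having regardless of encoder, since for example gob does not guarantee a stable byte order for map fields):

package deckgen

import "testing"

func TestHashIsStableAndDistinct(t *testing.T) {
	a := targetConfig{Services: []string{"svc-a"}, Routes: []string{"route-1"}}
	b := targetConfig{Services: []string{"svc-a"}, Routes: []string{"route-1"}}
	c := targetConfig{Services: []string{"svc-b"}, Routes: []string{"route-1"}}

	hashA, err := gobHash(a)
	if err != nil {
		t.Fatalf("hashing config a: %v", err)
	}
	hashB, err := gobHash(b)
	if err != nil {
		t.Fatalf("hashing config b: %v", err)
	}
	hashC, err := gobHash(c)
	if err != nil {
		t.Fatalf("hashing config c: %v", err)
	}

	// Identical configs must hash identically; different configs must not collide.
	if hashA != hashB {
		t.Errorf("identical configs produced different hashes: %s vs %s", hashA, hashB)
	}
	if hashA == hashC {
		t.Errorf("different configs produced the same hash: %s", hashA)
	}
}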

@rainest added the area/perf (Performance Related Issues) and nice-to-have labels Mar 31, 2022
@pmalek
Member

pmalek commented May 30, 2023

I stumbled across this one while looking at the #ask-kubernetes thread on our internal Slack.

I was actually looking at this very problem recently and wrote a couple of benchmarks. They can be found at https://github.com/Kong/kubernetes-ingress-controller/compare/gojson?expand=1 and run via

go test -v -count 1 -run - -bench "Benchmark" -benchmem  ./internal/dataplane/deckgen
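For reference, the JSON baseline benchmark has roughly this shape (the name and the config fixture here are illustrative and won't match the branch exactly):

package deckgen

import (
	"crypto/sha256"
	"encoding/json"
	"testing"
)

func BenchmarkJSONHash(b *testing.B) {
	cfg := targetConfig{Services: []string{"svc-a"}, Routes: []string{"route-1"}}
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		// Marshal the config and hash the resulting bytes, mirroring the
		// current JSON-based hash generation.
		jsonConfig, err := json.Marshal(cfg)
		if err != nil {
			b.Fatal(err)
		}
		_ = sha256.Sum256(jsonConfig)
	}
}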

Results (which might not be all that scientific):

goos: darwin
goarch: arm64
pkg: github.com/kong/kubernetes-ingress-controller/v2/internal/dataplane/deckgen
Benchmark
Benchmark-10                       39012             31036 ns/op           11174 B/op         12 allocs/op
Benchmark_gojson
Benchmark_gojson-10                57757             20576 ns/op           11126 B/op         12 allocs/op
Benchmark_gob_crc32
Benchmark_gob_crc32-10             13621             88131 ns/op           30760 B/op        609 allocs/op
Benchmark_gob_sha256
Benchmark_gob_sha256-10            13190             92143 ns/op           30884 B/op        611 allocs/op
Benchmark_gojson_cache
Benchmark_gojson_cache-10           8889            129221 ns/op           62912 B/op       4295 allocs/op
Benchmark_gojson_nopool
Benchmark_gojson_nopool-10         57094             21154 ns/op           11182 B/op         13 allocs/op
Benchmark_gojson_pool
Benchmark_gojson_pool-10           59616             20038 ns/op            5830 B/op         13 allocs/op
PASS
ok      github.com/kong/kubernetes-ingress-controller/v2/internal/dataplane/deckgen     11.173s

Conclusions:

  • gob performs many more memory allocations, which makes it slower than json
  • using github.com/goccy/go-json is roughly 50% faster (31036 ns/op down to 20576 ns/op), with a similar allocation count and similar memory allocated
  • github.com/mitchellh/hashstructure also causes a lot of memory allocations (probably because of all the reflection it uses)
  • our best bet seems to be gojson with bytes.Buffer pooling, which takes about the same time but allocates roughly half the memory (11174 B/op down to 5830 B/op); see the sketch below
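A minimal sketch of that pooling variant, assuming github.com/goccy/go-json as a drop-in replacement for encoding/json; the actual change on the branch may be structured differently:

package deckgen

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"sync"

	json "github.com/goccy/go-json"
)

// bufPool reuses encoding buffers across hash computations to cut allocations.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

func pooledJSONHash(cfg targetConfig) (string, error) {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	defer bufPool.Put(buf)

	if err := json.NewEncoder(buf).Encode(cfg); err != nil {
		return "", fmt.Errorf("encoding config: %w", err)
	}
	sum := sha256.Sum256(buf.Bytes())
	return fmt.Sprintf("%x", sum), nil
}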
