contentful.getEntries() throws "Error: unexpected end of file at Zlib._handle.onerror (zlib.js:370:17)" #112
Hi @andrewodri,
Awesome, thanks @Khaledgarbaya! If it helps, this is the minimum Dockerfile I am using to reproduce the issue:
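A sketch of what such a Dockerfile might look like, assuming only the environment described in the issue body (debian:jessie-slim, Node v6.9.4 from NodeSource, contentful installed from npm); the exact file may have differed:

```dockerfile
# Assumed reconstruction of the reproduction image: debian:jessie-slim with
# Node v6.9.4 from NodeSource, per the setup described in the issue body.
FROM debian:jessie-slim

RUN apt-get update \
 && apt-get install -y curl ca-certificates gnupg \
 && curl -sL https://deb.nodesource.com/setup_6.x | bash - \
 && apt-get install -y nodejs

WORKDIR /app
RUN npm install contentful

# index.js is assumed to hold the reproduction script calling getEntries().
COPY index.js .
CMD ["node", "index.js"]
```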
I used this Docker image, which I think is basically the same; you can find it here.
@andrewodri,
So strange... I tried the exact Dockerfile you sent, and I still get the issue... I removed all dependencies and still get inconsistent results. I tried running a number of times to gauge how often the call is failing, and I did get a slightly different error in one case:
I've tried installing with contentful defined in package.json, as well as installing it completely fresh.
Just an update: the following code works with no problem:
It produces the following output:
However, the following code, which requests the exact same data, does not:
This produces the following output:
@andrewodri,
Another update: I ran this code with the debugger active and tried to find exactly where the error is thrown... It seems to be related to timers associated with DNS resolution. (So possibly a race condition?) I added in the following code to try and dump a full stack; however, the stack is only one item deep, which doesn't really help :P (Without the error handler, the execution stops at
And the fully stepped execution:
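A minimal sketch of the kind of "dump a full stack" hook described above, assuming a plain uncaughtException listener; the code actually used may have differed:

```js
// Raise the stack trace limit and log whatever stack the async error carries
// before exiting, so the failure point is visible outside the debugger.
Error.stackTraceLimit = Infinity;

process.on('uncaughtException', (err) => {
  console.error('uncaught:', err.message);
  console.error(err.stack);
  process.exit(1);
});
```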
@andrewodri what happens if you do a simple curl command to get entries, like `curl -v -XGET -H 'Authorization: Bearer ACCESS_TOKEN' 'http://cdn.contentful.com/spaces/SPACE_ID/entries'`? This could be related to the CDN server in your region; I am not sure, I have to check with our backend engineers.
@Khaledgarbaya That command seems to work fine; in fact, so does the following:
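A sketch of a raw Node request equivalent to that curl command, decompressing the body by hand; ACCESS_TOKEN and SPACE_ID are placeholders, and the snippet originally posted may have looked different:

```js
const http = require('http');
const zlib = require('zlib');

http.get({
  hostname: 'cdn.contentful.com',
  path: '/spaces/SPACE_ID/entries',
  headers: {
    Authorization: 'Bearer ACCESS_TOKEN',
    'Accept-Encoding': 'gzip'
  }
}, (res) => {
  const chunks = [];
  res.on('data', (chunk) => chunks.push(chunk));
  res.on('end', () => {
    const body = Buffer.concat(chunks);
    if (res.headers['content-encoding'] === 'gzip') {
      // Decompress manually instead of relying on the HTTP client's handling.
      zlib.gunzip(body, (err, decoded) => {
        if (err) return console.error('gunzip failed:', err);
        console.log(decoded.toString());
      });
    } else {
      console.log(body.toString());
    }
  });
}).on('error', (err) => console.error('request failed:', err));
```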
If I send a series of these requests, they all seem to come back fine as well. I noticed that this pull request (axios/axios#303) resolves the same issue we were having (axios/axios#435). It seems that contentful-sdk-core uses axios 0.9.1, and this patch was only included in 0.12.0... Could we try creating a branch of contentful.js that includes the latest version of axios? I've built this locally and it's working well, although the Docker build is taking some time...
@Khaledgarbaya One more thing... This issue also appears in the request npm module, and was resolved with this pull request: request/request#2492 (comment). From the description:
It seems that axios is also using the zlib functions, but these are not passing in the flush parameters: https://github.com/mzabriskie/axios/blob/master/lib/adapters/http.js#L145. I tried changing this line in
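For context, the request fix linked above boils down to inflating with a more tolerant flush mode, so a truncated gzip stream yields whatever could be decoded instead of throwing "unexpected end of file". A small standalone sketch of the idea, assuming a Node version where the finishFlush option is supported:

```js
const zlib = require('zlib');

// Simulate a gzipped response whose trailer was dropped in transit.
const truncated = zlib.gzipSync(Buffer.from('hello from contentful')).slice(0, -8);

// Default behaviour (finishFlush: Z_FINISH) throws "unexpected end of file":
// zlib.gunzipSync(truncated);

// Lenient behaviour: Z_SYNC_FLUSH returns the bytes that could be decoded.
const lenient = zlib.gunzipSync(truncated, { finishFlush: zlib.Z_SYNC_FLUSH });
console.log(lenient.toString()); // "hello from contentful"

// The same options can be passed to a streaming gunzip, which is roughly the
// kind of change an HTTP adapter would need:
const gunzip = zlib.createGunzip({ flush: zlib.Z_SYNC_FLUSH, finishFlush: zlib.Z_SYNC_FLUSH });
```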
hey @andrefs,
@Khaledgarbaya Well, it seems that the issue is partly 1) gzip inflation handling, and 2) environmental. We ran tests on other development machines, in our CI, and in our staging environments, and the issue is not occurring there. I suspect it may be either an issue with the Docker network bridging, or our network content filtering truncating the data or making modifications that aren't reflected in the checksum. Thanks so much for your help on this!
@andrewodri Any time. I will close this issue for now; feel free to create a new issue whenever you have a problem to report.
I am building within a Docker container pulled from the official `debian:jessie-slim` image (from https://hub.docker.com/_/debian/), with Node v6.9.4 installed from http://deb.nodesource.com/node_6.x. I have been getting the following error on ~90% of builds:
The simplest code to reproduce this issue is below; interestingly, when the `client.getContentTypes()` call is removed, the error occurs with noticeably less frequency:
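A minimal sketch of the kind of reproduction script described here; SPACE_ID and ACCESS_TOKEN are placeholders, and the exact code may have differed:

```js
const contentful = require('contentful');

const client = contentful.createClient({
  space: 'SPACE_ID',           // placeholder
  accessToken: 'ACCESS_TOKEN'  // placeholder
});

// Removing this call makes the error noticeably less frequent, per the report above.
client.getContentTypes()
  .then((contentTypes) => console.log('content types:', contentTypes.items.length))
  .catch((err) => console.error('getContentTypes failed:', err));

client.getEntries()
  .then((entries) => console.log('entries:', entries.items.length))
  .catch((err) => console.error('getEntries failed:', err));
```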
From what I can tell, this seems to be an issue where chunks of data are dropped from gzipped responses and/or gzipped responses aren't properly inflated. This would seem to align with the reduced error rate when the additional API call is removed. Here are a few issues that reported similar problems, with fixes referenced:
Some solutions have included more lenient gzip decompression, and providing an option to disable gzip.
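As an illustration of the "disable gzip" workaround, one way is to request the entries with an Accept-Encoding: identity header so the CDN returns an uncompressed body; the contentful SDK is not assumed to expose a switch for this, so the sketch below goes through a raw request (SPACE_ID and ACCESS_TOKEN are placeholders):

```js
const http = require('http');

http.get({
  hostname: 'cdn.contentful.com',
  path: '/spaces/SPACE_ID/entries?access_token=ACCESS_TOKEN', // placeholders
  headers: { 'Accept-Encoding': 'identity' } // ask the server not to gzip the body
}, (res) => {
  let body = '';
  res.setEncoding('utf8');
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    console.log('content-encoding:', res.headers['content-encoding'] || 'none');
    console.log('body length:', body.length);
  });
}).on('error', (err) => console.error('request failed:', err));
```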