Seeing elevated network read ECONNRESET from AWS #10
Comments
In tandem also seeing:
I'm looking into this; our CDN isn't reporting any issues. Would you mind sharing the output of traceroute?
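For anyone else who wants to collect the same data, a rough sketch of the diagnostics being asked for here, assuming traceroute and mtr are installed on the instance:

```
# Trace the route from the EC2 instance to the npm registry
traceroute registry.npmjs.org

# mtr combines traceroute and ping; --report runs a fixed number of
# cycles and prints a summary instead of the interactive display
mtr --report --report-cycles 50 registry.npmjs.org
```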
See also: npm/npm#12484
@blopker: I think you mean npm/npm#12484 – this is a separate repository.
Yep, already fixed it 😄
@chrisdickinson sorry for the delay, here is the result of traceroute:
Also did mtr:
Guys, I've pretty much lost all my hair trying to figure this out. Finally stumbled upon this issue. Any progress? I can't deploy anything on any of my EC2 instances. Thank you in advance!
@achaudhry what AWS region are you in? I've been trying to debug this all day with no luck. It seems to be intermittent, though it fails more often than not.
@nodesocket My instances are in us-west-2.
Although I wasn't actually going to use
@achaudhry is it 100% error or does it work sometimes? Also, what instance type are you using? Using SSD EBS?
@nodesocket Worked once about 5 hours ago, but it's been a 100% error rate for the last 4ish hours, which is bizarre! I've tried on a couple of instances, about 5-7 times on each. I have nothing fancy at the moment. Both instances are 't1.micro'.
Your instance size may be the problem; 't1.micro' instances barely have any power or I/O. Would you be able to try a 'c4.large' just for a few minutes? It should only cost 15 cents or so. For what it's worth, I've tried 't2.large', 'c4.2xlarge', and even 'c4.4xlarge'.
Having the same problem on an EC2 instance through Elastic Beanstalk.
We are discussing internally how best to get enough data to figure out the issue. We are successfully serving so many requests from all zones of EC2 that we know it's not a general registry issue. This makes it very tricky to move forward, but we are kicking around some ideas. Hope to have some more information soon. If anyone comes up with a way to profile their requests to the registry and finds out more, please let us know.
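One low-effort way to profile requests to the registry, as asked above, is curl's timing variables. A minimal sketch; the package name express is only an example, any registry URL will do:

```
# Time each phase of a request to the registry; loop to catch intermittent failures
for i in $(seq 1 20); do
  curl -o /dev/null -sS \
    -w 'dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s http=%{http_code}\n' \
    https://registry.npmjs.org/express
done
```

A reset mid-transfer shows up as a curl error on stderr rather than a timing line, which makes the failures easy to count.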
I created a forum post on AWS: https://forums.aws.amazon.com/thread.jspa?threadID=230574&tstart=0 Maybe that can get some attention from the AWS side.
Quoting a suggestion from @othiym23 in npm/npm#9418 (comment) that hasn’t been mentioned yet in this thread:
Not sure if anything changed on the AWS or registry side, but I was just able to deploy 3 times in a row.
Just confirmed that
I'm leaning toward 1, because the AWS instances were up and running and seemed to be working correctly for everything besides npm.
@nodesocket Agreed, it worked for me as well on the micro instance. Confused as to what was going wrong for two days straight. Would love to find out for future reference, since that was not a very fun exercise... That said, I'm happy to test it on a better / slightly more powerful instance, but since it's working now, I highly doubt the instance type was the issue? Thanks!
@achaudhry I was blocked even on very powerful instances, so I don't think the issue was related to that.
traceroute from c3.2xlarge in us-west-2:
traceroute to registry.npmjs.org (199.27.79.162), 30 hops max, 60 byte packets
Don't know if it helps, but we're getting this error on CentOS 6, but not on 5 or 7.
@chrisdickinson it just started happening again; we're seeing elevated connection timeouts consistently. This is seriously affecting us, since all of our CI builds are failing.
Can confirm I'm once again having issues as well :(
This issue is intermittent in our us-west-2 EC2 clusters, occurring daily and maybe for a few minutes at a time. Kicking the build a second time usually does the trick. Considering the number of people reporting the issue, I doubt it's a local problem, unless maybe something is effed with Amazon's network. There seems to be a high number of Amazon users reporting in.
I wonder if this is a rate limit kicking in from Fastly; the reports in the last day all seem to be from AWS us-west.
having the same issue on eu-west-1 |
Started happening again. Anyone know of any workarounds until this is resolved? Can't get builds out!
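Not a fix, but the "kick the build again" workaround mentioned earlier can be automated. A minimal sketch of a retry wrapper for CI; tune the attempt count and sleep to taste:

```
#!/usr/bin/env bash
# Retry npm install a few times before giving up, since the resets are intermittent
for attempt in 1 2 3; do
  if npm install; then
    exit 0
  fi
  echo "npm install failed (attempt $attempt), retrying in 5s..." >&2
  sleep 5
done
exit 1
```

npm also has fetch-retries, fetch-retry-mintimeout, and fetch-retry-maxtimeout config options that may be worth trying alongside this.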
I can confirm that upgrading node fixed the issue for us.
Just one more datapoint: 4.4.3 fixed it for us too. |
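For anyone wanting to try the node upgrade from the last two comments, a minimal sketch using nvm (assuming nvm is already installed; a system package or the official tarball works just as well):

```
node --version      # check what the build is currently running
nvm install 4.4.3   # install node 4.4.3, which bundles a newer npm 2.x
nvm use 4.4.3
npm --version
```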
Fixes blocked deploys: npm/registry#10
This should also[1] fix the issues we've been seeing with intermittent build failures on Travis. [1]: npm/registry#10 (comment)
I had the same problem deploying to AWS EB with npm 2.4.12. Changing the registry worked around it for me.
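The mechanics of switching registries are just an npm config change. A sketch; the URL below is npm's default and is shown only to illustrate the commands, since the endpoint used in the comment above isn't visible here:

```
# Show which registry npm is currently using
npm config get registry

# Point npm at a different registry endpoint (per-user config)
npm config set registry https://registry.npmjs.org/

# Or override it for a single install without touching the config
npm install --registry=https://registry.npmjs.org/
```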
@nodesocket @soldair so after about 5 months, the issue has all of a sudden resurfaced for me, and it's happening consistently today. Anyone seeing it again? I remember last time I tried everything mentioned above and nothing worked for me, but it randomly started working one day. I've again tried everything mentioned here, but so far no luck. Anyone figured out the root cause?
Node and npm client version? We haven't had any infrastructure or system changes recently that could cause this, so we are in much the same place as before.
@soldair We are seeing a bunch of these ECONNRESETs recently on Travis, with e.g. node 5.8.0 and npm 3.7.3 (but other versions of node & npm are also experiencing errors).
@atungare we are not experiencing any downtime or service interruption as far as i can tell. we'll have to find a way to get stats: npm logs from your install, at least an npm-debug.log, to debug.
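For anyone who wants to send logs along, a minimal sketch of capturing a verbose install log; npm 2/3 also leaves an npm-debug.log in the working directory when an install fails:

```
# Run the install with maximum logging and keep a copy of the output
npm install --loglevel silly > npm-install.log 2>&1

# On failure, look for the debug log npm writes in the current directory
ls -l npm-debug.log
```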
My team is seeing this on AWS today (see https://github.com/npm/registry/issues/112). I'll throw out a possibility: does the npm registry (or Fastly) do any IP-based throttling? It seems fairly consistent that builds run "in the cloud" (on AWS or on Travis) are affected, and I'm assuming their requests to the registry would all come from the same IP range. I'm wondering if rate limiting is kicking in.
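One cheap way to test the throttling theory from an affected instance is to look at the CDN headers on responses. A sketch, using header names that Fastly commonly sets, so treat the exact names as an assumption:

```
# Inspect which CDN node served the response and whether it was a cache hit;
# a 429 status or Retry-After header would point at rate limiting rather than
# resets at the network level
curl -sI https://registry.npmjs.org/ | grep -iE 'x-served-by|x-cache|via|retry-after'
```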
About 50% of our builds on our paid Travis-CI account fail with this error. node 4.1.2, npm 2.15.11.
It's holding up my work too. NPM is so broken - how the hell did it become a kind of standard?
Been seeing lots of npm ERR! network read ECONNRESET when invoking npm install. Does this indicate a CDN issue on npm's side? This is from multiple AWS instances in the us-west-1 and us-west-2 regions.