Status
[#861] Outage VPLS ring EQX-AM7
Posted: 2019-11-25 13:17
Start: 2019-11-25 12:33:00
End: 2019-11-25 14:30:00
Affects: IP access Customers / IP transit Customers EQX-AM7 / Traffic EQX-AM7
During debugging the ring became so unstable that we rebooted the entire stack.
Together with our vendor, we are still investigating what triggered the outage in the first place.
At the moment everything should be stable again, but we will continue to monitor the situation for the next couple of hours.
Update 13:30 CET:
Internet customers may still experience packet loss at this moment; our team is working around the clock to resolve this as well.
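For reference, a minimal sketch of how packet loss like this can be measured from the customer side, using only the system ping utility. The target address and probe count below are placeholders, not values from this report:

#!/usr/bin/env python3
"""Minimal packet-loss probe; target and count are illustrative only."""
import re
import subprocess

def packet_loss(host: str, count: int = 20) -> float:
    """Return the packet-loss percentage reported by the system ping."""
    # -c sets the probe count (Linux iputils ping; other platforms differ).
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=False,
    ).stdout
    match = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    return float(match.group(1)) if match else 100.0

if __name__ == "__main__":
    loss = packet_loss("192.0.2.1")  # placeholder TEST-NET address
    print(f"loss: {loss:.1f}%")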
Update 14:30 CET:
The issue should now be resolved for Internet customers as well.
Update 15:00 CET:
Our investigation shows that the instability had been present since the power outage at Equinix AM7, during the black building test.
[#859] Outage Equinix AM7
Posted: 2019-11-21 22:57
Start: 2019-11-21 22:09:00
End: 2019-11-21 22:35:00
Affects: Internet Access / Transits located on EQX-AM7
Equinix AM7 is a major connectivity hub for Europe and the Netherlands. While our hosting datacenters did not have any issues, an outage of this scale affects many parties at Equinix, so a lot of re-routing throughout the internet and packet loss has happened or is still happening.
[#858] Packet Loss DBA
Posted: 2019-11-19 19:41
Start: 2019-11-19 19:12:00
End: 2019-11-19 19:26:00
Affects: Traffic DBA
We have traced the culprit to one of the core ports and have disabled this port, returning everything to a stable state.
As we suspect a faulty optic, we have scheduled a replacement of the optic. This will not cause any downtime.
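A faulty optic typically betrays itself through a climbing input-error counter on the port. A minimal sketch of that kind of check, assuming SNMP access to the switch and pysnmp's classic synchronous hlapi; the hostname, community string, and interface index are placeholders:

"""Poll ifInErrors (IF-MIB, OID 1.3.6.1.2.1.2.2.1.14) for a suspect port."""
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

IF_IN_ERRORS = "1.3.6.1.2.1.2.2.1.14"  # IF-MIB::ifInErrors
IF_INDEX = 12  # placeholder index of the suspect core port

error_indication, error_status, _, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public"),  # placeholder community string
    UdpTransportTarget(("core1.example.net", 161)),  # placeholder host
    ContextData(),
    ObjectType(ObjectIdentity(f"{IF_IN_ERRORS}.{IF_INDEX}")),
))

if error_indication or error_status:
    print(f"SNMP query failed: {error_indication or error_status}")
else:
    # A counter that keeps climbing here points at a bad optic or fiber.
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))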
[#849] Outage Connection Databarn Capelle (DBC) - Equinix Amsterdam 7 (EQX-AM7)
Posted: 2019-11-12 14:31
Start: 2019-11-12 14:31:00
End: 2019-11-12 14:45:00
Affects: Redundancy DBC
At this moment we are investigating the issue and will keep this post updated with our findings.
UPDATE 12-11-2019, 14:40:
The connection came back online after 10 minutes. We are still investigating the root cause of the signal loss. Because the signal was lost entirely, we suspect an issue at our dark fiber provider's end and have asked them to investigate.
[#841] Outage Nedzone Packets
Posted: 2019-10-10 13:10
Start: 2019-10-10 12:40:00
End: 2019-10-10 12:49:00
Affects: Traffic Nedzone
The issue appears to be that packets were sent, and partially balanced, over additional unused (disabled) ports. Resetting the bridges on the ring switches resolved the packet loss, and BGP sessions are re-establishing automatically; a sketch of the verification check follows below.
As a preventive measure, NOC engineers will activate the currently unused (disabled) ports within the next 3 hours.
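To illustrate the re-establishment check: a minimal sketch that verifies all BGP sessions are back in the Established state. It assumes the BIRD routing daemon and its birdc client; the report does not name the routing platform, so treat this purely as an illustration:

"""Verify that all BGP sessions have re-established (assumes BIRD/birdc)."""
import subprocess

out = subprocess.run(
    ["birdc", "show", "protocols"],
    capture_output=True, text=True, check=True,
).stdout

down = []
for line in out.splitlines():
    fields = line.split()
    # BIRD prints one protocol per line: name, type, table, state, since, info.
    if len(fields) >= 2 and fields[1] == "BGP" and "Established" not in line:
        down.append(fields[0])

if down:
    print("BGP sessions not yet re-established:", ", ".join(down))
else:
    print("all BGP sessions are Established")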
- [#861] 2019-11-25 » 2019-11-25 12:33 / 2019-11-25 14:30
- [#859] 2019-11-21 » 2019-11-21 22:09 / 2019-11-21 22:35
- [#858] 2019-11-19 » 2019-11-19 19:12 / 2019-11-19 19:26
- [#849] 2019-11-12 » 2019-11-12 14:31 / 2019-11-12 14:45
- [#841] 2019-10-10 » 2019-10-10 12:40 / 2019-10-10 12:49