Status

#791 Power outage d4.z1 Databarn Capelle

Posted: 2019-04-19 14:22

Start: 2019-04-19 12:52:00
End : 2019-04-19 13:58:00

Affects: Servers in d4.z1 DBC

At 12:52 CEST we had a power outage in rack d4.z1 in Databarn Capelle, causing several servers and one switch to go down.

We sent one of our engineers over right away to assess the situation.
He replaced a power distribution unit that appears to have been defective.

Currently the rack is back online and all servers are operational again.

#787 Darkfiber outage DBC - EQX-AM7

Posted: 2019-04-13 01:52

Start: 2019-04-13 01:00:00
End : 2019-04-14 03:00:00

Affects: Redundancy DBC

At 01:00 CEST a darkfiber between DBC (Databarn Capelle) and EQX-AM7 (Equinix AM7), carrying 400 Gbps of traffic, went down.

All traffic is currently sent over the alternative path on our ring, DBC - GSA, while we investigate the cause.

Update 2019-04-13 03:32 CEST:
We have spoken to the darkfiber provider euNetworks; the outage was caused by maintenance that had not been communicated to us.

Update 2019-04-14 09:00 CEST:
Currently the line is back online and our redundancy is operational again.

#784 Outage LINX at GSA (Global Switch Amsterdam)

Posted: 2019-04-07 11:51

Start: 2019-04-07 11:00:00
End : 2019-04-15 07:00:00

Affects: Routing LINX

Today at 10:47 CEST, our uplinks with LINX at GSA (Global Switch Amsterdam) went down.

We have investigated the issue on our side and concluded that the issue is either on the LINX side or with our wave service provider.

We have contacted our wave service provider at 11:00 CEST, but have not yet received any updates.

Currently, all traffic towards LINX is routed through our other ports at EQX-AM7 (Amsterdam).

Once we have an update, we will update this post.

Update 2019-04-07 12:47 CEST:
The wave service provider identified the issue causing our link to be down:
It is related to the subsea fiber break PI 1-4858348241 between Leiston and Zandvoort.

Update 2019-04-07 15:12 CEST:
Dear Customer, please be informed that the fault is confirmed to be 89.6 km from our landing station in the Netherlands. The cable ship is currently in port, preparing and loading all the spares needed for the restoration. There is no ETA or ETR yet. IMPORTANT: If you are a dark fiber customer, please SHUT DOWN your Raman amplifiers. This step is required for safety reasons.

Update 2019-04-07 16:28 CEST:
The ship is prepared for the spares loading operation, which will start in 30 minutes. Once the ship is ready for sailing and has approval to do so, we will provide a tentative ETA to the fault location.

Update 2019-04-07 20:20 CEST:
Loading of the spares onto the ship is currently ongoing. In the meantime we are waiting for approval for the ship to set sail to the fault location. As this is a time-consuming process, it may take a while. Once we receive the approval, we will share a tentative ETA to the fault location and possibly a tentative ETR.

Update 2019-04-08 01:42 CEST:
Loading of spares is completed. The repair ship left port at 23:00 UTC.

Update 2019-04-08 10:50 CEST:
The Sovereign cable ship left port as planned at midnight last night. It is expected on the cable grounds at the fault location at approximately midnight tonight. The estimated date of repair completion is 15/04/19. The next update will be available in 12 hours.

Update 2019-04-09 14:23 CEST:
As our lines are currently up and running over a backup path, we are not experiencing any direct issues.
However, we do expect a switchover and will keep this post updated in case of major incidents.
The provider has reported an estimated repair completion date of 15/04/19.

Update 2019-04-15 07:30 CEST:
Please be informed that the cable fault has been fixed.
All traffic has been restored.

#761 Outage R2 DBC

Posted: 2019-02-25 12:10

Start: 2019-02-25 11:00:00
End : 2019-02-25 16:30:00

Affects: Traffic DBC

We are currently experiencing an outage on our router 2 in DBC, causing it to intermittently drop its BGP neighbors, which results in some packet loss.

We are currently investigating the cause of this issue and will update this post accordingly.

Update 2019-02-25 17:28 CET:
The issue was traced back to the global access lists we use on our routers. We also noticed much higher latency on the routers' management plane: after removing the whole access list the latency returned to normal, and after placing it back we saw the latency jump again.

In the end we re-applied parts of the access list while keeping an eye on the management plane. We will debug this more closely to see where the latency jump comes from when the entire access list is applied.
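For illustration only, the sketch below shows one way such an incremental re-apply could be scripted: push the access list back section by section and measure management-plane latency after each step. This is a hypothetical Python sketch, not our actual tooling; the apply_acl_section() helper, the management address and the latency threshold are placeholder assumptions.

import re
import subprocess

MGMT_IP = "192.0.2.1"            # placeholder management address (TEST-NET-1), not a real router
LATENCY_THRESHOLD_MS = 5.0       # assumed "normal" latency; tune to the observed baseline

def mgmt_latency_ms(count: int = 5) -> float:
    """Average RTT to the router's management interface, measured with ping."""
    out = subprocess.run(
        ["ping", "-c", str(count), MGMT_IP],
        capture_output=True, text=True, check=True,
    ).stdout
    # Linux ping summary line: rtt min/avg/max/mdev = 0.321/0.402/0.512/0.071 ms
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    if not match:
        raise RuntimeError("could not parse ping output")
    return float(match.group(1))

def apply_acl_section(section: str) -> None:
    """Placeholder: push one ACL section to the router (e.g. via an automation tool).
    Intentionally left unimplemented in this sketch."""
    raise NotImplementedError

def find_offending_section(sections: list[str]) -> str | None:
    """Re-apply the ACL one section at a time and report the first section
    after which management latency exceeds the threshold."""
    for section in sections:
        apply_acl_section(section)
        latency = mgmt_latency_ms()
        print(f"after {section!r}: avg latency {latency:.2f} ms")
        if latency > LATENCY_THRESHOLD_MS:
            return section
    return None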

#758 Outage AMS-IX at AM7

Posted: 2019-02-21 09:54

Start: 2019-02-21 01:30:00
End : 2019-02-21 11:28:00

Affects: Routing AMSIX

We are currently experiencing an outage on our router in AM7 towards the AMS-IX peering platform.

Last night AMS-IX performed maintenance, and we expect that the issue we are experiencing was caused by that maintenance.

AMS-IX has been informed with additional details so they can investigate what happened.

As we have redundancy in place, our AMS-IX peering traffic is automatically rerouted over GSA. We do not expect any issues.

For transparency purposes we decided to publish this notification.

Update 2019-02-21 11:28 CET:
AMS-IX found that the linecard they had connected us to was defective and moved us back to the primary path.

All traffic and BGP sessions of AMS-IX at AM7 are now restored.