All Notifications

#656 Updates to our Privacy Policy

Posted: 2018-05-23 16:30

Start: 2018-05-25 00:00:00
End : 0000-00-00 00:00:00

Affects: N/A

On 25 May 2018 the European Union (EU) will bring the General Data Protection Regulation (GDPR) into force. The GDPR guarantees EU citizens extensive rights over their digital privacy. To comply with this regulation we are updating our privacy policy, which will also provide more transparency about our services.

The updated versions of our agreements, policies and guidelines are available on our legal page at https://www.nforce.com/legal and will provide you with more insight into what we do with your personal data.

Your security, as well as your privacy, has always been important to us, and we remain committed to protecting your data.

#655 Wave service maintenance LINX EQX-AM7

Posted: 2018-05-22 07:50

Start: 2018-05-29 05:00:00
End : 2018-05-29 07:00:00

Affects: Routing LINX

The wave service provider that connects EQX-AM7 (Equinix AM7 Amsterdam) to the LINX peering exchange in London has a planned maintenance.

In case of a full outage, all LINX routed traffic is automatically routed over our other LINX peering exchange connection. We therefore do not expect any issues.
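
For those curious how this automatic failover works, below is a minimal Python sketch of best-exit fallback. The preference values and names are hypothetical, not our actual configuration.

# Candidate exits for LINX-learned prefixes; a higher preference wins.
# Values are made up for illustration only.
rib = {
    "LINX via EQX-AM7": 200,
    "LINX via GSA":     190,
}

def best_exit(rib):
    """Return the currently preferred exit, or None if none are left."""
    return max(rib, key=rib.get) if rib else None

print(best_exit(rib))        # 'LINX via EQX-AM7'
rib.pop("LINX via EQX-AM7")  # the maintenance takes this port down
print(best_exit(rib))        # 'LINX via GSA' -- automatic fallback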

Their maintenance reason is "Your service is protected but a switch-hit might occur twice. We need to switch the service back to the Ulysses cable.".

Their reference is: 85406

For transparency purposes we decided to publish this notification.

#654 Packetloss towards Bani Networks LTD (AS134084)

Posted: 2018-05-20 18:52

Start: 2018-05-20 18:52:55
End : 2018-05-20 18:52:55

Affects: Traffic towards Bani Networks LTD (AS134084)

We have received reports of packet loss towards Bani Networks LTD.

We have denied the path over peer Bharti Airtel LTD (AS9498) in the routing tables.

The best selected path now goes over Cogent.
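
A minimal Python sketch of the kind of change described above: remove the denied first hop from the candidate paths and re-select the best one. The AS paths below are hypothetical, and real BGP best-path selection involves more criteria than path length.

# Peers we no longer accept as first hop towards the destination.
DENIED_FIRST_HOPS = {9498}  # Bharti Airtel, denied due to packet loss

# Hypothetical candidate AS paths towards AS134084 (Bani Networks).
candidates = [
    ("Bharti Airtel", [9498, 134084]),
    ("Cogent",        [174, 134084]),
    ("Zayo",          [6461, 3491, 134084]),
]

def best_path(paths):
    """Drop paths via denied first hops, then prefer the shortest AS path."""
    allowed = [p for p in paths if p[1][0] not in DENIED_FIRST_HOPS]
    return min(allowed, key=lambda p: len(p[1])) if allowed else None

print(best_path(candidates))  # ('Cogent', [174, 134084])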

#653 Router firmware upgrades

Posted: 2018-05-17 12:19

Start: 2018-05-24 01:00:00
End : 2018-05-30 07:00:00

Affects: (See message for more information)

During these windows we will be performing firmware upgrades on our routers to solve part of the issues we have been seeing recently, as explained in our NOC post: https://noc.nforce.com/notifications/item/649

Our time schedule is as follows:

24 May 2018 at 01:00 CEST we will perform a firmware upgrade on the router at Global Switch Amsterdam (GSA).
Affected: IP Transit customers, Redundancy network.

25 May 2018 at 11:00 CEST we will perform a firmware upgrade on the router at Equinix-AM5 (AM5).
Affected: Peer British Telecom.

26 May 2018 at 01:00 CEST we will perform a firmware upgrade on the router at Equinix-AM7 (AM7).
Affected: IP Transit customers, Internet customers, Redundancy network.

29 May 2018 at 01:00 CEST we will perform a firmware upgrade on router 2 at Nedzone (NZS).
Affected: Redundancy network NZS.

29 May 2018 at 05:00 CEST we will perform a firmware upgrade on router 2 at Databarn Capelle (DBC).
Affected: Redundancy network DBC.

30 May 2018 at 01:00 CEST we will perform a firmware upgrade on router 1 at Nedzone (NZS).
Affected: Redundancy network NZS.

30 May 2018 at 05:00 CEST we will perform a firmware upgrade on router 1 at Databarn Capelle (DBC).
Affected: Redundancy network DBC.

#652 Wave service maintenance FranceIX

Posted: 2018-05-16 22:58

Start: 2018-06-09 00:00:00
End : 2018-06-09 06:00:00

Affects: Routing FranceIX

The wave service provider that connects Globalswitch to the FranceIX peering exchange in Paris has a planned maintenance.

All FranceIX routed traffic is automatically routed over our other peering exchanges or IP transit connections.

Their maintenance reason is "As part of our commitment to continually improve the quality of service we provide to our customers, we wish to inform you INTEROUTE will be performing splice box replacement works on our fiberoptical network. This planned activity will take place in Belgium."

Their reference: 1-3787161641

For transparency purposes we decided to publish this notification.

#651 Wave service maintenance DECIX GSA

Posted: 2018-05-16 09:06

Start: 2018-06-10 00:00:00
End : 2018-06-10 06:00:00

Affects: Routing DE-CIX

The wave service provider that connects GlobalSwitch Amsterdam (GSA) to Frankfurt has a planned maintenance.

Their maintenance reason is " As part of our commitment to continually improve the quality of service we provide to our customers, we wish to inform you INTEROUTE will be performing mandatory cable works on our fiberoptical network. This planned activity will take place in Germany. "

Interoute has given this maintenance reference number: 1-3794823251.

All DE-CIX routed traffic is automatically routed over our other exchanges and IP transits.

For transparency purposes we decided to publish this notification.

#650 Wave service maintenance LINX GSA

Posted: 2018-05-15 08:11

Start: 2018-06-02 00:00:00
End : 2018-06-02 06:00:00

Affects: Routing LINX

The wave service provider that connects GSA (Global Switch Amsterdam) to the LINX peering exchange in London has a planned maintenance.

All LINX routed traffic is automatically routed over our other LINX peering exchange connection. We therefore do not expect any issues.

Their maintenance reason is "As part of our commitment to continually improve the quality of service we provide to our customers, we wish to inform you INTEROUTE will be performing hardware upgrade works on our transmission platform. This planned activity will take place in Netherlands.".

Their reference is: 1-3846472771

For transparency purposes we decided to publish this notification.

#649 Router linecard crashes, vendor investigation (ongoing)

Posted: 2018-05-11 10:37

Start: 2018-04-01 00:00:00
End : 0000-00-00 00:00:00

Affects: Routing, primarily DBA

For transparency purposes we opened this case to keep you informed about an issue that causes linecards in our routers to crash.

During February and March we worked with our vendor Brocade/Extreme to resolve packet loss on a 20x 10G linecard, which occurred when using more than 8 ports in a 160 Gbps LAG (bundle). For this issue a workaround and fixes were implemented, and to date these have been working. This was acceptable as these are the last linecards with 10G ports we use on our routing platform; they are planned to be replaced by 100G linecards. All other routers and linecards in our hosting facilities DBC/NZS have already been replaced with 100G models. The replacement for DBA was actually planned for last month, and everything had been prepared and connected; however, due to the current ongoing issue we have postponed this work.

Since approximately 1 April we have noticed crashes of linecards and management cards on the Brocade/Extreme routers. The first incidents were immediately reported to our vendor and a TAC case was created. During April we noticed 5 incidents at DBC and one incident at EQX-AM7, and the TAC case was escalated to the highest level possible with the vendor. We are part of their critical customers program. During May the incidents increased further, with less time between them; so far we have counted 19 incidents at DBA during May. Our vendor has been made aware that failure to resolve this will force us to break a 10-year partnership with them. The vendor has assured us this case is handled with the highest priority and that all possible escalations have already been done. It is currently in the hands of the engineers responsible for developing the code for these devices; if they cannot get this fixed then there is no one who can.

Our network is set up 100% redundantly in an active/active manner: redundant physical fiber paths, redundant network chassis (routers and VPLS), and within these chassis everything is also placed redundantly, thus double (or more) the amount of required PSUs, redundant management cards, and the load of each LAG spread over multiple linecards. The latter should have reduced the incidents to minimal impact (a few milliseconds). Unfortunately, just before a linecard crashes it seems to drop packets; if it were to crash immediately the impact would have been milliseconds, now the impact is seconds. As traffic comes in balanced over two routers, as well as going out balanced over two routers, the impact is reduced to roughly 1/4th of connections. The incidents recover automatically, without any work from our NOC: the card is removed from the system when it fails and auto-joins when it comes back, so no human interaction is required to recover during incidents.
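
One way to read the "roughly 1/4th" estimate, under our simplifying assumption (not a vendor statement) that a connection is only impacted when both its inbound and its outbound leg hash to the affected router:

import random

random.seed(1)
N = 100_000  # hypothetical number of active connections

# Each leg is balanced independently over two routers; count connections
# whose inbound AND outbound leg both land on the affected router r1.
affected = sum(
    1 for _ in range(N)
    if random.choice(("r1", "r2")) == "r1"     # inbound router
    and random.choice(("r1", "r2")) == "r1"    # outbound router
)
print(f"{affected / N:.2%} of connections impacted")  # ~25%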

This week we patched the routers at DBA (based on the crash reports provided to Brocade); this seems to have resolved the management card crash. However, the most pressing issue, the linecard crashes, is not yet (or not entirely) fixed.

So far it seems they have narrowed it down to their IPv6 code/routines. They are modifying the lab setup to start sending different packets in different scenarios to replicate the crashes; once they are able to replicate them, they can build a fix.

We hope that soon we can say with confidence that the layer 3 routing platform in DBA is stable again, and everyone can enjoy the stability they were used to from us before these incidents. Our apologies for the situation at hand; we can assure you that this situation is our highest priority, as well as our vendor's.


Update 2018-05-16 18.15:
We have implemented temporary partial fixes, including an ACL blocking certain IPv6 packets at DBA. Since the initial report of this post (2018-05-11) no more crashes have been observed at DBA.

Last night at 00.50 there was a new crash at DBC R1: two linecards carrying 400 Gbps crashed minutes apart. We have applied the same ACL as on DBA to all routers in our network.
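
The actual contents of the ACL are not published here. Purely as an illustration of the general shape of such a filter (an ordered rule list, first match wins), a hypothetical Python sketch:

import ipaddress

# Toy ACL: each rule is (action, source network, IPv6 next-header or None).
# The deny rule below is invented for illustration; next-header 0 is the
# hop-by-hop options header.
acl = [
    ("deny",   ipaddress.ip_network("2001:db8::/32"), 0),
    ("permit", ipaddress.ip_network("::/0"),          None),
]

def evaluate(acl, src, next_header):
    """Evaluate the ACL top-down; the first matching rule decides."""
    addr = ipaddress.ip_address(src)
    for action, net, nh in acl:
        if addr in net and (nh is None or nh == next_header):
            return action
    return "permit"  # default: allow

print(evaluate(acl, "2001:db8::1", 0))   # deny
print(evaluate(acl, "2a00:1450::5", 6))  # permit (TCP)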

#648 Transit provider GTT maintenance

Posted: 2018-05-10 06:50

Start: 2018-05-25 02:00:00
End : 2018-05-25 06:00:00

Affects: Routing GTT

We have received a notification of maintenance by one of our transit providers, GTT.

During their maintenance traffic will automatically reroute over our other transit providers. We do not expect any issues.

Their maintenance reason is " performing upgrades".

Their reference is: 2620079

For transparency purposes we decided to publish this notification.

#647 VPLS Outage EQX-AM7

Posted: 2018-05-06 23:32

Start: 2018-05-06 23:10:00
End : 2018-05-07 00:15:00

Affects: Fiber to the business Customers located on EQX-AM7, VPLS Customers EQX-AM7, Peering traffic R1 EQX-AM7

There is currently an outage on the VPLS ring at EQX-AM7, where one of the switches in the stack rebooted itself.

The stack itself is set up redundantly and the standby switch took over its place.

However, for a yet unknown reason, part of the VPLS ring went down even though the ports remained up.

We are currently investigating the issue to find the cause, resolve it, and collect data for our vendor.

Update 2018/05/07 00.05:
As various attempts to get the remaining parts back online did not succeed, we decided to try a clean restart of the whole stack after collecting enough debug logs for our vendor.

After both switches were rebooted the remainder of the ports also came back online.

Because peering connections were still working on the EQX-AM7 router but had no way of being sent to the correct datacenters, higher latency or random packet loss may also have occurred.

At this moment the switches are stable again and we'll keep monitoring them while discussing this issue with our vendor.

#646 Firmware upgrade R1 DBA

Posted: 2018-05-06 15:13

Start: 2018-05-07 07:00:00
End : 2018-05-07 09:00:00

Affects: Traffic DBA

We have planned a maintenance to upgrade the router firmware of R1 in DBA (Databarn Amsterdam) to the latest version.

We apologize for the short notice; unfortunately, due to the urgency we cannot change the planned date.

Traffic will be rerouted through R2 of DBA (Databarn Amsterdam). If you have redundant BGP sessions/links, only one session will go down and your traffic will flow over the other session/link.

Customers without a redundant connection through this router may experience an outage of up to 1 hour.

This firmware fixes one of the open issues we have with our vendor regarding linecard crashes.

#645 Firmware upgrade R2 DBA

Posted: 2018-05-06 15:10

Start: 2018-05-08 00:00:00
End : 2018-05-08 03:00:00

Affects: Traffic DBA

We have planned a maintenance to upgrade the router firmware of R2 in DBA (Databarn Amsterdam) to the latest version.

We apologize for the short notice; unfortunately, due to the urgency we cannot change the planned date.

Traffic will be rerouted through R1 of DBA (Databarn Amsterdam). If you have redundant BGP sessions/links, only one session will go down and your traffic will flow over the other session/link.

Customers without a redundant connection through this router may experience an outage of up to 1 hour.

This firmware fixes one of the open issues we have with our vendor regarding linecard crashes.

Update 2018/05/07 00.43:
Due to the issues at EQX-AM7 this maintenance was postponed until 2018-05-08 at the same time.

#644 IP transit provider Core-Backbone

Posted: 2018-05-04 19:43

Start: 2018-05-15 00:00:00
End : 2018-05-15 06:00:00

Affects: Routing Core-Backbone

We have received a notification of a maintenance by one of our transit providers, Core-Backbone.

During their maintenance traffic will automatically reroute over our other transit providers. We do not expect any issues.

Their maintenance reason is "In the mentioned timeframe we will perform software upgrades at the following core-routers/core-switches:

ams10.core-backbone.com (Amsterdam1)
ams11.core-backbone.com (Amsterdam1)
ams12.core-backbone.com (Amsterdam1)
ams20.core-backbone.com (Amsterdam2)
prg10.core-backbone.com (Prague1)
prg11.core-backbone.com (Prague1)"

Their reference: 1988 and 1994

For transparency purposes we decided to publish this notification.

#643 Wave service maintenance LINX EQX-AM7

Posted: 2018-05-04 09:11

Start: 2018-05-18 00:00:00
End : 2018-05-18 07:00:00

Affects: Routing LINX

The wave service provider that connects Equinix AM7 to the LINX peering exchange in London has a planned maintenance.

All LINX routed traffic is automatically routed over our other LINX peering exchange connection. We therefore do not expect any issues.

Their maintenance reason is "We want to inform you about the upcoming maintenance on the sea cable between the UK and Europe from our partner.".

Their references are: 84968, 84971

For transparency purposes we decided to publish this notification.

#642 Wave service maintenance LINX GSA

Posted: 2018-05-03 06:51

Start: 2018-05-16 00:00:00
End : 2018-05-16 06:00:00

Affects: Routing LINX GSA

The wave service provider that connects GSA (Global Switch Amsterdam) to the LINX peering exchange in London has a planned maintenance.

All LINX routed traffic is automatically routed over our other LINX peering exchange connection. We therefore do not expect any issues.

Their maintenance reason is " As part of our commitment to continually improve the quality of service we provide to our customers, we wish to inform you INTEROUTE will be performing software upgrade works on our transmission platform. This planned activity will take place in United Kingdom.".

Their reference is: 1-3773361961

For transparency purposes we decided to publish this notification.

#641 Wave service maintenance LINX GSA

Posted: 2018-05-02 07:24

Start: 2018-05-14 00:00:00
End : 2018-05-14 06:00:00

Affects: Routing LINX

The wave service provider that connects GSA (Global Switch Amsterdam) to the LINX peering exchange in London has a planned maintenance.

All LINX routed traffic is automatically routed over our other LINX peering exchange connection. We therefore do not expect any issues.

Their maintenance reason is "As part of our commitment to continually improve the quality of service we provide to our customers, we wish to inform you INTEROUTE will be performing software upgrade works on our transmission platform. Infinera TIM software upgrade - Rollout of the new IQNOS 15.3.3 firmware to enable new features. This planned activity will take place in London, UK and Amsterdam , Netherlands".

Their reference is: 1-3825176901

For transparency purposes we decided to publish this notification.

#640 Wave service maintenance DECIX EQX-AM7

Posted: 2018-05-01 07:40

Start: 2018-05-15 00:00:00
End : 2018-05-15 06:00:00

Affects: Routing DE-CIX

The wave service provider that connects Equinix AM7 to the DECIX peering exchange has a planned maintenance.

All DECIX routed traffic is automatically routed over our other DECIX peering exchange connection. We therefore do not expect any issues.

Their maintenance reason is "Fiber work".

Their reference is: 84646

For transparency purposes we decided to publish this notification.

#639 Wave service maintenance LINX EQX-AM7

Posted: 2018-05-01 07:38

Start: 2018-05-15 00:00:00
End : 2018-05-15 06:00:00

Affects: Routing LINX

The wave service provider that connects Equinix AM7 to the LINX peering exchange in London has a planned maintenance.

All LINX routed traffic is automatically routed over our other LINX peering exchange connection. We therefore do not expect any issues.

Their maintenance reason is "Fiber work".

Their reference is: 84649

For transparency purposes we decided to publish this notification.

#638 Packetloss traffic over NTT

Posted: 2018-04-28 21:10

Start: 2018-04-28 18:40:00
End : 2018-04-28 18:50:00

Affects: Routing NTT

We have received notification from NTT reporting issues with one of their routers causing packet loss for traffic through NTT.

Their notification: "r25.amstnl02.nl.bb experienced an issue where it's routing process crashed and did not gracefully fallback. This upstream device caused a brief outage while routing was reloaded and latency due to convergence. We are investigating with our vendor."

Their reference: VNOC-1-1698497805

For transparency purposes we decided to publish this notification.

#637 Wave service maintenance DECIX GSA

Posted: 2018-04-25 18:54

Start: 2018-06-17 00:00:00
End : 2018-06-17 06:00:00

Affects: Routing DE-CIX

The wave service provider that connects GlobalSwitch Amsterdam (GSA) to Frankfurt has a planned maintenance.

Their maintenance reason is "As part of our commitment to continually improve the quality of service we provide to our customers, we wish to inform you INTEROUTE will be performing splice box replacement works on our fiberoptical network. This planned activity will take place in Netherlands. Please be advised your services will be affected as per table below between 16/Jun 22:00 UTC and 17/Jun 04:00 UTC. INTEROUTE would like to apologise for the inconvenience these works will cause to your business."

Interoute has given this maintenance reference number: 1-3719422881.

All DE-CIX routed traffic is automatically routed over our other exchanges and IP transits.

For transparency purposes we decided to publish this notification.

#636 Wave service maintenance DECIX GSA

Posted: 2018-04-25 18:48

Start: 2018-05-27 00:00:00
End : 2018-05-27 06:00:00

Affects: Routing DE-CIX

The wave service provider that connects GlobalSwitch Amsterdam (GSA) to Frankfurt has a planned maintenance.

Their maintenance reason is "As part of our commitment to continually improve the quality of service we provide to our customers, we wish to inform you INTEROUTE will be performing mandatory splicebox works on our fiberoptical network. The work is related with replacing of old splice box, which can create unexpected problems in the future. This planned activity will take place in Netherlands, Amsterdam. Please be advised your services will be affected as per table below between 26/05/2018 - 22:00 UTC and 27/05/2018 - 04:00 UTC . INTEROUTE would like to apologise for the inconvenience these works will cause to your business."

Interoute has given this maintenance reference number: 1-3719372761.

All DE-CIX routed traffic is automatically routed over our other exchanges and IP transits.

For transparency purposes we decided to publish this notification.

#635 Wave service maintenance FranceIX

Posted: 2018-04-25 18:39

Start: 2018-05-15 00:00:00
End : 2018-05-15 06:00:00

Affects: Routing FRANCEIX

The wave service provider that connects Globalswitch to the FranceIX peering exchange in Paris has a planned maintenance.

All FranceIX routed traffic is automatically routed over our other peering exchanges or IP transit connections.

Their maintenance reason is "As part of our commitment to continually improve the quality of service we provide to our customers, we wish to inform you INTEROUTE will be performing splice box replacement works on our fiberoptical network. This planned activity will take place in Belgium. Please be advised your services will be affected as per table below between 14/May 22:00 UTC and 15/May 04:00 UTC. INTEROUTE would like to apologise for the inconvenience these works will cause to your business."

Their reference: 1-3719293741

For transparency purposes we decided to publish this notification.

#634 AMS-IX maintenance EQX-AM7

Posted: 2018-04-20 17:15

Start: 2018-04-24 00:00:00
End : 2018-04-25 04:00:00

Affects: Routing AMS-IX EQX-AM7

The AMS-IX peering exchange has a planned maintenance on the port facing Equinix-AM7.

Their maintenance reason is "AMS-IX engineers would like to continue the next phase of our maintenance started on 12th March 2018, previously detailed in the ticket:
"NOC24X7-12865 New Switch Migration (SLX-9850) At Equinix-AM7 Amsterdam (formerly Telecity-2)".
This operation was halted while we waited for the release of firmware for the new switch platform (SLX9850) with defect fixes.
After the new release came out and passed our testing process, we would like to continue with the following phase of the operation:
PHASE 3 - Failover 3xx group to the new 2xx group switches
Week 23rd - 26th April 2018
00:00 - 04:00 24th April 2018
- Move half of the member connections to new 2xx group switches
00:00 - 04:00 25th April 2018
- Move all member connections from the 3xx group to the new switches (2xx group)
- Leave all member connections running on the new switches to confirm stability"

All AMS-IX routed traffic is automatically routed over our other AMS-IX connection.

Because they do not mention which half we belong to, we have set the maintenance on both days.

#633 IP transit provider NTT

Posted: 2018-04-20 02:07

Start: 2018-05-03 02:00:00
End : 2018-05-03 05:00:00

Affects: Redundancy routing NTT

We have received a notification of a maintenance by one of our transit providers, NTT.

During their maintenance traffic will automatically reroute over our other NTT connection as well as over other transit providers. We do not expect any issues.

Their maintenance reason is "We will be performing a software upgrade affecting the service(s) listed below.
Users connected to this device will experience latency/downtime as the router is reloaded and routing is reestablished. "

For transparency purposes we decided to publish this notification.

Update 2018-05-02:
NTT has canceled this maintenance due to unforeseen circumstances and will reschedule it to a later date.

#632 Wave service / Darkfiber maintenance

Posted: 2018-04-17 18:44

Start: 2018-05-04 00:00:00
End : 2018-05-04 06:00:00

Affects: Routing FRANCE-IX GSA / Redundancy NZS/BNR

The wave service provider that connects Globalswitch to the FranceIX peering exchange in Paris has a planned maintenance.

All FranceIX routed traffic is automatically routed over our other peering exchange or IP transit connections.

At the same time there will be a maintenance on the darkfiber connecting Globalswitch to Bytesnet Rotterdam (BNR) and Nedzone Steenbergen (NZS).

The traffic to BNR will automatically go over the darkfiber to DBC and traffic to NZS will automatically reroute over EQX-AM7.

Their maintenance reason is "As part of our commitment to continually improve the quality of service we provide to our customers, we wish to inform you INTEROUTE will be performing splice box replacement works on our fiberoptical network. This planned activity will take place in Netherlands. Please be advised your services will be affected as per table below between 03/May 22:00 UTC and 04/May 04:00 UTC. INTEROUTE would like to apologise for the inconvenience these works will cause to your business."

Their reference: 1-3719330071

For transparency purposes we decided to publish this notification.

#631 DBA ring stack loop

Posted: 2018-04-04 13:36

Start: 2018-04-04 12:07:00
End : 2018-04-04 12:45:00

Affects: DBA layer 3 routing

At 12.07 one of the links between the ring network and the local routers at datacenter DBA experienced major packet loss. This caused the BGP peers on the DBA routers to flap, causing layer 3 routing issues.

At 12.15 one of the ring switches, part of a stack, reloaded; during the reload BGP and traffic stabilized and recovered. When the device became an active member of the ring stack again, the same issues occurred. The reason for the reload was a traffic loop on the backplane of the ring stack, hence we suspected the device to be faulty or a bug to be present.

At 12.30 we determined that the issue was not with a device (member) of the ring stack; the device itself was stable after the reload. We debugged all four 100G ports connecting the ring stack to the DBA routers and found that one port was generating 'fake' traffic on the ring stack side.

Debugging showed that the communication between the device and the transceiver was working fine; however, traffic was looping back within the transceiver to the device, or at least this is what our monitoring showed. This seems to have caused a flood of partial packets, and as the loop kept generating more and more traffic on the device, it at some point reloaded. The reload itself was most likely due to the management being unable to communicate with its linecards/switch fabrics because of this loop. The device did not turn off the defective port because the transceiver communication was not affected and the transceiver itself did not report any errors.

At 12.40 the problematic port was disabled and the transceiver was physically removed from the ring stack.

At 12.45 the network was stabilizing and all BGP sessions were recovering.

At around 13.00 the transceiver was replaced with a spare; this transceiver shows no issues.
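
As a simplified illustration of the kind of counter comparison that exposed the port: when the locally received frame count grows much faster than what the remote end reports transmitting, traffic is being generated or looped somewhere in between. Port names and counter values below are made up.

# Two counter snapshots per port, taken one polling interval apart:
# (local RX packets, remote TX packets). All values are fabricated.
t0 = {"100G-1/1": (10_000, 10_000), "100G-1/2": (20_000, 20_000)}
t1 = {"100G-1/1": (9_900_000, 12_000), "100G-1/2": (45_000, 45_000)}

def suspicious_ports(before, after, ratio=10.0):
    """Flag ports where local RX grew far faster than remote TX."""
    flagged = []
    for port, (rx0, tx0) in before.items():
        rx1, tx1 = after[port]
        rx_delta, tx_delta = rx1 - rx0, tx1 - tx0
        if rx_delta > ratio * max(tx_delta, 1):
            flagged.append(port)
    return flagged

print(suspicious_ports(t0, t1))  # ['100G-1/1'] -- frames appear from nowhere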

#629 Core network maintenance DBA

Posted: 2018-04-04 10:20

Start: 2018-04-12 13:30:00
End : 2018-04-12 15:30:00

Affects: Zones 2,4 and 5 DBA

On Thursday, 12 April 2018 at 13:30 CEST, we will be performing a maintenance on our core network infrastructure.

We will be moving the physical paths of our zones to the new core network, which allows for capacity upgrades in the future.

You may notice some packet loss a few times when the new paths are activated; we will keep any interruptions as short as possible.

#628 Route modification towards ASn 10796 (Time Warner Cable)

Posted: 2018-03-22 06:48

Start: 2018-03-22 06:48:47
End : 2018-03-22 06:48:47

Affects: Traffic towards ASn 10796 (Time Warner Cable)

We have received reports of low throughput towards Time Warner Cable.

We have removed Cogent from the routing.

The next best path now goes over Zayo.
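
A sketch of the decision logic behind such a change: measure per-upstream goodput towards the destination, suppress any upstream below a floor, and keep the best of the rest. The upstream names and numbers are illustrative; this is not our actual tooling.

# Hypothetical measured goodput (Mbit/s) towards AS10796 per upstream.
measured = {"Cogent": 4.2, "Zayo": 310.0, "GTT": 280.0}

def pick_upstream(measured, floor_mbit=50.0):
    """Drop upstreams under the goodput floor, return the best remainder."""
    healthy = {name: v for name, v in measured.items() if v >= floor_mbit}
    if not healthy:
        raise RuntimeError("no upstream meets the goodput floor")
    return max(healthy, key=healthy.get)

print(pick_upstream(measured))  # 'Zayo'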

#627 Seabone outage GSA

Posted: 2018-03-19 12:48

Start: 2018-03-19 12:22:00
End : 2018-03-19 20:57:00

Affects: Routing Seabone GSA

Our connection to Seabone located at Globalswitch Amsterdam has gone down for no apparent reason.

We've sent a trouble ticket to Seabone to request more information about this outage.

All Seabone traffic is automatically re-routed over our other connection at Equinix Amsterdam 5.

Update 21.05:
Seabone informed us that a faulty card in their transmission equipment has been replaced.

#626 Wave service outage LINX

Posted: 2018-03-17 17:31

Start: 2018-03-17 16:00:00
End : 2018-03-18 09:50:00

Affects: Routing LINX GSA

At 16:00 we lost connectivity on our links at GSA (Global Switch Amsterdam) towards LINX. We have contacted our wave service provider to inform them of the loss of signal.

During the outage, all traffic towards LINX is rerouted over our other LINX connection.

The outage is not within NFOrce Network but with a 3rd party.

Update 17:30: Our wave service provider updated us that the outage is classified as a major outage on their platform.

Update 20:04: We received an update from our wave service provider:

We would like to inform you that the main power on-site has been restored. The backup generator in place is now available again. However, some of the affected traffic is still down and we are suspecting a faulty card - damaged by the power outage. We have already engaged a Field Engineer to attend the site with the needed spare cards and to perform card replacement. ETA is yet to be confirmed. Next update will be provided within the next 45-60 minutes.

- Unfortunately, one of our links is affected by this blade, and is still down.

Update 20:26: Another update has been posted by our wave service provider: estimated time of arrival for the engineer with spares is 21:00 UTC. Next update will be provided once the engineer is onsite.

Update 23:22:
Engineer is onsite. Next update will follow in 30-40 minutes.
Time of arrival for the spare is delayed due to bad weather conditions. New ETA is 0:30.

Update 2018-03-18 00:08:
Delivery of the spare cards is delayed due to the bad weather conditions in the UK. New ETA confirmed by the field team: 00:15 UTC.

Update 09.45:
Traffic has been restored following the successful configuration of the equipment. Services have been stable since 09:29 UTC and will remain under close monitoring during the day.

#625 Network issues DBA

Posted: 2018-03-15 12:25

Start: 2018-03-12 00:00:00
End : 2018-03-15 23:59:59

Affects: Routing DBA

Currently there is an issue in the DBA datacenter where both R1 and R2 stopped responding to their linecards, effectively cutting off all communication with the outside world.

Currently the cards are back online and are synching the routes with our transit providers. Our vendor is currently debugging along with us to find the source of these issues.

Update 12.45:
The issue was not with the linecards themselves but with the management handling ARP incorrectly.

Earlier this week the core routers at our DBA server hosting facility were facing issues. Shortly after the start of these incidents we implemented a workaround ourselves to prevent the routers from having to deal with the problematic traffic causing this issue.

The vendor has been working on debugging these issues since that time, using tech-logs/captures as well as live sessions. Today the vendor came with a suggested solution. This was implemented, and for a short period of time it did seem to improve the situation. Shortly after, the situation worsened and BGP sessions, including the ones to our IP transits, started to flap. We immediately removed their 'fix' and implemented our workaround again.

We have filed an official complaint with the vendor of the routers (Extreme, fka Brocade). This complaint means they have to escalate the case to the highest level of support as well as management for investigation.

Apologies for the issues on the routing in this facility in the past days. Please rest assured we are doing everything in our power to have them come up with a fix themselves. However, while awaiting their investigation we will leave our own workaround, which has proven to work stably, in place.

Update 19.00:
The issue seems to be back slightly. Brocade has, however, been working on the routers via a live session since 17.30. We have disabled some capacity to regain stability. When we know more, we will post an update.

Update 19.48:
Additionally we have located a MAC address causing issues in VLANs 210/211/212/215.

Update 2018-03-16 04.00:
Since the last update, Brocade engineers have been debugging live and will discuss the gathered information with their team.

Update 14.40:
Brocade is still debugging in their lab / discussing information with their team. So far everything is still stable but not resolved.

Update 2018-03-17 15.45:
We have analysed several recommended changes provided by the vendor. We are applying some of them to the configuration and will monitor the effects closely.

Update 16.30:
So far the first changes seem to have improved stability. Before deploying further changes we will keep this as-is for the time being to make sure it is stable.

Update 2018-03-19 11.20:
All has been stable with the last changes applied. We will now proceed with additional configuration to further increase stability.

Update 12.15:
Last week the routing platform at DBA suffered a series (~4) of relatively short but very impactful packet-forwarding issues. The vendor has been debugging alongside our NOC team to find the cause and a solution for this issue. A workaround was implemented after the first few incidents, and on Saturday a more definite fix was implemented. This has been tested, and the routing platform in DBA has been stable for ~4 days. We are hereby closing this incident; we will however keep working with the vendor to analyse the incident further over the next couple of weeks.

#624 Wave service maintenance DECIX

Posted: 2018-03-14 16:59

Start: 2018-03-20 23:00:00
End : 2018-03-21 07:00:00

Affects: Routing DE-CIX

The wave service provider that connects GlobalSwitch Amsterdam (GSA) to Frankfurt has a planned maintenance.

Their maintenance reason is "INTEROUTE has received notification that our supplier will perform Cable Works on their network. These works have been scheduled to take place between 20/Mar 22:00 UTC and 21/Mar 06:00 UTC. We hereby notify you that these works will cause your service(s) to be affected as per the table below. These works will be performed in Germany. INTEROUTE would like to apologise for the inconvenience these works will cause to your business."

Interoute has given this maintenance reference number: 1-3605150347.

All DE-CIX routed traffic is automatically routed over our other exchanges and IP transits.

For transparency purposes we decided to publish this notification.

#623 Performance issues DBA

Posted: 2018-03-13 10:22

Start: 2018-03-13 10:22:50
End : 2018-03-13 14:40:00

Affects: routing DBA

Following up on the issues from yesterday, Extreme Networks (fka Brocade) is debugging the issues on our routers located in DBA.

While we are debugging this with them, we try to do so in the least intrusive way possible.

It is however possible that you notice some packet loss at times while we actively try to debug and solve the current issues.

We apologize for any inconvenience caused and ask you to bear with us while we resolve the issues at hand.

Update 2018-03-13 14.40:
Vendor has finished up gathering logs of their debug session.

#622 Firmware upgrade R1 EQX-AM7

Posted: 2018-03-09 07:17

Start: 2018-03-24 02:00:00
End : 2018-03-24 04:00:00

Affects: Traffic routed through EQX-AM7, internet customers connected at EQX-AM7

We have planned a maintenance to upgrade the router firmware of R1 in EQX-AM7 (Equinix Amsterdam) to the latest version.

We apologize for the short notice; unfortunately, due to the urgency we cannot change the planned date.

Traffic will be rerouted through R1 of GSA (Global Switch Amsterdam). If you have redundant BGP sessions/links, only one session will go down and your traffic will flow over the other session/link.

Customers without a redundant connection through this router may experience an outage of up to 1 hour.

#621 Firmware upgrade R1 DBA

Posted: 2018-03-09 07:14

Start: 2018-03-12 21:45:00
End : 2018-03-12 22:45:00

Affects: Traffic DBA

We have planned a maintenance to upgrade the router firmware of R1 in DBA (Databarn Amsterdam) to the latest version.

We apologize for the short notice; unfortunately, due to the urgency we cannot change the planned date.

Traffic will be rerouted through R2 of DBA (Databarn Amsterdam). If you have redundant BGP sessions/links, only one session will go down and your traffic will flow over the other session/link.

Customers without a redundant connection through this router may experience an outage of up to 1 hour.

UPDATE 2018-03-12: Due to switching/performance issues we have scheduled the maintenance to be done with the highest urgency. We will start the upgrade process today after router 2 is completed.

#620 Firmware upgrade R1 DBC

Posted: 2018-03-09 07:11

Start: 2018-03-21 07:00:00
End : 2018-03-21 08:30:00

Affects: Traffic DBC

We have planned a maintenance to upgrade the router firmware of R1 in DBC (Databarn Capelle) to the latest version.

We apologize for the short notice; unfortunately, due to the urgency we cannot change the planned date.

Traffic will be rerouted through R2 of DBC (Databarn Capelle). If you have redundant BGP sessions/links, only one session will go down and your traffic will flow over the other session/link.

Customers without a redundant connection through this router may experience an outage of up to 1 hour.

UPDATE 2018-03-21 7:32: When we started the reload, we noticed that the redundancy did not work as intended, causing a complete outage on DBC. We are currently restoring traffic and investigating why the failover did not function correctly.

UPDATE 2018-03-21 7:57: We have inspected and resolved the issue, and we have tested the failover, which is now working as intended.
We will continue with the firmware upgrade.

UPDATE 2018-03-21 9:20: The firmware upgrade is completed and we have not experienced any further issues with the failover.

#618 Firmware upgrade R1 NZS

Posted: 2018-03-09 07:06

Start: 2018-03-26 07:00:00
End : 2018-03-26 08:30:00

Affects: Traffic NZS

We have planned a maintenance to upgrade the router firmware of R1 in NZS (Nedzone Steenbergen) to the latest version.

We apologize for the short notice; unfortunately, due to the urgency we cannot change the planned date.

Traffic will be rerouted through R2 of NZS (Nedzone Steenbergen). If you have redundant BGP sessions/links, only one session will go down and your traffic will flow over the other session/link.

Customers without a redundant connection through this router may experience an outage of up to 1 hour.

UPDATE 2018-03-19: Due to ongoing debug sessions on DBA, we have moved this maintenance to 2018-03-26 at 7:00 AM CET

#617 Firmware upgrade R1 GSA

Posted: 2018-03-09 07:03

Start: 2018-03-31 02:00:00
End : 2018-03-31 04:00:00

Affects: Traffic routed through GSA, internet customers connected at GSA

We have planned a maintenance to upgrade the router firmware of R1 in GSA (Global Switch Amsterdam) to the latest version.

We apologize for the short notice; unfortunately, due to the urgency we cannot change the planned date.

Traffic will be rerouted through R1 of EQX-AM7 (Equinix Amsterdam). If you have redundant BGP sessions/links, only one session will go down and your traffic will flow over the other session/link.

Customers without a redundant connection through this router may experience an outage of up to 1 hour.

Update 2018-03-15: Due to the current ongoing issues with DBA, we and Extreme Networks (fka Brocade) will be focusing on solving the issues at DBA.

We have moved the maintenance to 31 March 2018 at 2:00 AM CET.

For those who are concerned, we would like to note that GSA is not and will not be affected by the issues we have seen and are seeing at DBA.

#616 Firmware upgrade R2 DBA

Posted: 2018-03-09 07:02

Start: 2018-03-12 20:15:00
End : 2018-03-12 21:15:00

Affects: Traffic DBA

We have planned a maintenance to upgrade the router firmware of R2 in DBA (Databarn Amsterdam) to the latest version.

We apologize for the short notice; unfortunately, due to the urgency we cannot change the planned date.

Traffic will be rerouted through R1 of DBA (Databarn Amsterdam). If you have redundant BGP sessions/links, only one session will go down and your traffic will flow over the other session/link.

Customers without a redundant connection through this router may experience an outage of up to 1 hour.

UPDATE 2018-03-12: Due to switching/performance issues we have scheduled the maintenance to be done with the highest urgency. We will start the upgrade process today at 20:15.

#615 Firmware upgrade R2 DBC

Posted: 2018-03-09 07:01

Start: 2018-03-14 07:00:00
End : 2018-03-14 08:30:00

Affects: Traffic DBC

We have planned a maintenance to upgrade the router firmware of R2 in DBC (Databarn Capelle) to the latest version.

We apologize for the short notice; unfortunately, due to the urgency we cannot change the planned date.

Traffic will be rerouted through R1 of DBC (Databarn Capelle). If you have redundant BGP sessions/links, only one session will go down and your traffic will flow over the other session/link.

Customers without a redundant connection through this router may experience an outage of up to 1 hour.

#614 Firmware upgrade R2 NZS

Posted: 2018-03-09 06:56

Start: 2018-03-12 07:00:00
End : 2018-03-12 08:30:00

Affects: Traffic NZS

We have planned a maintenance to upgrade the router firmware of R2 in NZS (Nedzone Steenbergen) to the latest version.

We apologize for the short notice; unfortunately, due to the urgency we cannot change the planned date.

Traffic will be rerouted through R1 of NZS (Nedzone Steenbergen). If you have redundant BGP sessions/links, only one session will go down and your traffic will flow over the other session/link.

Customers without a redundant connection through this router may experience an outage of up to 1 hour.

#613 Wave service maintenance

Posted: 2018-03-08 07:55

Start: 2018-03-23 23:00:00
End : 2018-03-24 05:00:00

Affects: Routing DE-CIX

The wave service provider that connects Global Switch Amsterdam (GSA) to Frankfurt has a planned maintenance.

Their maintenance reason is " As part of our commitment to continually improve the quality of service we provide to our customers, we wish to inform you INTEROUTE will be performing cable rerouting works on our fiberoptical network. This planned activity will take place in Netherlands."

Interoute has given this maintenance reference number: 1-3599896285.

All DE-CIX routed traffic is automatically routed over our other exchanges and IP transits.

For transparency purposes we decided to publish this notification.

#612 Flapping 100gbit Linecard R1 DBC

Posted: 2018-03-07 14:12

Start: 2018-03-07 13:32:00
End : 2018-03-07 14:04:00

Affects: Traffic DBC

Router 1 at Databarn Capelle (DBC) experienced a flap on one of its 100 Gbit linecards. The problem started at 13:32.

We immediately sent out an engineer to replace the linecard. The linecard was replaced at 14:04 and the network looks stable again.

We will keep monitoring the router for the next couple of hours.

#611 Wave service outage DECIX

Posted: 2018-03-06 18:52

Start: 2018-03-06 17:52:00
End : 2018-03-06 19:15:00

Affects: Routing DECIX EQX-AM7

At 5:52 PM we lost connectivity with one link at Equinix AM7 (Amsterdam) towards DECIX. Our wave service provider has notified us about an outage within their network.

During the outage, all traffic towards DECIX will be rerouted over our DECIX connection at Global Switch Amsterdam.

The outage is not within the NFOrce network; it is with a carrier.

Their current reason for the outage is:
We have a major outage ongoing due to a power outage on the POP in Wiesbaden-Naurod, Germany affecting your service(s). An engineer team is on it's way to the POP to investigate the exact situation. Updates will follow as soon as possible.

Update 7.40 PM CET:
At this moment our port is up again and we've also received an update from our wave service provider:

The POP is without power since 16:19 CET. First, the backup batteries kicked in and no customers and backbone links were affected. At 18:03, the batteries were empty and diesel should have taken over. This didn't happen, and now the engineer activated the diesel, which restored your service(s). It's not known until what time the diesel will last, but we are doing our utmost to either get confirmation that more diesel is on the way or the issue will be resolved before the diesel will run out. We will update you accordingly.


Update 00.19:
As of 19.15 last night (after approx. 1.5 hours) the line has been restored by KPN.

#609 IP transit provider Core-Backbone

Posted: 2018-03-01 20:46

Start: 2018-03-02 00:00:00
End : 2018-03-02 05:00:00

Affects: Routing Core-Backbone

We have received a notification of a maintenance by one of our transit providers, Core-Backbone.

During their maintenance traffic will automatically reroute over other transit providers.

Their maintenance reason is "In the mentioned timeframe we will perform an EMERGENCY REPLACEMENT of the following core-router/core-switch:
nbg24.core-backbone.com (Nuremberg3)".

For transparency purposes we decided to publish this notification.

#608 Fiber @ work maintenance

Posted: 2018-02-27 19:02

Start: 2018-03-08 23:00:00
End : 2018-03-09 01:00:00

Affects: Fiber @ Work customers Brightfiber

Fiber provider Brightfiber is having a planned maintenance.

This provider is only used for Fiber @ Work connectivity; this maintenance does not affect any hosting or IP transit services.

Reason provided by provider:
Preventive maintenance will be performed on the Brightfiber network on the 8th of March 2018.

This maintenance affects the following industrial estates: Pijnacker, Moerkapelle, Waddinxveen, Berkel en Rodenrijs.

During the maintenance there will be a 5 - 10 minute outage on connections.


#605 DECIX maintenance EQX-AM7

Posted: 2018-02-22 15:11

Start: 2018-03-28 00:00:00
End : 2018-03-28 06:00:00

Affects: Routing DECIX EQX-AM7

The DECIX peering exchange has a planned maintenance on the port facing Equinix-AM7.

Their maintenance reason is "We will commence maintenance work to upgrade several of our nodes to improve redundancy. Therefore, we have to move your current connection. The work will cause a temporary downtime for about 5 minutes per port.

The migration is split into several sessions."

All DE-CIX routed traffic is automatically routed over our other DECIX connection.

Update 2018-03-18:
This maintenance was postponed by a week by DECIX, with the following reason given:

Due to delays during the installation of our new precabling, we need to shift the maintenance for one week, please see the revised date above.

The timeframe has been updated accordingly.

#604 Power feed maintenance Equinix AM5

Posted: 2018-02-19 15:16

Start: 2018-03-27 07:30:00
End : 2018-03-27 15:30:00

Affects: Power redundancy Equinix AM5

This maintenance does not affect any hosting or IP transit services.

Following NOC maintenance #581 earlier this year, Equinix will be placing the feed back on the main busbar, after the temporary measures taken during that earlier maintenance.

Below you can find the maintenance message which was sent to us:
Please be advised that Equinix and our approved contractor will be performing remedial works to migrate several sub-busbar sections back from there temporary source to the replaced main busbar which became defective as reported in incident AM5 - [5-123673165908].

During the migration, one (1) of your cabinet(s) power supplies will be temporary unavailable for approximately six (6) hours. The redundant power supply(s) remains available and UPS backed.


#603 Wave service maintenance FranceIX

Posted: 2018-02-19 14:19

Start: 2018-03-16 23:00:00
End : 2018-03-17 05:00:00

Affects: Routing FRANCEIX

The wave service provider that connects Globalswitch to the FranceIX peering exchange in Paris has a planned maintenance.

All FranceIX routed traffic is automatically routed over our other peering exchanges or IP transit connections.

Their maintenance reason is "As part of our commitment to continually improve the quality of service we provide to our customers, we wish to inform you INTEROUTE will be performing splice box replacement works on our fiberoptical network. This planned activity will take place in Belgium. Please be advised your services will be affected as per table below between 02/Mar 22:00 UTC and 03/Mar 04:00 UTC. INTEROUTE would like to apologise for the inconvenience these works will cause to your business."

Their reference: 1-3475945451

For transparency purposes we decided to publish this notification.


Update 2018-03-02 09.45:
Rescheduled by Interoute to 2018-03-16 23.00 - 2018-03-17 05.00.

#602 IP transit provider NTT

Posted: 2018-02-16 19:14

Start: 2018-03-02 02:00:00
End : 2018-03-02 05:00:00

Affects: Redundancy routing NTT

We have received a notification of a maintenance by one of our transit providers, NTT.

During their maintenance traffic will automatically reroute over our other NTT connection as well as over other transit providers. We do not expect any issues.

Their maintenance reason is "Users connected to this device will experience downtime during this maintenance window, We will be performing performing migrations affecting the service(s)".

For transparency purposes we decided to publish this notification.