All Notifications

[#971] Emergency reload of R2 NZS (Maintenance)

Posted: 2021-10-21 07:23

Start: 2021-10-21 07:21:00
End : 2021-10-21 10:19:00

Affects: Redundancy NZS

As IPv6 continues to cause issues, we are reloading router 2 to apply new settings that address the ongoing IPv6 problems.

We will make sure that all traffic fails over to R1 before the reload.

Update 9:21:
Now that router R2 is back in production, we will also reload R1 to bring these settings into effect.

Update 10:18:
Both routers have been reloaded with the new settings, and we have confirmed that the issue is resolved.

[#970] Emergency power maintenance DBA (Maintenance)

Posted: 2021-10-19 16:25

Start: 2021-10-20 07:00:00
End : 2021-10-20 13:00:00

Affects: Databarn Amsterdam

We received notice today from the datacenter that the power company Liander is performing emergency maintenance tomorrow to repair a 10 kV ring in Amsterdam.

Initially this was planned for 2022, but they brought it forward to the 20th of October to be able to guarantee a reliably working electricity grid.

The datacenter itself was also only informed about this today.

We asked for assurances, and they assured us that no outages are expected: they have UPSes and generators, and have arranged for a company to bring in an extra generator as well.

We will be monitoring the situation closely tomorrow.

[#969] Router 2 NZS MGMT Failover (Status)

Posted: 2021-09-29 21:53

Start: 2021-09-29 21:17:00
End : 2021-09-30 08:30:00

Affects: Traffic NZS

At 21:17 we noticed a management card flap at our second router at NZS.

We are currently investigating this while our BGP sessions load up again.

Update 01:10:
We continue to see the management cards (both active and standby) swap on this router at random intervals. Router 1 is carrying all customer load at the moment while we debug this issue.

Update 10:30:
We have replaced the management cards of R2 and monitored the situation. Since the replacement, we have not seen any further swapping between the active and standby cards.

We believe the issue is resolved; however, we will continue to monitor the situation during the day.

[#968] Emergency reload R1 EQX-AM7 (Status)

Posted: 2021-09-14 09:01

Start: 2021-09-14 09:00:00
End : 2021-09-14 10:25:00

Affects: FTTB/FTTH Customer, IP transit Customers

Due to a range of issues we are going to do an emergency reload of our router in EQX-AM7.

Internet customers, as well as customers with only a single BGP session, will experience a loss of connectivity. Customers with a redundant BGP session with us should not experience any issues.

We are aware it is the start of the workday for most, but due to the ongoing issues we have little choice but to perform this on very short notice. We will do our best to keep the downtime to an absolute minimum.

Update 10:25
We have finished the reload and restored the BGP sessions. The main cause for this emergency reload was the router no longer propagating new routes or pruning old ones.

[#967] Darkfiber outage NZS / BNR (Status)

Posted: 2021-09-04 12:49

Start: 2021-09-04 12:00:00
End : 2021-09-04 21:00:00

Affects: Redundancy BNR / NZS

Around 12:00 we noticed that our connections from GSA towards Nedzone Steenbergen (NZS) and Bytesnet Rotterdam (BNR) went down.

Our darkfiber provider GTT has informed us of a fiber cut between Amsterdam and Rotterdam; as both BNR and NZS make use of this darkfiber, both sites have lost redundancy.

All traffic for NZS is rerouted over our darkfiber to EQX-AM7; all traffic for BNR is rerouted over DBC.

Wave customers between NZS and GSA are currently down.

Update 1: 2021-09-04 10:23 UTC
Please be advised that the GTT Infra NOC detected an interruption occurring on our network between Amsterdam & Rotterdam at 09:18 UTC. Initial investigation has detected a potential fiber cut occurring. FLM teams have been engaged and are en route with an ETA of 90 mins to site.

Update 2: 2021-09-04 11:29 UTC
FLM teams report that they have now reached site & are beginning their investigations.
Once the OTDR has been shot crews will travel to the break point & ETTR will be made available following a damage assessment.

Update 3: 2021-09-04 12:23 UTC
Crews on site have shot the OTDR & discovered the location of the break, approximately 450 meters outside of Amsterdam. Teams are en route to the break point now with an ETA on site of roughly 60 minutes.

Update 4: 2021-09-04 14:05 UTC
Crews have now arrived at the break point & discovered the fiber impacted by 3rd party drilling activity on site. A damage assessment & repair plan is being formulated by the FLM crew & civils work will need to be undertaken to excavate the damaged cable. ETTR will become available once complete.

Update 5: 2021-09-04 15:15 UTC
Crews report that civils crews are now on site & are preparing to excavate the damaged fiber. The repair plan will include the introduction of two new handholes along the route of the fiber. ETTR will become available once civils work has been completed & the additional infrastructure deployed to provide a safe route around the ongoing drilling work.

Update 6: 2021-09-04 16:25 UTC
Crews report that excavation work on site is progressing well with splicing to begin at a tentative time of 18:30 UTC.

Update 7: 2021-09-04 17:42 UTC
Crews on site report that excavation work has now completed & they are prepping the fiber for splicing which will commence momentarily.

Final Update: 2021-09-04 18:56 UTC
FLM teams confirm that all fibers have now been spliced & the GTT NOC can confirm that all alarms have cleared with services restoring.
The actions taken by the team on site will constitute a permanent repair & no further disruption to service is anticipated.

[#966] Eurofiber FTTB Internet Outage (Status)

Posted: 2021-08-12 11:20

Start: 2021-08-12 10:50:00
End : 2021-08-12 17:22:00

Affects: Traffic Internet Customers

We've become aware of multiple issues affecting a portion of our Internet customers and are currently investigating.

Once we have an update it will be posted here.

Update 11:45: An engineer has been dispatched to replace a possibly faulty optic.

Update 14:50: An engineer from the external party is on their way to troubleshoot the link.

Update 17:22:
At the moment our port is up again. We will request a formal RFO (Reason For Outage) from Eurofiber, since eight hours of downtime is not normal for the issue we were experiencing.

[#965] Packet loss NZS (Status)

Posted: 2021-07-27 04:39

Start: 2021-07-27 02:00:00
End : 2021-07-27 04:00:00

Affects: Traffic in NZS

We've noticed some packet loss in NZS.

One of the optics in a bundle of multiple 100 Gbit optics started giving issues without the link going down, causing packet loss. Because the link never went down, the issue was hard to pinpoint.

We have now disabled the problematic link and will schedule a debug session for it without affecting production traffic.
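A failing optic that never drops the link typically only shows up in per-member error counters on the bundle. A minimal polling sketch of that idea (interface names, counter values, and the threshold are hypothetical; a real poller would read the counters via SNMP or the device API):

```python
# Compare two snapshots of per-member input-error counters on a link bundle
# and flag members whose error count grew past a threshold, even though the
# link itself never went down. All values below are hypothetical.

def degraded_members(prev, curr, threshold=100):
    """Return bundle members whose input-error delta exceeds `threshold`."""
    return [
        member
        for member, errors in curr.items()
        if errors - prev.get(member, 0) > threshold
    ]

# Two polling snapshots, e.g. five minutes apart (hypothetical values).
before = {"hu-0/0/1": 12, "hu-0/0/2": 12340, "hu-0/0/3": 9}
after  = {"hu-0/0/1": 13, "hu-0/0/2": 98765, "hu-0/0/3": 9}

print(degraded_members(before, after))  # → ['hu-0/0/2']
```

Polling deltas rather than absolute counts matters here: a member that accumulated errors long ago looks identical to a healthy one unless the rate of growth is measured.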

[#964] Maintenance Transit TI Sparkle (Maintenance)

Posted: 2021-07-01 08:55

Start: 2021-07-07 09:00:00
End : 2021-07-07 17:00:00

Affects: Traffic towards TI Sparkle

We have planned an upgrade of network capacity towards TI Sparkle.

During this maintenance all TI Sparkle traffic will be automatically rerouted over our other transits and peering connections.

[#963] Packetloss NZS (Nedzone) Network (Status)

Posted: 2021-06-09 14:23

Start: 2021-06-09 12:00:00
End : 2021-06-09 12:30:00

Affects: Routing NZS (Nedzone) Datacenter Steenbergen

Today we had packet loss in our datacenter Nedzone Steenbergen.

After investigation we have noticed a hardware defect in one of the linecards.
The card was only using part of its route memory for the installed routes (1M instead of the expected 2M).

The issue was not clear at first, as everything looked correct, including the optical values. After further investigation we found that the card was not functioning correctly for the above reason, and we have disabled the affected ports until we have a fix at hand.

[#962] Resolvers being DDOSsed (Status)

Posted: 2021-06-03 13:50

Start: 2021-06-03 13:00:00
End : 2021-06-03 15:35:00

Affects: DNS Resolvers

Currently we are seeing a DDoS against our DNS resolvers; as many servers use these resolvers, you may be experiencing issues as well.

While we focus on mitigating the attack, an immediate workaround is to switch to any of the following public resolvers:

Google: 8.8.8.8, 8.8.4.4
OpenDNS: 208.67.222.222, 208.67.220.220
Cloudflare: 1.1.1.1
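On a typical Linux server, the workaround above comes down to editing /etc/resolv.conf (a sketch; on systems where DHCP or systemd-resolved manages this file, manual edits may be overwritten):

```
# /etc/resolv.conf — temporary public resolvers while ours are under attack
nameserver 8.8.8.8
nameserver 8.8.4.4
```

Remember to revert the change once the attack has been mitigated.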

Update 15:35:
Currently the DDoS has been mitigated; we will keep monitoring the situation.

[#961] Darkfiber maintenance NZS - GSA (Maintenance)

Posted: 2021-06-01 14:15

Start: 2021-06-17 23:00:00
End : 2021-06-18 07:00:00

Affects: Redundancy NZS

Darkfiber provider Eurofiber, which connects Nedzone Steenbergen (NZS) to Globalswitch Amsterdam (GSA), has a planned maintenance window. The connectivity between NZS and GSA will automatically reroute over our EQX-AM7 darkfiber during this maintenance.

Customers having their own waves from NZS to GSA will experience an outage during this time.

[#960] Darkfiber maintenance NZS - EQX-AM7 (Maintenance)

Posted: 2021-06-01 14:12

Start: 2021-06-04 00:10:00
End : 2021-06-04 06:00:00

Affects: Redundancy NZS

Darkfiber provider Eurofiber, which connects Nedzone Steenbergen (NZS) to Equinix AM7 (EQX-AM7), has a planned maintenance window. The connectivity between NZS and EQX-AM7 will automatically reroute over our GSA darkfiber during this maintenance.

Customers having their own waves from NZS to EQX-AM7 will experience an outage during this time.

[#959] Darkfiber maintenance DBA - EQX-AM7 (Maintenance)

Posted: 2021-05-17 16:02

Start: 2021-05-21 00:00:00
End : 2021-05-21 07:00:00

Affects: Redundancy DBA

Darkfiber provider GTT, which connects Databarn Amsterdam (DBA) to Equinix AM7 (EQX-AM7), has a planned maintenance window. The connectivity between DBA and EQX-AM7 will automatically reroute over our GSA darkfiber during this maintenance.

[#958] Power outage DBC (Databarn Capelle) (Status)

Posted: 2021-05-10 05:15

Start: 2021-05-10 12:50:00
End : 2021-05-10 17:00:00

Affects: All services DBC (Databarn Capelle)

We are currently investigating an issue with the power distribution at DBC. The power started to flap at 12:50 and went completely down at 13:00.

We have contacted the datacenter for more information on the issue and are waiting for a response.

Update 13:29:
Stedin has reported an outage; we have received notification of an issue in a 100 kV substation. More information can be found at https://www.stedin.net/storing-en-onderhoud/overzicht .

Update 17:17:

We received a message from the datacenter:

"We just had a major impact in our power supply due to a problem with Stedin.

The power was cut off at 12:34 and returned after several seconds. The next problem that occurred was low voltage on one feed.

https://www.stedin.net/storing-en-onderhoud/storingshistorie

The UPS stabilised the drops as they kept coming and going. As a safety measure we tried to bypass the UPS and put the full load on our generators, but since the power never fully dropped, the system declined to bypass. Due to an error, the power was then cut off for several minutes."

[#957] Rack down in DBA (Status)

Posted: 2021-04-10 09:24

Start: 2021-04-10 08:43:00
End : 2021-04-10 11:55:00

Affects: Rack a1.z4 DBA

As of 08:43 a rack in Databarn Amsterdam (DBA) has gone down.

Our engineers are currently investigating the issue.

Our sincere apologies for this inconvenience.

When we have more information on the exact issue we'll update the noc post here.

UPDATE 11:55: The switch has been replaced and connectivity should be restored. If your servers are still offline, please open a support ticket.

[#956] Outage cooling systems DBA (Status)

Posted: 2021-03-30 23:58

Start: 2021-03-30 23:30:00
End : 2021-03-31 01:00:00

Affects: Services DBA

Since 23:30 we have been experiencing an outage of the cooling systems in Databarn Amsterdam (DBA).

Currently, engineers of the datacenter are resolving the issue.

Systems may shut down due to overheating.

We are chasing them for an ETR and solution.

Update 00:10:
A backup system has been started to send as much cold air into the rooms as possible while the vendor is debugging the primary cooling system.

Update 00:40:
We are currently seeing a drop in temperatures throughout all datacenter rooms; since all hot air needs to be replaced by cold air, it may take some time before temperatures return to previous values.

Update 00:55:
At the moment both the primary and backup cooling are running to cool the datacenter as quickly as possible, and we have seen temperatures drop back to normal values throughout all rooms.

In case you have one or more servers that went into a thermal shutdown please let us know by emailing support@nforce.com so we can follow up for you.

Our sincere apologies for this inconvenience.

When we have more information on the exact issue with the primary cooling we'll update the noc post here.

[#955] Maintenance Transit NTT (Maintenance)

Posted: 2021-03-23 14:05

Start: 2021-04-01 02:00:00
End : 2021-04-01 05:00:00

Affects: Traffic towards NTT

We have received a notification of a maintenance by one of our transit providers, NTT.

During their maintenance all NTT traffic will be automatically rerouted over our other transits and peering connections.

Their maintenance reason: Services migration

For transparency purposes we decided to publish this notification.

[#954] Maintenance Transit NTT (Maintenance)

Posted: 2021-03-23 13:52

Start: 2021-03-30 02:00:00
End : 2021-03-30 05:00:00

Affects: Traffic towards NTT

We have received a notification of a maintenance by one of our transit providers, NTT.

During their maintenance all NTT traffic will be automatically rerouted over our other transits and peering connections.

Their maintenance reason: Services migration

For transparency purposes we decided to publish this notification.

[#953] Packet loss DBC (Status)

Posted: 2021-03-20 09:40

Start: 2021-03-20 08:45:00
End : 2021-03-20 09:30:00

Affects: Traffic in DBC

We've gotten reports of packet loss in DBC.

It appears an optic has failed between our core-switch and one of our routers, causing packet loss.

We have properly disabled the troubling link and will schedule a replacement in the oncoming week.

[#952] Packetloss towards Facebook (AS32934) (Adjustments)

Posted: 2021-02-15 09:19

Start: 2021-02-14 14:40:00
End : 2021-02-14 14:40:00

Affects: Traffic Facebook

We have received reports of packetloss towards Facebook (AS32934).

We have removed the path over LINX in the routing tables.

The best selected path is now over Cogent.
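Rejecting the LINX-learned path so that BGP best-path selection falls through to the route learned via Cogent is a standard inbound-policy change. A vendor-neutral sketch in IOS-style syntax (the local AS number, peer address, and policy names are hypothetical, not our actual configuration):

```
! Filter routes originated by AS32934 (Facebook) on the LINX session,
! so best-path selection moves to the path learned elsewhere.
ip as-path access-list 10 permit _32934$

route-map LINX-IN deny 10
 match as-path 10
route-map LINX-IN permit 20

router bgp 64500                                ! hypothetical local ASN
 neighbor 195.66.224.254 route-map LINX-IN in   ! hypothetical LINX peer
```

After applying such a policy, a soft inbound clear of the session re-evaluates the received routes without tearing the session down.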

[#951] Router Reload GlobalSwitch Amsterdam (Maintenance)

Posted: 2021-02-11 10:50

Start: 2021-02-28 00:30:00
End : 2021-02-28 04:30:00

Affects: Traffic GSA

Because of some maintenance-related tasks we will be reloading our GSA router on the 28th of February at 00:30.

Our peering traffic will automatically flow over our EQX-AM7 router.

Internet and Transit customers only having a connection to this router will experience some downtime during the reload.

[#950] Outage NL-ix (Internet Exchange) (Status)

Posted: 2021-02-04 13:53

Start: 2021-02-04 13:15:00
End : 2021-02-04 13:40:00

Affects: Traffic via NL-ix

We have noticed a large outage at one of the Dutch Internet Exchanges, NL-ix.

The outage is clearly visible on their traffic graphs: https://www.nl-ix.net/network/traffic-stats/

The issue started at 13:15 and was resolved around 13:40.

We will monitor the situation, and if the issue returns we will temporarily disable our sessions with NL-ix.

Update sent by NL-ix at 15:07:
The investigation into the cause is still ongoing, but it seems this is related to the general issue going on in Nikhef.

[#949] High latency towards Global Connect (AS2116) (Adjustments)

Posted: 2021-02-02 10:25

Start: 2021-02-02 10:22:00
End : 2021-02-02 10:22:00

Affects: Traffic Global Connect

We have received reports of high latency towards Global Connect (AS2116).

We have preferred the path over NTT in the routing tables.

The previous best selected path was over Lumen.

[#948] Wave service maintenance LINX EQX-AM7 (Maintenance)

Posted: 2021-01-06 10:26

Start: 2021-01-06 23:00:00
End : 2021-01-07 05:00:00

Affects: Traffic LINX EQX-AM7

Our wave service provider, which connects Equinix AM7 (EQX-AM7) to LINX, has a planned maintenance window.

Their maintenance reason is "Maintenance on our Fibre infrastructure platform related to Cable Splicing."

GTT has given this maintenance reference number: 5639371

All LINX routed traffic is automatically routed over our other exchanges and IP transits.

For transparency purposes we decided to publish this notification.

[#947] Darkfiber maintenance NZS - EQX-AM7 (Maintenance)

Posted: 2021-01-04 11:51

Start: 2021-01-29 00:00:00
End : 2021-01-29 06:00:00

Affects: Traffic between NZS - EQX-AM7

Darkfiber provider Eurofiber, which connects Nedzone Steenbergen (NZS) to Equinix AM7 (EQX-AM7), has a planned maintenance window. The connectivity between NZS and EQX-AM7 will automatically reroute over our GSA darkfiber during this maintenance.

Customers having their own waves from NZS to EQX-AM7 will experience an outage during this time.

[#946] Wave service maintenance FranceIX GSA (Maintenance)

Posted: 2020-12-16 08:48

Start: 2021-01-05 23:00:00
End : 2021-01-06 05:00:00

Affects: Traffic FranceIX GSA

Our wave service provider, which connects Global Switch Amsterdam (GSA) to FranceIX, has a planned maintenance window.

Their maintenance reason is "Maintenance on our Transmission platform related to Hardware Replacement."

GTT has given this maintenance reference number: 5669155

All FranceIX routed traffic is automatically routed over our other exchanges and IP transits.

For transparency purposes we decided to publish this notification.

[#945] Wave service issue DECIX EQX-AM7 (Status)

Posted: 2020-12-08 14:38

Start: 2020-12-08 09:00:00
End : 2020-12-08 20:30:00

Affects: Traffic DECIX EQX-AM7

Around 9:00 we noticed that our connection from EQX-AM7 towards DECIX went down.

The wave service provider which connects Equinix AM7 (EQX-AM7) to DECIX is investigating the issue.

Update 14:00: The third-party engineers performed OTDR measurements, showing an interruption between Cologne and Dusseldorf.
A fibre team is on the way to the break location; no ETA/ETR yet.

Final update from our wave service provider: After concluding our investigation, the GTT optical NOC can report that we observed an outage on our DWDM backhaul between Dusseldorf and Linz, through which your circuit traverses. A case was opened with the underlying fibre provider, whose team immediately started their investigation and dispatched technicians to Dusseldorf to troubleshoot the fiber. They performed an OTDR test and identified a possible fiber cut. An engineer was dispatched and found a fiber cut caused by construction work at Bergisch Gladbacher Str., 51063 Cologne. Our partner spliced the fiber, which restored the services. The restoration work was delayed because approximately 400 ft of new cable, with 6 tubes to be spliced, had to be installed to restore connectivity. Our partner confirmed this is a permanent fix.

[#944] Darkfiber maintenance NZS - GSA (Maintenance)

Posted: 2020-11-02 08:57

Start: 2020-11-19 23:00:00
End : 2020-11-20 07:00:00

Affects: Redundancy Nedzone, wave customers

Darkfiber provider Eurofiber, which connects Nedzone Steenbergen (NZS) to Globalswitch Amsterdam (GSA), has a planned maintenance window. The connectivity between NZS and GSA will automatically reroute over our EQX-AM7 darkfiber during this maintenance.

Customers having their own waves from NZS to GSA will experience an outage during this time.

Update 2020-11-13:
This maintenance has been postponed by Eurofiber for now; once a new date is available, we'll create a new post.

[#943] Darkfiber maintenance NZS - GSA (Maintenance)

Posted: 2020-11-02 08:55

Start: 2020-11-19 23:00:00
End : 2020-11-20 07:00:00

Affects: Redundancy Nedzone, wave customers

Darkfiber provider Eurofiber, which connects Nedzone Steenbergen (NZS) to Globalswitch Amsterdam (GSA), has a planned maintenance window. The connectivity between NZS and GSA will automatically reroute over our EQX-AM7 darkfiber during this maintenance.

Customers having their own waves from NZS to GSA will experience an outage during this time.

[#942] Darkfiber maintenance NZS - EQX-AM7 (Maintenance)

Posted: 2020-10-14 10:00

Start: 2020-11-06 00:00:00
End : 2020-11-06 06:00:00

Affects: Redundancy Nedzone, wave customers

Darkfiber provider Eurofiber, which connects Nedzone Steenbergen (NZS) to Equinix AM7 (EQX-AM7), has a planned maintenance window. The connectivity between NZS and EQX-AM7 will automatically reroute over our GSA darkfiber during this maintenance.

Customers having their own waves from NZS to EQX-AM7 will experience an outage during this time.

[#941] Wave service maintenance DECIX GSA (Maintenance)

Posted: 2020-10-09 08:36

Start: 2020-10-18 00:00:00
End : 2020-10-18 06:00:00

Affects: Traffic DECIX GSA

Our wave service provider, which connects Global Switch Amsterdam (GSA) to DECIX, has a planned maintenance window.

Their maintenance reason is "Fiber optic works needs to take place in our network. "

GTT has given this maintenance reference number: 5494823

All DECIX routed traffic is automatically routed over our other exchanges and IP transits.

For transparency purposes we decided to publish this notification.

[#940] Slow speeds towards Vodafone Germany (AS3209) (Adjustments)

Posted: 2020-10-06 16:18

Start: 2020-10-06 16:15:00
End :

Affects: Traffic Vodafone Germany

We have received reports of slow speeds towards Vodafone Germany (AS3209).

We have preferred the path over Liberty Global in the routing tables.

The previous best selected path was over Level3.

[#939] Darkfiber maintenance DBC - EQX-AM7 (Maintenance)

Posted: 2020-10-02 14:00

Start: 2020-10-15 23:00:00
End : 2020-10-16 06:00:00

Affects: Redundancy Databarn Capelle

Darkfiber provider EUNetworks, which connects Databarn Capelle (DBC) to Equinix AM7 (EQX-AM7), has a planned maintenance window. The connectivity between DBC and EQX-AM7 will automatically reroute over our GSA darkfiber during this maintenance.

[#938] Issues DBC (Status)

Posted: 2020-09-21 11:03

Start: 2020-09-21 10:20:00
End : 2020-09-21 16:15:00

Affects: Traffic DBC

At 10:20 one of the ring switches in DBC, part of a stack of two switches, reloaded itself. The moment the second switch came back online we started seeing packetloss throughout DBC.

For now we have disabled all ports on the second switch while we investigate this matter; we are currently seeing stable pings again.

Update 12:20:
Half of the ports have been enabled again, after resetting the bridges on the VPLS ring.

Update 16:15:
The other half has been enabled now as well.

We've opened a case with the vendor to debug this in more detail.

[#937] Emergency reload R1 EQX-AM7 (Status)

Posted: 2020-09-18 13:17

Start: 2020-09-18 13:20:00
End : 2020-09-18 15:30:00

Affects: FTTB/FTTH Customer, IP transit Customers

Due to the ongoing issues described in post #936, we are going to do an emergency reload of our router in EQX-AM7.

Internet customers, as well as customers with only a single BGP session, will experience a loss of connectivity. Customers with a redundant BGP session with us should not experience any issues.

We are aware it is the middle of the day for most, but due to the ongoing issues we have little choice but to perform this on very short notice. We will do our best to keep the downtime to an absolute minimum.

Update 15:30:
The router has been reloaded and is back online.

[#936] Internet issues Fiber to the business (Status)

Posted: 2020-09-18 09:53

Start: 2020-09-18 09:00:00
End : 2020-09-18 17:22:00

Affects: Customer with FTTB Internet

We've become aware of multiple issues affecting our FTTB customers and are currently investigating.

Once we have an update it will be posted here.

Update 15:42:
After the reload of our router in the previous ticket we see most of the issues resolved; a few remain and we'll continue working on those.

Update 17:22:
The last few affected customers now also report that everything is working again.

In case you still notice any issues please let us know.

[#935] Outage darkfiber Databarn Capelle (DBC) - Globalswitch Amsterdam (GSA) (Status)

Posted: 2020-09-16 14:19

Start: 2020-09-16 13:50:00
End : 2020-09-17 08:30:00

Affects: Redundancy DBC

Currently we are experiencing an outage on the darkfiber between GSA and DBC, which carries 400 Gbps of connectivity. All traffic is diverted over our darkfiber between DBC and Equinix AM7.

At this moment we are investigating the issue and will keep this post updated with our findings.

Update 15:20 CEST:
We've been notified by our darkfiber provider that there is currently a fiber break on the path between Rotterdam and Amsterdam.

Update 2020-09-17 21:30 CEST:
Our darkfiber is back online, but we are awaiting a formal hand-off from our darkfiber provider's crew before closing this case.

[#934] Maintenance Transit Liberty Global (Maintenance)

Posted: 2020-09-09 14:15

Start: 2020-09-16 00:01:00
End : 2020-09-16 00:31:00

Affects: Traffic towards Liberty Global

We have received a notification of a maintenance by one of our transit providers, Liberty Global.

During their maintenance all Liberty Global traffic will be automatically rerouted over our other transits and peering connections.

Their maintenance reason: Hotcut fibers in AM5 towards Liberty Global.

For transparency purposes we decided to publish this notification.

[#933] Darkfiber maintenance NZS - EQX-AM7 (Maintenance)

Posted: 2020-09-08 10:05

Start: 2020-09-17 00:00:00
End : 2020-09-17 06:00:00

Affects: Redundancy Nedzone, wave customers

Darkfiber provider Eurofiber, which connects Nedzone Steenbergen (NZS) to Equinix AM7 (EQX-AM7), has a planned maintenance window. The connectivity between NZS and EQX-AM7 will automatically reroute over our GSA darkfiber during this maintenance.

Customers having their own waves from NZS to EQX-AM7 will experience an outage during this time.

[#932] Outage ForeFreedom Network (Status)

Posted: 2020-09-01 09:41

Start: 2020-09-01 09:20:00
End : 2020-09-01 09:40:00

Affects: FTTH/FTTB Customers

We have been in contact with ForeFreedom about a possible network outage within their platform.

Having investigated together, we have confirmed that everything on our side is running optimally; the issue therefore appears to reside in their platform.

ForeFreedom's engineers are currently looking into the issue.

Once we receive new updates from ForeFreedom, we will update this post accordingly.

Please note that this outage only affects Fiber to the Home/Business networks connected through ForeFreedom; it does not affect datacenter, transit, or hosting connectivity, nor other Fiber to the Home/Business connections.

Update 2020-09-01 10:35:
We have received an update from ForeFreedom: they have rerouted the traffic, as there is a major outage on their darkfiber.

[#931] Packetloss/connectivity issues DBC (Databarn Capelle) (Status)

Posted: 2020-08-27 20:14

Start: 2020-08-27 19:20:00
End : 2020-08-27 19:40:00

Affects: Hosting DBC (Databarn Capelle)

At 19:25 we received notifications from our monitoring system about a minor connectivity outage at DBC.

After investigation we have found one of the optics reported fault causing several VLANS to flap between VRRP primary to VRRP secondary which caused some connectivity issues.

The offending port is shutdown at 19:30 and VRRP completely restored at 19:40.

We will be monitoring the situation and will schedule a replacement of the defective optic.
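For readers unfamiliar with the mechanism: VRRP elects a primary per virtual router, and missed advertisements from the primary make the secondary take over, which is why a faulting optic caused the gateways to bounce back and forth. A minimal keepalived configuration illustrates the moving parts (all names and addresses are hypothetical, not our production setup):

```
# /etc/keepalived/keepalived.conf — minimal VRRP sketch (hypothetical values)
vrrp_instance VLAN100 {
    state MASTER             # preferred role on this node
    interface eth0
    virtual_router_id 100    # must match on primary and secondary
    priority 150             # higher priority wins the election
    advert_int 1             # seconds between advertisements; missed
                             # adverts trigger failover to the backup
    virtual_ipaddress {
        192.0.2.1/24         # gateway address that moves on failover
    }
}
```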

[#930] DDoS attack towards internet customers/IP transit customers (Status)

Posted: 2020-08-21 18:26

Start: 2020-08-21 17:58:00
End : 2020-08-21 18:22:00

Affects: Internet and IP transit customers

We have received yet another DDoS attack towards an IP transit customer.

The DDoS was 100+ Gbps in size, and we have taken mitigation steps for the IP address that was being attacked.
We will also be taking additional steps.

All systems should now be running normally again.

We are sorry for the inconvenience.

[#929] DDoS attack towards internet customers/IP transit customers (Status)

Posted: 2020-08-20 19:35

Start: 2020-08-20 18:51:00
End : 2020-08-20 19:19:00

Affects: Internet and IP transit customers

We have received a DDoS attack towards an IP transit customer.

The DDoS was 100+ Gbps in size, and we have taken mitigation steps for the IP address that was being attacked.

All systems should now be running normally again.

We are sorry for the inconvenience.

[#928] Outage DBC (Status)

Posted: 2020-08-12 01:37

Start: 2020-08-12 00:15:00
End : 2020-08-12 01:37:00

Affects: Network DBC

We experienced an issue with the feed row above one of our core racks; it hit both of our core switches, causing a complete outage of both.

With the help of engineers we have brought them back online and will now look at the individual issues that are still occurring.

[#927] Darkfiber maintenance NZS - EQX-AM7 (Maintenance)

Posted: 2020-08-10 09:54

Start: 2020-09-08 00:00:00
End : 2020-09-08 06:00:00

Affects: Redundancy Nedzone, wave customers

Darkfiber provider Eurofiber which connects Nedzone Steenbergen (NZS) towards Equinix AM7 (EQX-AM7) is having a planned maintenance. The connectivity between NZS and EQX-AM7 will automatically reroute over our GSA darkfiber during this maintenance.

Customers having their own waves from NZS to EQX-AM7 will experience an outage during this time.

[#926] Darkfiber maintenance NZS - EQX-AM7 (Maintenance)

Posted: 2020-08-10 09:51

Start: 2020-09-01 00:00:00
End : 2020-09-01 06:00:00

Affects: Redundancy Nedzone, wave customers

Darkfiber provider Eurofiber which connects Nedzone Steenbergen (NZS) towards Equinix AM7 (EQX-AM7) is having a planned maintenance. The connectivity between NZS and EQX-AM7 will automatically reroute over our GSA darkfiber during this maintenance.

Customers having their own waves from NZS to EQX-AM7 will experience an outage during this time.

[#925] Darkfiber maintenance DBA - EQX-AM7 (Maintenance)

Posted: 2020-08-08 01:05

Start: 2020-08-20 00:00:00
End : 2020-08-20 06:00:00

Affects: Redundancy Databarn Amsterdam

Darkfiber provider GTT which connects Databarn Amsterdam (DBA) towards Equinix AM7 (EQX-AM7) is having a planned maintenance. The connectivity between DBA and EQX-AM7 will automatically reroute over our GSA darkfiber during this maintenance.

[#924] Maintenance Transit Centurylink (Maintenance)

Posted: 2020-08-08 00:50

Start: 2020-08-21 02:00:00
End : 2020-08-21 04:00:00

Affects: Traffic towards Centurylink

We have received a notification of a maintenance by one of our transit providers, Centurylink.

During their maintenance all Centurylink traffic will be automatically rerouted over our other transits and peering connections.

Their maintenance reason: The nature of this work is to perform a Network Element software upgrade.

Their reference number: 19350931

For transparency purposes we decided to publish this notification.

[#923] Onapp Hypervisor issues (Status)

Posted: 2020-08-06 21:54

Start: 2020-08-06 20:30:00
End :

Affects: Customer located on our Onapp platform

One of our hypervisors crashed at 20:30. The crash has been resolved and the hypervisor has been back online for some time. We are currently running a disk repair over the cluster, which is needed before VPSes are able to boot; if yours is currently unable to boot, its disk has not been fully synced yet.

Once that is done it should boot up again; the repair is currently at almost 50%.

Update 22:46:
Disk repair is currently at 60%

Update 23:04:
Disk repair is currently at 70%

Update 23:18:
Disk repair is currently at 80%

Update 23:29:
Disk repair is currently at 90%

Update 00:53:
All disks have now been repaired and synced, and all VPSes should work again.

Update 2020/08/07 08:00:
Currently we are having issues on the platform again and are working hard with Onapp support to solve them.

Update 15:33:

Currently all VPSes should be working again. As our vendor is still investigating these crashes, we will not close this case just yet. We will update this case when we are ready to give the all clear.

[#922] Slow speeds towards Reliance Jio Infocomm Limited (AS55836) (Adjustments)

Posted: 2020-07-28 14:27

Start: 2020-07-28 14:27:00
End : 2020-07-28 14:27:00

Affects: Traffic Reliance Jio Infocomm Limited

We have received reports of slow speeds towards Reliance Jio Infocomm Limited (AS55836).

We have denied the path over NL-IX in the routing tables.

The best selected path is now going over AMS-IX.