Informational
[#1006] Removing web access to Ticketing System
Posted: 2023-05-17 09:27
Start: 2023-05-17 09:15:00
End :
Affects: ticketing.nforce.com
We wanted to update you on the removal of external access to our ticketing system.
We have noticed that usage of our ticketing system mostly takes place via email. With that in mind, and considering the following:
- Outdated Code-base: The system relies on an outdated code base that is no longer considered trustworthy.
- Unsupported Operating System: The ticketing system requires an outdated operating system that is no longer supported, leaving it vulnerable to security risks.
- Integration with Customer Portal: Our goal is to replace the ticketing system with an integrated view system for tickets within the customer portal. This streamlined approach will enhance user experience and improve ticket management efficiency.
For the reasons mentioned above we have decided to take immediate action to remove all access to the system.
History Access - While we are working to integrate a view for tickets into our customer portal at https://ssc.nforce.com, you may direct specific requests to our administration department and we may permit temporary access to view your ticket history.
[#990] Retiring NFOrce DNS resolvers
Posted: 2022-05-27 14:10
Start: 2022-05-27 13:53:00
End : 2022-08-01 08:00:00
Affects: Customers still using NFOrce resolvers
On the 1st of August we will be shutting down the resolvers; any customers still using the resolvers at that point will no longer be able to resolve external names.
Below you can find the most popular DNS resolvers which can be used instead of ours:
Google:
8.8.8.8
8.8.4.4
Cloudflare:
1.1.1.1
1.0.0.1
Quad9:
9.9.9.9
149.112.112.112
As we do not limit which resolvers you use, you are free to pick any other you want as well.
If you have any questions or are not sure if you are using ours you can send us an email at support@nforce.com.
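For those who want to verify that one of the resolvers listed above answers queries from their server, below is a minimal illustrative sketch (not an NFOrce tool) in Python using only the standard library; it sends a bare DNS "A" query over UDP and reports whether an answer came back. The hostname queried is just an example.

#!/usr/bin/env python3
# Illustrative check that a public DNS resolver answers queries.
import socket
import struct

def build_query(hostname: str, query_id: int = 0x1234) -> bytes:
    # Header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

def check_resolver(resolver_ip: str, hostname: str = "nforce.com") -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(3)
        sock.sendto(build_query(hostname), (resolver_ip, 53))
        try:
            response, _ = sock.recvfrom(512)
        except socket.timeout:
            return False
    # Matching ID, RCODE 0 and at least one answer record means it resolved.
    resp_id, flags, _qdcount, ancount = struct.unpack(">HHHH", response[:8])
    return resp_id == 0x1234 and (flags & 0x000F) == 0 and ancount > 0

if __name__ == "__main__":
    for ip in ("8.8.8.8", "1.1.1.1", "9.9.9.9"):
        print(ip, "answers" if check_resolver(ip) else "no answer")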
[#656] Updates to our Privacy Policy
Posted: 2018-05-23 16:30
Start: 2018-05-25 00:00:00
End : 0000-00-00 00:00:00
Affects: N/A
The updated versions of our agreements, policies and guidelines are available on our legal page at https://www.nforce.com/legal and will provide you with more insight into what we do with your personal data.
Your security, as well as your privacy, has always been important to us. We will continue to stay committed to protecting your data.
[#570] Route leaks from ASn 7713
Posted: 2017-12-16 12:27
Start: 2017-12-16 00:00:00
End : 2017-12-16 23:59:59
Affects: Abnormal routing
We noticed that routes from Aorta in particular were being re-announced from their infrastructure, causing high latency and unstable connectivity to Aorta networks. We have also received reports of other routes being affected; most likely this is causing random issues globally.
We will keep monitoring this network over the coming hours.
*** Please be aware that this issue/incident is outside of our scope and is a global issue on the internet, originated by ASn 7713. ***
2017-12-16 14.50:
We have received reports from global sources that ASn 7713 still has not resolved their route leaking. Therefore we have filtered out ASn 7713 on our network from all peering exchanges.
An example notice from NL-IX:
"We observe that AS 7713 (PT Telekomunikasi Indonesia Int) is reannouncing plenty of prefixes from their upstreams (GTT, NTT, DTAG, KPN, and so on) via the NL-IX route servers, hijacking plenty of prefixes (16 dec at 2:30 CET) toward their NL-IX port in Marseille (well, not really hijacking prefixes in fact, but stealing the traffic)."
[#509] Powergrid outage
Posted: 2017-08-14 20:05
Start: 2017-08-14 14:19:00
End : 2017-08-14 15:39:00
Affects: Databarn Amsterdam
Please note that this is merely an informational notification; no incidents in our network and hosting infrastructure were detected. However, we felt the need to post news on this outage of the power grid directly adjacent to the DBA DC in Amsterdam. A midstation on the Liander power network was affected at 14.19 CEST.
At this moment we can confirm that the DBA datacenter in which we have presence did not show any sign of outage due to the midstation failure. Also none of our transit providers seem to be directly affected either. Neither do we see any decrease in bandwidth throughput.
We received notifications from the DBA hosting facility that they were running on backup power since the start of the incident 14.19 CEST and have shifted back to the power grid at 15.39 CEST with no incidents reported.
DBA staff were present throughout the power-grid outage to monitor the situation and will continue to do so in the coming hours. They confirm no services were affected and report that the shift to and from UPS/generator power at the given times went as it should.
[#417] Major power outage North Holland
Posted: 2017-01-17 07:50
Start: 2017-01-17 04:18:00
End : 2017-01-17 07:18:00
Affects: Wide spread, 3rd parties
Dear customer, please note that this is merely an informational notification. No incidents in our network and hosting infrastructure were detected. However, due to the size of the outage in the Amsterdam region and customers asking whether anything is affected, we decided to publish an update regarding this.
Datacenter infrastructure, hosting and IP transit:
Please be aware that there is a major power outage in the North of Holland which also affects the power provided to the datacenters in that region.
At this moment we can confirm that all affected datacenters in which we have presence did not show any sign of outage. Also none of our transit providers seem to be directly affected either. Neither do we see any decrease in bandwidth throughput.
We received notifications from the DBA hosting facility that they were running on backup power since the start of the incident 04.18 and have just shifted back to the power grid at 07.18.
Fiber to the business internet access lines:
We have not yet been informed of customers with Fiber Internet Access lines in those regions being affected. It is, however, quite likely that street cabinets containing the switches for those buildings were affected as well, but due to the nighttime hours no reports have come in yet.
News reference from nltimes.nl:
Over 350,000 without power in Amsterdam area; trains shut down
An early morning power outage in parts of Noord-Holland left hundreds of thousands without electricity and failures on the Dutch rail system on Tuesday. The problem started before 4:45 a.m. causing the lights to go dark in 364,000 households across Amsterdam, Zaandam, Oostzaan and Landsmeer, Dutch utility Liander confirmed on Twitter.
The cause of the outage was not revealed. Workers successfully restored power to 164,000 households by 6:30 a.m. An hour later there were about 88,000 customers without electricity, Liander said.
Meanwhile, there are no trains running in or out of Amsterdam Centraal. Train provider NS said it was not known when the train service would begin again. "This also has consequences for the rest of the country." NS warns. "Unfortunately there is still little information about the course of this disruption. We will keep you informed as much as possible. It is not known how long this situation will last."
The power outage also brought trains around Almere to a standstill. Signaling problems mean that no train traffic is possible, according to Omroep Flevoland.
A high volume of phone calls meant many had trouble registering their complaints at Liander, but the firm said they were fully committed to getting electricity to all homes and businesses. According to Liander's website, the disruption is expected to be fixed around 9:00 a.m., though this is just an estimation. "It's hard to say how long it will take. We are now working to turn everyone on step by step", a spokesperson for the utilities company said to broadcaster NOS.
This power outage means no heating for an undetermined number of people. And with the morning temperatures being around -4 degrees in Amsterdam and surrounds, it could make for a difficult morning.
The Amsterdam police are aware of the power outage, they said on Twitter. "Only call the emergency services in case of an emergency", the police tweeted.
Reference: http://nltimes.nl/2017/01/17/350000-without-power-amsterdam-area-trains-shut
Reference: http://nltimes.nl/2017/01/17/tough-rush-hour-ahead-amsterdam-trams-subway-trains-service
Dutch news reference: http://nos.nl/artikel/2153383-grote-stroomstoring-regio-a-dam-ochtendspits-zwaar-getroffen.html
Update 10.55 from Global Switch:
From approximately 04.00 AM to 08:30 AM this morning, the city of Amsterdam experienced a power outage. Global Switch confirms that all their Uninterrupted Power Supply systems have been working properly and there have been no incidents of power loss.
The grid company restored the power supply and we can confirm we are back in normal operation.
[#398] Liberty Global issues
Posted: 2016-11-10 12:43
Start: 2016-11-10 10:55:00
End : 2016-11-10 12:00:00
Affects: Traffic going over LINX for AS6830
LINX has disabled the BGP sessions AS37678 had with the route servers, which resolved this situation.
[#331] Australia submarine fiber cut
Posted: 2016-02-08 09:12
Start: 2016-02-07 00:00:00
End : 0000-00-00 00:00:00
Affects: Australia connectivity
Please allow time for IP transit providers and peering partners to reroute traffic over Japan and Southern Cross.
---
Submarine cable cut lops Terabits off Australia's data bridge
| The PPC-1 cable is out of service until March ... if a ship to fix it can be found |
Another of the submarine cables connecting Australia to the world, for data, has broken.
PPC-1, which stretches from Sydney to Guam and has 1.92 terabits per second capacity, is out of service until at least March 7.
TPG's announcement says the fault is around 4,590 km from the cable's Guam landing, which means it's around 3,000 metres below the surface.
The fault notice says engineers first logged a report that "alarms indicated that a submarine line card had lost its payload", and the company is trying to establish when a repair ship can be dispatched to the location.
In the meantime, traffic is using alternate routes including the Australia-Japan Cable and Southern Cross.
Last year, the SeaMeWe-3 cable which runs from Perth to Asia via Indonesia suffered multiple outages.
The situation is complicated by the Basslink cable outage. As Vulture South reported last week, repairing the electrical cable connecting Tasmania to the mainland is going to necessitate a visit by the cable repair ship Ile de Re, because Basslink's communication fibre is going to be cut during the operation.
The Ile de Re would be the default repair ship for PPC-1, so there's likely to be a lot of messages flying around working out whether it can fit a trip to Guam into its schedule, or if another ship has to be called in.
The cut also represents a challenge to PPC-1's owner, TPG, as the telco and ISP has recently offered vastly increased download allowances for its customers. That's the kind of thing an integrated carrier that owns a submarine cable can do. TPG's investors will be hoping it's also invested in lots of local caching and contracts for backup bandwidth, because if it hasn't, the cost of landing the data promised to users will soon stack up.
News reference: http://www.theregister.co.uk/2016/02/07/cable_cut_lops_terabits_off_australias_net_connectivity/
[#296] BHARTI Airtel Ltd. internet hijack
Posted: 2015-11-07 10:06
Start: 2015-11-06 05:52:00
End : 2015-11-06 14:40:00
Affects: World wide, mostly India region
Please be aware that this was a worldwide issue caused by Airtel Ltd. and not an NFOrce-specific issue. In fact, as we (NFOrce) are directly connected with hundreds of networks, even if they were to leak/announce any of our prefixes, the effect would most likely be limited to BHARTI Airtel Ltd. customers only. However, we are sending this notification as we can imagine you noticed issues on the internet in general yesterday without knowing what was going on.
We can, however, conclude this was most likely "just" a misconfiguration, as they announced exactly the same prefixes as originally announced by the legitimate providers. If they wanted to hijack specific networks on purpose, they would have announced their prefixes as "more specifics" (smaller prefixes that have priority in BGP routing). Next to that, they would not have re-announced such a large number of prefixes.
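As background on the "more specifics" remark above: forwarding follows longest-prefix match, so a smaller (more specific) prefix always wins over a larger covering one. A minimal illustrative sketch in plain Python (standard-library ipaddress only; the prefixes and the hypothetical hijack below are made up for illustration):

# Illustrative only: why a "more specific" announcement attracts traffic.
import ipaddress

routes = {
    ipaddress.ip_network("203.0.113.0/24"): "legitimate origin announcement",
    ipaddress.ip_network("203.0.113.0/25"): "hypothetical more-specific hijack",
}

def best_route(destination: str):
    addr = ipaddress.ip_address(destination)
    candidates = [net for net in routes if addr in net]
    return max(candidates, key=lambda net: net.prefixlen)  # longest prefix wins

chosen = best_route("203.0.113.10")
print(f"Traffic to 203.0.113.10 follows {chosen}: {routes[chosen]}")

In the Airtel case the re-announced prefixes were identical in length, so they only attracted traffic where Airtel's path happened to be preferred, which is why the impact stayed relatively contained.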
Please see a list of most impacted networks below ( source: http://www.bgpmon.net/large-scale-bgp-hijack-out-of-india/ ):
AS20940 & AS16625 & AS35994 – Akamai International,
AS7545 – TPG Telecom Limited,
AS8402 – OJSC Vimpelcom,
AS39891 – Saudi Telecom Company JSC,
AS45528 – Tikona Digital Networks Pvt Lt,
AS24378 – Total Access Communication PLC
AS4755 – TATA Communications
AS7552 – Viettel Corporation
AS9605 – NTT DOCOMO, INC.
AS2914 – NTT America, INC.
AS3257 – GTT
AS714 – Apple Inc
[#287] Hurricane Joaquin
Posted: 2015-10-02 12:49
Start: 2015-10-05 00:00:00
End : 2015-10-08 00:00:00
Affects: n/a
Please see below the official statement from Zayo/Abovenet.
As you may know, Hurricane Joaquin is currently on a trajectory to make landfall in the Northeastern U.S. between Monday evening and Tuesday afternoon. Zayo is taking all necessary actions to prepare and protect our network and facilities in the event of storm-related impacts. The hurricane has currently reached Category 4 status with sustained winds of 130 miles per hour and hurricane force winds extending outward up to 45 miles from the center of the hurricane. According to some models, it is expected to make landfall in the northeast region of the country (between North Carolina and Massachusetts depending on the final trajectory) and is expected to bring heavy rains and localized flooding to Eastern US states, which have already experienced significant rainfall this week.
As Zayo’s network includes aerial and buried cable throughout the northeast region, the greatest threats are fiber cuts and potential widespread power outages in the impact zones. All of our network locations in the region have generator backup for AC power redundancy, complemented by DC battery backup to ensure ongoing operations in the event of commercial power failures. In anticipation of commercial power outages and other potential network impact, Zayo has taken all necessary precautions in preparation for this storm, including:
• Confirming that all sites along the network route have generators that are in proper working order and filled with fuel
• Preparing portable generators for transport (i.e. fuel, batteries, testing), and acquiring additional portable generators in potential impact areas.
• Confirming that all of our generator repair contractors and fuel supply agencies are on standby and able to assist us as needed
• Having our field technicians prepare their trucks for emergency response- loaded with tools, consumables, emergency supplies, topped off with fuel, and additional fuel available for portable generators
• Field technicians obtaining cash for emergency purchases in the event that Point of Sale systems are not functioning
• Reviewing resource availability for our outside resources (field, outside plant), and temporarily relocating resources as needed to ensure maximum response time
• Preparing access credentials / materials for Zayo resources to enter areas declared to be a state of emergency
• Preparing our smaller portable generators (e.g. 5500 watts) for deployment in our Fiber to the Tower network
• Reviewing inventory of spare / replacement fiber cable and associated hardware for hanging / lashing, engaging fiber restoration contractors to ensure appropriate resources, test and repair equipment are available to respond to multiple events in parallel if needed
• Identifying all planned maintenance activities through the weekend and into next week with preparations to cancel all non-critical maintenance activities should the hurricane present a threat to the network in any area
• Reviewing all network layers (DWDM, SONET, Ethernet, IP, FTT) to ensure there are no network devices with standing impairments (i.e. loss of redundancy, intermittent errors) and ensuring the network is clean and green in all potential impact areas
• Reaching out to all of our underlying fiber providers to ensure that they have made necessary preparations, are prepared to respond, and are prepared to cancel on non-critical planned maintenance activities
• Augmenting NCC staffing levels throughout the weekend and into next week in anticipation of higher call volumes and network activities
• Creation of customer impact lists for major routes and facilities in the area, allowing us to immediately communicate with customers if there is network impact, and open trouble tickets for all affected services via automated tools
At this point, we do not anticipate service impact related to this storm though we will continue to monitor the situation closely and maintain our preparations with increased staffing levels. The Zayo operations centers will be available 24/7 throughout this event and reachable at ***. Additionally, the current Zayo escalation lists are attached and the operational management team will also be immediately reachable at all times for any customer questions or concerns. Zayo will continue to provide further updates on our preparations and network operations until the threat of this storm has passed via our Twitter feed at *** and through additional direct communications as necessary. Please feel free to contact us with any specific questions or concerns that you may have.
[#271] Bug causing Brocade routers to crash
Posted: 2015-08-14 10:34
Start: 2015-08-14 10:34:14
End : 2015-08-14 10:34:14
Affects: World-wide, Brocade MLXe/XMR
During June 2015 our NOC team at NFOrce discovered a bug that affects all Brocade MLXe/XMR routers which are running the latest firmware releases. This bug causes these routers to instantly crash and reload, continuously.
We have been debugging this issue further and we can now conclude that this bug is present in the 5800b release (rls date 2015-05-22) and 5700d (rls date 2015-07-13). We are reasonably sure that versions 5600f, 5600fb, 5600b and 5600ff (rls date 2015-07-15) are also affected and thereby vulnerable. This means that initially only the 5800 tree was affected and that since July this year the 5600 and 5700 trees have become affected and vulnerable as well.
These crashes are caused by specific configurations set in the announcements done by remote BGP peers. These configuration settings are most likely set by mistake or by providers who do not clean up their network announcements before (re)announcing them to their peers.
As this now affects all the latest versions in the mostly used trees, we expect that this will cause a more global issue in the coming weeks when providers are upgrading their routers to these latest firmware releases.
For security reasons we will not publicly publish how to replicate this bug, especially as this bug causes instant crashes and reloads and can be caused by any remote BGP peer. However if you are affected by this bug you can contact our NOC and we are more than happy to help you resolve this.
Please note that we are not and were never at risk/affected, as we filter out the affected settings before having our routers install these routes. We would, however, like to warn everyone not to upgrade to these firmware versions and to wait for future releases.
[#262] Transit provider maintenance
Posted: 2015-06-26 19:42
Start: 2015-06-27 06:00:00
End : 2015-06-27 10:00:00
Affects: Routing GTT
During their maintenance, traffic will automatically reroute over the other transit providers. We do not expect any issues.
Their maintenance reason is "Emergency planned work to perform critical upgrade on core device in Amsterdam. We apologize in advance for any inconvenience this might cause.".
For transparency purposes we decided to publish this notification.
[#244] Wave service maintenance
Posted: 2015-04-18 10:27
Start: 2015-05-18 23:00:00
End : 2015-05-19 07:00:00
Affects: Routing DE-CIX
Since March 2015 we have set up additional connectivity towards our exchanges, such as the DE-CIX, over other wave service providers. This is due to the many recent maintenances on paths towards the DE-CIX.
Therefore during this maintenance traffic will now automatically reroute over our other DE-CIX connection and other peering exchanges or transit providers. We do not expect any issues.
Notification from KPN International:
Fiber maintenance by our partner carrier in Germany
Expected outage duration: 420 minutes
[#236] Major power outage North Holland
Posted: 2015-03-27 10:30
Start: 2015-03-27 09:30:00
End : 2015-03-27 11:00:00
Affects: Wide spread, 3rd parties
Datacenter infrastructure, hosting and IP transit:
Please be aware that there is a major power outage in the North of Holland which also affects the power provided to the datacenters in that region.
At this moment we can confirm that all affected datacenters in which we have presence did not show any sign of outage. Also none of our transit providers seem to be directly affected either. Neither do we see any decrease in bandwidth throughput.
Fiber to the business internet access lines:
We have however been informed that customers with Fiber Internet Access lines in those regions are affected, as the street cabinets containing the switches for those buildings are without power at this moment. This, however, is up to the fiber infrastructure providers to resolve and not within our care; if it were, we would most certainly have placed UPS systems in each street cabinet. Most likely this will be resolved only after the power comes back up.
News reference from RT.com:
Large areas of the Netherlands have been left without electricity. The north of the country has been mainly affected, with the Dutch electricity network operator saying the outage has been caused by a power grid overload.
The capital Amsterdam has been affected, along with the area around Schiphol Airport. Twitter has reported that some hospitals in Amsterdam have been left without power, while the tram and metro networks are also not running in the capital. The Dutch electricity network operator TenneT says on its website that the outage has been caused by the power grid becoming overloaded. A spokeswoman for the airport said it did suffer a temporary outage, but is now running on backup power, the ANP news agency reports.
Thousands of people have been stuck in trains and trams, as well as on the subway, because the doors will not open.
Reference: http://rt.com/news/244537-power-outage-north-holland/
Update 11.00:
In the past 15 minutes we have started receiving reports from multiple sources of power being restored. We therefore expect that either the outage is resolved or power has been rerouted and the affected areas narrowed down as much as possible.
[#223] Winter storm Juno
Posted: 2015-01-27 09:11
Start: 2015-01-27 00:00:00
End : 2015-01-29 00:00:00
Affects: N/A
As you may know, Winter Storm Juno is rapidly developing and expected to become a major snowstorm today affecting the Northeastern US, presenting blizzard conditions and creating more than 2 feet of snow in areas from eastern Pennsylvania to New England, with widespread snowfall across the Northeast. The National Weather Service has issued blizzard warnings in advance of the storm in anticipation of potential historic snowfall amounts and damaging wind gusts, with significant potential for widespread power outages and extremely limited travel ability throughout the region.
Key Points of the storm:
• Moderate to heavy snow likely from portions of the coastal Mid-Atlantic (New Jersey, far eastern Pennsylvania) to New England.
• Peak impacts late Monday through Tuesday night.
• Widespread accumulations of 1 to 2 feet likely with some areas picking up over 2 feet. Snow drifts will be even higher.
• Blizzard or near-blizzard conditions will make travel dangerous and, in some areas, impossible.
• Over 3,500 U.S. flights cancelled or rescheduled
From a network perspective, this storm presents the risk of network impact due to widespread and extended power outages, coupled with the potential of fiber cuts due to falling trees and ice loading where Zayo has aerial cable facilities (primarily in the rural areas of Pennsylvania). In preparation for this storm, Zayo has initiated a Hazcon 3 condition and completed the following preparations:
• Confirming that all sites within the potentially affected area have generators that are in proper working order and filled with fuel
• Ensuring that refueling agencies are available to provide ongoing generator refueling in the event of extended power outages
• Acquiring additional temporary generators as backup power sources in the event of primary generator failures
• Fueling of all field and OSP technician vehicles including having cash on hand for emergency purchases and supplies (food, warm clothing, and snow removal equipment) for the inclement weather.
• Reaching out to vendors, contractors, and colocation providers throughout the region to confirm their state of readiness
• Cancellation of all planned maintenance activities within the area or elsewhere in the network that may result in impact to diverse or protected customer services if the network is impacted by the winter storm; all maintenances are being canceled through Thursday, with additional cancellations likely depending on the conditions created by the storm.
• Engagement of local fiber repair contractors to review emergency dispatch and restoration procedures, confirming that appropriate fiber testing and repair materials (OTDR, fiber hardware, fiber reels, heavy equipment for transport and excavation) are on hand
• Adjustment of NCC Staffing to provide additional resources to handle increased call volumes and assist customers during the period of potential storm impact
• Creation of customer impact lists for major routes and facilities in the area, allowing us to immediately communicate with customers if there is network impact, and open trouble tickets for all affected services via automated tools
At this point, we do not anticipate service impact related to this storm though we will continue to monitor the situation closely and maintain our preparations with increased staffing levels and heightened awareness of our network facilities in the vicinity of the affected areas. The Zayo operational centers will be available 24/7 throughout this event. Additionally, the current Zayo escalation lists are attached and the operational management team will also be immediately reachable at all times for any customer questions or concerns. Zayo will continue to provide further updates on our preparations and network operations until the threat of this storm has passed. Please feel free to contact us with any specific questions or concerns that you may have.
[#188] Global transatlantic fiber cut
Posted: 2014-11-28 11:08
Start: 2014-11-19 22:30:00
End : 2014-12-01 16:30:00
Affects: Routing capacity EU-US
Update 2014-11-23 23:00
The repair ship has located the submarine cable and has begun testing to isolate the location of the break on the fibre. There is no ETR at this time.
Update 2014-11-28 10:30
COTDR testing of the initial splice from the Dublin terminal was completed in the last hour. The final splice will commence shortly after the aforementioned testing. The cable repair ship will now retrieve the buoyed-off side of the cable facing Southport and bring it on board in order to begin the final splice process. Testing from both terminal sites will be carried out following the final splice completion notice from the cable repair ship.
Update 2014-11-28 22:00
Repair ship stuck in bad weather, work on hold until further notice.
Update 2014-12-01 16:30
The outage impacting part of our transatlantic links was finally resolved this morning, and the repaired cable is again on the seabed. You should not experience further issues due to this. Apologies for any inconvenience caused.
[#175] Internet routing 512K limitation
Posted: 2014-08-15 17:34
Start: 2014-08-12 00:00:00
End : 2014-09-01 00:00:00
Affects: The internet
Most people and companies were afraid of the issues the year 2K might bring due to possible bugs in software. Others got scared because of the major Cisco and Juniper router bugs that occurred in the past years, which brought down a large portion of the internet. And then there are some people that are always scared, I guess...
Anyways, now the time has come that the internet will actually start to cripple and become unstable for a couple of weeks. This is what we have been warning many clients and other ISPs about in the past years. Many have acted upon this, but we can say for sure that most have not acted yet.
In the past years everyone heard (or should have) about the depletion of the IPv4 address space. Registries such as RIPE, who distribute those address blocks, thought it was a smart move to start limiting the size of each subnet that they distribute to their members who request additional address space. This has resulted in many subnets sized from a /22 (1024 IP addresses) down to a /24 (256 IP addresses) being distributed. Providers need to use these subnets across their infrastructure, and as many providers have their infrastructure widely spread, this required them to divide those subnets into smaller blocks. This is normal behavior; however, as the subnets they received were already very small, there was not much to divide. Providers had to start dividing them into the smallest blocks that are allowed to be routed on the internet, which are /24s (256 addresses).
Up till a few years ago, there were about 400K routes active on the internet (see reference 1). Since last Tuesday we have passed 512K routes. This poses a serious problem all over the internet, as many routers have a hardware limit of 512K routes that can be installed in the forwarding table of their line cards. Every brand and type of router will act differently when this limit is hit: we have seen Cisco routers crash/reload (continuously), and other routers simply not installing new routes, causing unavailability of parts of the internet. Last Tuesday we were given a preview of these issues. Major providers like Level 3, AT&T, Cogent, Sprint and Verizon were having serious issues, and surely there were many more that should be added to this list.
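To make the arithmetic behind this growth concrete, here is a small illustrative sketch (standard-library Python; the /22 shown is just an example prefix) of how one small allocation, announced as /24s, multiplies the number of entries in the global table:

# Illustrative only: one small allocation split into /24 announcements.
import ipaddress

allocation = ipaddress.ip_network("198.51.100.0/22")  # a single /22 (1,024 addresses)
more_specifics = list(allocation.subnets(new_prefix=24))

print(f"{allocation}: {allocation.num_addresses} addresses, 1 route if announced whole")
print(f"announced as /24s it becomes {len(more_specifics)} routes:")
for net in more_specifics:
    print(f"  {net} ({net.num_addresses} addresses)")
# Every provider doing this pushes the global table toward the 512K FIB limit.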
The crashes and other issues will be seen in the coming weeks all over the internet. We expect that from time to time there will be, as we call it, BGP flaps all over the internet. This will cause routes to change back and forth for a few minutes or even hours, which in turn could cause a domino effect or at least a non-optimal set of routes on the internet.
It could be that many people will not notice this. But if you can not reach your local bank website, your company email or you just cannot reach your girlfriend by phone... Think about what we have said. And do not blame your local ISP -just yet-, as it can very much be any of the other 70.000 Internet Service Providers out there. Any provider that has been given an AS (Autonomous System) number and is between you and your destination (and/or back) can be causing these issues. Just wait a minute and retry.
We can now officially call the 12th of August 2014, the "512k day" (reference 5).
References:
1. CIDR Active BGP entries (FIB): http://www.cidr-report.org/cgi-bin/plota?file=/var/data/bgp/as2.0/bgp-active.txt&descr=Active BGP entries (FIB)&ylabel=Active BGP entries (FIB)&with=step
2. http://online.wsj.com/articles/y2k-meets-512k-as-internet-limit-approaches-1407937617
3. http://www.renesys.com/2014/08/internet-512k-global-routes/
4. http://arstechnica.com/security/2014/08/internet-routers-hitting-512k-limit-some-become-unreliable/
5. http://en.wikipedia.org/wiki/512K
[#157] DE-CIX peering exchange
Posted: 2014-04-29 13:22
Start: 2014-04-29 13:00:00
End : 2014-04-29 13:00:00
Affects: Additional routes
In the next weeks we will merely peer with the so-called route servers at the DE-CIX and monitor the results. During mid/end of May we will start setting up direct peering sessions.
[#153] INDOSAT-INP-AP internet hijack
Posted: 2014-04-03 08:48
Start: 2014-04-02 20:00:00
End : 2014-04-02 23:00:00
Affects: World wide, mostly Thailand region
Please be aware that this was a worldwide issue caused by INDOSAT and not an NFOrce-specific issue. In fact, about 400.000 routes were affected. This is nearly the whole internet, as the grand total of available internet routes is ~490.000 at this moment.
We can, however, conclude this was most likely "just" a misconfiguration, as they announced exactly the same prefixes as originally announced by the legitimate providers. If they wanted to hijack specific networks on purpose, they would have announced their prefixes as "more specifics" (smaller prefixes that have priority in BGP routing).
We received hijack reports from the following network monitoring sources:
#1 AS4761 (INDOSAT-INP-AP INDOSAT Internet Network Provider,ID)
#2 AS4651 (THAI-GATEWAY The Communications Authority of Thailand(CAT),TH)
#3 AS38794 (BB-Broadband Co., Ltd. Transit AS)
#4 AS18356 (AWARE-AS-AP)
#1 = the network that caused all this.
#2 = the network that blindly accepted their mistaken routes.
#3 & #4 = networks that were reported to be using the mistaken routes.
Surely there are many more, but the above is what our monitoring reported back to us.
Please see a more detailed report below ( source: http://www.bgpmon.net/hijack-event-today-by-indosat/ ):
What happened?
Indosat, AS4761, one of Indonesia's largest telecommunication networks normally originates about 300 prefixes. Starting at 18:26 UTC (April 2, 2014) AS4761 began to originate 417,038 new prefixes normally announced by other Autonomous Systems such as yours. The 'mis-origination' event by Indosat lasted for several hours affecting different prefixes at different times until approximately 21:15 UTC.
What caused this?
Given the large scale of this event we presume this is not malicious or intentional but rather the result of an operational issue. Other sources report this was the result of a maintenance window gone bad. Interestingly we documented a similar event involving Indosat in 2011, more details regarding that incident can be found here: http://www.bgpmon.net/hijack-by-as4761-indosat-a-quick-report/
Impact
The impact of this event was different per network, many of the hijacked routes were seen by several providers in Thailand. This means that it's likely that communication between these providers in Thailand (as well as Indonesia) and your prefix may have been affected.
One of the heuristics we look at to determine the global impact of an event like this is the number of probes that detected the event. In this case, out of the 400k affected prefixes, 8,182 were detected by more than 10 different probes, which means that the scope and impact of this event was larger for these prefixes.
The screenshot below is an example of a Syrian prefix that was hijacked by Indosat where the "hijacked" route was seen from Australia to the US and Canada.
Screenshot: http://www.bgpmon.net/wp-content/uploads/2014/04/Screen-Shot-2014-04-02-at-10.53.13-PM.png
[#152] Telephone connections office unreachable
Posted: 2014-03-05 13:06
Start: 2014-03-05 12:30:00
End : 2014-03-05 18:00:00
Affects: Telephones office
At around 18.00 a KPN engineer reported the problem resolved. The issue was a defective port on their side.
[#150] Filtering rules NTP
Posted: 2014-02-26 00:00
Start: 2014-02-26 00:00:00
End : 2014-02-26 00:00:00
Affects: Specific NTP source traffic
We cannot disclose publicly what we have implemented. However, if you are having any issues with the NTP protocol (reaching remote sources outside NFOrce's network running this protocol), please let us know and we will resolve this with you. We have tested, however, and it does not look like anyone is affected by this.
[#134] Statistics service missing one hour
Posted: 2013-11-06 17:22
Start: 2013-11-06 16:00:00
End : 2013-11-06 17:00:00
Affects: Statistics
At 17.00 we did a resync of the pollers' local raw-data databases with the main processed-data database. We are missing 45 minutes of data.
In the graphs this will show as non measured, and might look like a traffic drop. Please be assured there was no network impact, the issue is only related to the graphs themselves.
[#133] IPMI vulnerability bug
Posted: 2013-08-18 12:23
Start:
End :
Affects: IPMI enabled servers
We have currently blocked UDP port 623 to prevent abuse on IPMI enabled servers.
In the next days we will be doing scans on our network and contacting our customers if they are vulnerable. We will help them resolve this problem.
Reference: https://community.rapid7.com/community/metasploit/blog/2013/07/02/a-penetration-testers-guide-to-ipmi
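If you want to check for yourself whether one of your servers still answers IPMI traffic on UDP port 623, below is a minimal illustrative sketch (not an NFOrce tool) in Python using only the standard library. It sends a standard RMCP/ASF presence ping, the same kind of probe used by common IPMI discovery tools; the target address is a placeholder you would replace with your own server.

#!/usr/bin/env python3
# Illustrative probe: does this host answer IPMI/RMCP on UDP port 623?
import socket

HOST = "192.0.2.10"  # placeholder, replace with a server you own

# RMCP header (version 6, no-ack sequence, ASF class) + ASF presence ping
ASF_PING = bytes([0x06, 0x00, 0xFF, 0x06,   # RMCP header
                  0x00, 0x00, 0x11, 0xBE,   # ASF IANA enterprise number (4542)
                  0x80, 0x00, 0x00, 0x00])  # type=ping, tag, reserved, data length

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(3)
    sock.sendto(ASF_PING, (HOST, 623))
    try:
        data, _ = sock.recvfrom(512)
        print(f"{HOST}: {len(data)}-byte reply, IPMI/RMCP is reachable")
    except socket.timeout:
        print(f"{HOST}: no reply, port 623 filtered or IPMI disabled")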
[#88] Fibercut New York - Chicago
Posted: 2012-05-08 09:10
Start:
End :
Affects: Network congestion
[#79] General network
Posted: 2012-04-20 20:37
Start:
End :
Affects: DBN
If you are having any speed issues in comparison to other ISPs in NL, please send an email to noc@nforce.com so we can debug the situation. This would be much appreciated.
[#71] Cogent transit
Posted: 2012-02-01 00:00
Start:
End :
Affects: None
We have added 20 GBIT of Cogent capacity.
[#70] Edpnet transit
Posted: 2011-12-01 00:00
Start:
End :
Affects: None
We have added 20 GBIT of Edpnet capacity.
[#47] TiNET transit
Posted: 2011-08-31 23:09
Start:
End :
Affects: None
We have added 40 GBIT of TiNET capacity.
[#41] Global Switch Amsterdam - fire
Posted: 2011-07-30 17:00
Start:
End :
Affects: No effect
We wish to advise you of an incident at the Global Switch Amsterdam Data Centre.
This incident did not affect your equipment on site in any way, but as always, we would wish to keep you fully informed.
At around 08.00 CET this morning we had an electrical fault in a panel which feeds some of the CRAC units conditioning hall 1. The fire brigade attended and once the fault had been cleared the fire was extinguished. There was no disruption to the power feeds to any client's load, but the power to half of the CRAC units was lost and a significant amount of smoke was produced.
A temporary supply has been connected to the CRAC units and the hall temperature is slowly being brought back under control. A cleaning company has been called in to clean any minor smoke damage created by the fire.
This problem affected the Hall 1 / data 2 area ONLY; no other parts of the building were impacted in any way whatsoever.
[#20] Japan Quake
Posted: 2011-03-11 17:48
Start:
End :
Affects: Japan
Read further information here: Renesys
[#19] Earthquake causing global routing issues
Posted: 2011-03-11 09:00
Start:
End :
Affects: Multiple Transpacific Circuits
At 5:46:24 UTC, according to the US Geological Survey, a magnitude 8.9 earthquake struck off the east coast of Honshu, Japan, followed by a series of aftershocks measuring greater than magnitude 6. Given the scale of the earthquake, and the continuing seismic activity, the full impact of this event is not yet known.
This disaster has had an impact on some Trans-Pacific cable systems as well as some domestic fibre infrastructure in Japan. There is a high probability that traffic traversing the Trans-Pacific will be adversely impacted during peak times. There is also a likelihood that connectivity to various destinations in Japan may be adversely impacted as well.
As this is a developing situation, at this point there is no estimated time for resolution. We will be issuing updates as we obtain new information and the situation clarifies itself.
Outage Reason:
International Unplanned Outage - Multiple Transpacific Circuits
Thank you for your understanding.
[#11] Egypt leaves the internet (and returns)
Posted: 2011-01-27 23:00
Start:
End :
Affects: Egypt
Read further information here: Renesys
On the 2nd of February their routing services returned. Read here