Live Network Status & Maintenance Information

View this page to check the status of any issues that may affect multiple clients or systems within our network. Below you'll find the latest details on our current network status and any planned system maintenance or upgrades. An RSS feed of the notices below is also available.

If you experience service issues other than those listed below, please go to your Client Portal and submit a ticket or call our 24/7 support team.

Current Issues & Planned Maintenance

2022-03-11 :: Plesk Licensing
Each year, Plesk raises its licensing fees. Until now, we have kept our monthly Plesk licensing fees static. If you have Plesk licensing from us, you will see the following changes in your next renewal period: Plesk 10 Domains $29/mo; Plesk 30 Domains $39/mo; Plesk Unlimited Domains $59/mo. Thank you for choosing Netsonic!
2022-01-03 :: 2:00 PM CST DNS Issue Resolved
The DNS issues with ns1 and ns2 have been corrected. A backend resolver was preventing the name servers from resolving domains that use ns1 and ns2. If you see any further issues, please submit a ticket and we will investigate.
2022-01-03 :: 1:25 PM CST DNS Issue
We are working on an issue with the ns1 and ns2 name servers that is causing latency in resolution.
2021-12-15 :: All Systems Normalized
7:00 AM CST All systems have been stable and normalized for some time now. We believe this was a byproduct of the Log4j exploit that hit zero-day last Friday night. While the cloud platforms, clusters and host machines are not susceptible, there can be downwind effects, which we are looking into further. You will want to check your system(s) for possible vulnerabilities, apply patches and keep your system(s) updated as much as practical. If you need managed services to assist with this, simply submit a support request ticket.
2021-12-14 :: DNS Issue - ns1 and ns2
5:00 PM CST - We have all DNS services running at this time. We will be looking into additional hardening options going forward to enhance resiliency and performance. 1:00 PM CST - DNS services have been mostly restored and should allow for name resolution. You may see some minor latency as we get the remaining services back online. 11:00 AM CST - We are currently working an issue related to ns1 and ns2 which is causing some resolution issues for zones using these name servers.
2021-11-19 :: All Systems Normal and Green
All systems are NORMAL and GREEN.
2020-11-20 :: 11.20.2020 - Cloud04 mail server issues
UPDATE: This issue has been resolved. We are currently working on an issue with the cloud04 mail service.
2020-04-11 :: All shared servers Plesk upgraded to 18.x
4.11.2020 - All Plesk shared hosting servers have been upgraded to the latest Plesk Obsidian, which adds enhanced security, additional features and increased efficiency.
2020-11-20 :: All Systems Normal and Green
All systems are NORMAL and GREEN.
2019-05-15 :: Network Maintenance - Wednesday, May 15, 4:15 AM
We will be replacing a failing linecard in a core router to remediate an issue with it resetting randomly. We apologize for the inconvenience, please bear with us as we work through this issue.
2019-05-07 :: Network Maintenance - Wednesday, May 7, 4 AM

On Wednesday morning we will be performing vendor recommended work on a line-card in a core router that has randomly reset itself recently. We expect a 2-3 minute outage for some segments of the environment as this reset occurs. This is the first step in the resolution and there may be another maintenance window called in the event this work does not fully remediate the issue. We apologize for any inconvenience this short window may cause.

2019-04-12 :: Network Issue - 4.12.19
At approximately 8 AM CST, we began to notice packet loss on the Timewarner side of our connections. The packet loss lasted approximately 4-6 minutes. We are working with the upstreams to determine where the drops occurred.
2018-12-12 :: Network Latency - TimeWarner Outage
12.12.18 1:20 AM CST We are seeing reduced latency as the upstream Timewarner connectivity stabilizes. 12:45 AM CST It appears Timewarner/Spectrum is having a large outage across many geographic regions. This continues to cause high latency and connectivity issues into our network. 12:15 AM CST We are currently experiencing issues with our Timewarner connections that are causing significant latency and packet loss, leading to reachability issues into our network from external locations. We are working with our upstreams to resolve.
2018-10-29 :: Cloud3 Firewall Packet Loss
10.29.18 At approximately 1:15 PM CST, we began to see packet loss on the cloud3 platform firewall. This caused some reachability issues to servers located on the cloud3 platform. This has been resolved for now, and we will be scheduling a short maintenance window to replace the firewall on this cloud.
2018-09-01 :: Unscheduled Cloud03 Maintenance

Between 10:12PM and 11:15PM CST, cloud servers on one node had to be stopped temporarily for two reboots while a hardware issue was cleared up. We apologize for the inconvenience.

At 12:41AM on Monday, cloud servers on another node had to be stopped temporarily for two reboots while a hardware issue was cleared up. We expect this to conclude the unscheduled maintenance on this cluster. We apologize for the inconvenience.

2018-08-12 :: Network Maintenance Issue - Update!

2:30AM - The root cause of the extended outage affecting only some IP subnets is not yet clear. There appears to be a vendor bug in the codebase. More updates to come. Please bear with us as we work through this issue.

3:54AM - One network segment ( subnet) experienced an extended outage due to a firewall issue. We have a plan to permanently remediate this in the near future so it does not reoccur. This issue was unrelated (correlated only by timing) to the other switching problems seen on select subnets.

There were (3) other subnets that experienced an extended outage during the maintenance window, which was due to a confirmed bug in the switching codebase that rippled through other switching layers and had not been seen in previous testing. Confidence was high going into the change window, which is why the original notice estimated the outage time as very low. That said, any code upgrade carries some risk, which is why we schedule early on a Sunday morning to minimize the impact should the window need to be extended. The vendor was engaged during the incident and a successful workaround was implemented.

Again, we apologize for the inconvenience of this unexpected extended maintenance window (which extended to 1hr 45 mins on the previously listed network segments) and the monitoring alerts you will likely have received in the early morning hours keeping you awake as well. We are working diligently to ensure the bug is permanently remediated and does not impact the environment in the future. 

2018-08-06 :: Network Maintenance Sunday August 12, 2:01AM

Beginning at 2:01AM on Sunday August 12, we will be conducting network maintenance which will impact traffic for up to 5 minutes as routes reconverge when edge routing/switching equipment reloads. Even though we expect traffic to be disrupted for a very short time, we understand this is a service interruption and are conducting the maintenance at the point of lowest network utilization to minimize impact.

2018-05-23 :: Netsonic GDPR Information
Netsonic GDPR Information and how it affects you can be found here:
2018-03-28 :: Retired legacy helpdesk portal
The legacy helpdesk portal has been deprecated and retired. Please submit support requests through the portal as needed. This ensures that your support tickets are maintained within the same portal as your account, service and billing details, which we expect to lead to a smoother account-management process.
2018-03-02 :: Network Issue Cloud3
9:50 AM CST - We found an issue with the public-side firewall on the cloud3 platform causing an inability to reach the public-side IPs in the subnet. We have corrected this issue and will continue to investigate to determine the exact cause of the event. We are sorry for the problems this caused. 9:10 AM CST - We are currently investigating a network issue on one of our cloud platforms. If you have an IP in the subnet, you may be seeing the effects of this. We are working the issue.
2018-01-07 :: Cloud Hypervisor Issue
At 4:30AM CST on cloud01, a hypervisor [HV09] was listed as offline in the cluster; however, it did not terminate and migrate the VM's on it. The VM's and the hypervisor remained online from 4:30AM to 6:55AM CST. At 6:55AM we initiated a manual failover to remediate the issue with the hypervisor and allow it to migrate the VM's and reboot as it should have done automatically. This caused (5) VM's to go offline during the reboot event. All of the affected VM's are now back online after approximately 5-10 minutes of downtime, due in part to the sequencing of VM startups during these types of events. We have determined why the hypervisor did not perform an automatic failover and remediation, and we will put permanent fixes in place to help prevent this from reoccurring.
2018-01-05 :: Switching Path Upgrades
01.05.18 - From 8PM to 1AM CST we will be rehoming some switch connectivity as we continue updating parts of the infrastructure to support added capacity and redundancy. You may see brief interruptions (primarily packet loss) not lasting more than a few minutes as we implement these improvements and routes reconverge. 11:30PM CST - Scheduled updates were completed successfully and validated for reachability.
2017-12-15 :: Network Maintenance
12.15.17 - Next Wednesday 12.20.17 from 10PM to 4AM we will be replacing several subnet switches, aggregating ports and adding additional redundant paths for each subnet, switches and cabinets. This will allow for additional failover as well as increased capacity. You may see brief interruptions not lasting more than a few minutes as we implement these improvements.
2017-12-11 :: Network Upgrades
12.13.17 - Network upgrades have been completed. The interruption lasted less than 2 minutes. We have more than tripled our internet connectivity capacity with this upgrade. 12.12.17 Between 10PM CST and 2AM CST we will be performing network and connectivity upgrades. During this time you may see very brief periods of unreachability as we enact these upgrades.
2017-10-24 :: All Systems Green
All systems are fully functional and green at this time.
2017-05-06 :: Cloud03 Storage Work

Beginning at 10PM (CDT) on April 6 and April 7 we will be performing some disk rebalancing which will move disks within the cloud to optimize performance. There will be no downtime during this work, however there could be some slight performance slowing and backups will not be run during this work. The disk rebalancing is expected to run until the early morning the following day.

2016-12-29 :: Cloud03 Maintenance

12/29/2016 - We will be performing cloud maintenance on the HV nodes that your server resides. If your server has an ip in the subnet, you will be affected. This will require us to shutdown your VM for a short period as we conduct the backend upgrades on these nodes. Once completed, we will bring your server back on line. This maintenance window will commence at 10:00 PM CST tonight and conclude by 2:00 AM CST. The upgrades will provide new benefits to your cloud services as well as updates to enhance performance, security and stability.

Update - As of 11:55PM all cloud servers are back online. Maintenance and upgrades to the hypervisor OS and host node hardware are fully completed at this time. We appreciate your patience and continued support. Happy New Year!

2016-11-25 :: Cloud 01- Linux swap Storage Issues

7:46AM CST - We are working through an issue with the Linux swap storage device taking errors. If you have a Linux server, you can use the 'swapoff -a' command to stop it from using swap, which will stop the errors if you are seeing any, and later re-enable swap via 'swapon -a' once the issue is remedied. This will not work, however, if you routinely overuse your server's RAM, forcing it to use disk as memory; in that case we recommend upgrading your server's RAM allocation for best performance, since disk access is far slower than RAM.
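As a rough sketch of the workaround described above (commands require root, and device names and behavior vary by distribution; this is illustrative, not an official remediation script):

```shell
# Show which swap devices are currently active (non-destructive).
cat /proc/swaps

# Disable all swap so a failing swap device stops taking I/O errors.
# Note: swapoff will fail if in-use swap pages cannot fit back into
# physical RAM -- the scenario the notice above warns about.
if [ "$(id -u)" -eq 0 ]; then
    swapoff -a
    # ...later, once the storage issue is remedied, re-enable swap:
    swapon -a
else
    echo "not root: would run 'swapoff -a' and later 'swapon -a'"
fi

# Confirm the current swap state.
free -h
```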

2016-12-03 :: Cloud03 Software Upgrades

On Friday, December 3 beginning at 1:45AM, we will be performing an upgrade to this particular cloud's infrastructure to support newer hypervisor code in addition to updating the switching infrastructure to support additional features. This maintenance window will require all hypervisors and cloud VM's to be shutdown. Only cloud servers in the 66.180.167.* network will be affected.

2:40AM CST Maintenance is continuing; cloud hosts are rebooting now. The maintenance window will be extended to 4:30AM due to the storage resource taking longer to validate disk integrity than expected.

3:43AM CST We are performing a final network health check and will be bringing the VM's online within the next 15 minutes.

4:53AM CST All VM's have been verified over the last hour to ensure they booted successfully.

2016-10-30 :: Cloud03 Maintenance Update

12:57AM CDT - Maintenance reboots will begin at 1AM CDT.

1:30AM CDT - A hypervisor is panicking after an update. This will cause a small number of servers to be down for about 15 minutes longer than expected. Please bear with us as we bring the affected hypervisor online as quickly as possible to remedy the issue.

1:44AM CDT - All cloud virtual servers are back online.

1:57AM CDT - One other hypervisor is experiencing kernel issues affecting 5 cloud virtual servers. We are working to bring these servers online now.

2:15AM CDT - 3 cloud servers are unresponsive at this time. All other cloud servers are up and operational. We are working to bring the remaining servers online as quickly as possible. We apologize for the inconvenience.

2016-10-20 :: Cloud03 Hypervisor Upgrades

We will continue performing software upgrades on the cloud03 zone through the end of October; this affects servers with IP addresses in the 66.180.167 subnet. We expect downtime between 10PM-4AM CDT to be less than 60 minutes as client cloud servers are rebooted to take advantage of virtualization software updates.

2016-10-01 :: Cloud01 Maintenance

On October 1 at 10PM CDT, we will be performing reboots of (3) hypervisors in the Cloud01 zone to install patches and other updates. We expect the cloud servers on those nodes to experience an outage of no more than 15 minutes. Some cloud servers that have not been rebooted in more than 180 days may go through an fsck (for Linux users) which may delay the boot time until it finishes.
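For context on the fsck note above: ext2/3/4 filesystems record a mount count and a last-checked timestamp, and a boot-time fsck is triggered when those limits are exceeded, which is why long-running servers can see delayed boots. A hedged sketch for checking these values before a planned reboot (the device name /dev/sda1 is a placeholder; tune2fs ships with e2fsprogs):

```shell
# Placeholder device; substitute your server's actual root filesystem.
DEV=/dev/sda1

# tune2fs -l prints filesystem metadata, including the mount count and
# last-checked time that determine whether fsck runs at next boot.
# Guarded in case the tool or device is absent on this system.
if command -v tune2fs >/dev/null 2>&1 && [ -b "$DEV" ]; then
    tune2fs -l "$DEV" | grep -Ei 'mount count|last checked|check interval'
else
    echo "tune2fs or $DEV unavailable; skipping filesystem check info"
fi
```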

2016-09-21 :: Cloud01 shared server
13:32 CST - After an fsck on the disk, the shared hosting server is back online. 13:21 CST - A shared hosting server is currently undergoing a reboot.
2016-09-16 :: Cloud 1 Maintenance, Sept 16 10PM CDT

Between 10PM Sept 16 and 4AM Sept 17, we will be performing an update to hypervisor node #7. We will migrate all cloud servers from that node prior to the maintenance. Some cloud servers with older OSes (approximately 6 cloud servers) will be rebooted during this maintenance. We expect the maintenance window to be less than 10 minutes in length.

We will shutdown the cloud servers that cannot be hot migrated and bring them back online within 2-3 minutes. If your cloud server has not been rebooted in more than 180 days, it will go through a normal fsck (disk check) and then come back online within 5-10 minutes.

2016-09-02 :: Cloud 1 Maintenance / Upgrades
Throughout the weekend - 9.2.16 - 9.5.16 - we will be performing maintenance on our Cloud1 platform, implementing upgrades to add enhanced features and services. You may see brief periods of inaccessibility on services in this platform during this period. If your server has an IP address within the 66.180.164.xx or 216.235.70.xx subnet, you will be affected. If your server's IPs are not in those two subnets, you will be unaffected by this maintenance.
2016-08-26 :: Host Outage: Cloud01 Plesk Server

3:23PM CDT - The Plesk server experienced an OS failure. The machine has been repaired and the disk is going through a file check. Once it completes the machine should be back online. We expect it to complete within 15 minutes.

Update 3:35P CDT - The server is back online and operating as expected.

2016-06-06 :: DoS Attack
4:06 PM CST - We have taken care of the offending host and are implementing some additional measures to minimize potential future issues. 3:58 PM CST - We have seen another attack instance. We are working to quash this situation. 5:15 AM CST - We have identified the offending source and have implemented remedy. You should be seeing no further latency on the network at this time. 5:05 AM CST - We are currently working a large DoS attack on the network that is causing some significant latency across most segments of the network.
2016-05-25 :: Cloud 3 Maintenance May 27 10PM - May 28 3AM

We are planning a reboot of all virtual servers on the Cloud3 platform for a software update, which will bring enhanced features and bug fixes, starting at 12:01AM on May 28. Preparatory tasks will begin May 27 at 10PM; however, the first 2 hours will not be service impacting. The maintenance window is expected to wrap up by 3AM.

You will see brief momentary interruptions and reboots of your virtual servers during these periods as host nodes are upgraded (virtual servers will not be down longer than it takes them to reboot and do disk checks). Virtual servers with IP's within the subnet will be affected.


All maintenance was successfully completed by 2AM May 28. All VM statistics such as bandwidth used and disk IOPS should now be functional.

2016-04-25 :: Cloud 1 Maintenance, April 25-30

Throughout the week of April 25-30 we will be performing the first phase of hypervisor maintenance on the "Cloud 1" infrastructure (servers with IP addresses in the 66.180.164.* range). We will be rebooting hypervisors after 9PM CDT; however, all client servers that are hot migratable will be migrated prior to the reboots. Most client servers in this cloud are hot migratable, so we do not expect downtime for most clients. If a client server cannot be hot migrated, the expected downtime is 3-5 minutes. The maintenance windows are intended to run from 9PM-12:01AM.

2016-04-11 :: Flex Dedicated Cluster 2 Maintenance
We will be doing maintenance on Flex Dedicated Cluster 2 this evening, replacing the switch in this cluster. This may lead to an inability to reach your VM's for up to 2 minutes.
2016-03-20 :: Cloud 3 Upgrades - March 20

2:17AM CDT We are currently working through an issue with the vendor on a storage issue. Once this is resolved we will bring all cloud servers back online.

3:12AM CDT - We are performing the routine startup for the storage system. We will start the cloud servers within the hour.

4:04AM CDT - Client cloud servers are being brought online.

All but (2) client cloud servers were successfully brought online by 5:30AM CDT. We apologize for the extended maintenance window; during the window a bug was found in the vendor's storage code which needed to be fixed and then the storage system was scrubbed to ensure there were no errors. This final operation is what created the delay in bringing some cloud servers online.

2016-02-06 :: Cloud 3 Upgrades

Cloud3 will be undergoing an upgrade on Saturday February 14 and Saturday March 20 between the hours of 11PM CST and 5 AM CST. You may see brief momentary interruptions and reboots of your VM's during these periods as the compute nodes are upgraded. Servers with IP's within the subnet will be affected.

2015-12-30 :: VM Migrations
3:00 PM CST - Over the next 2 days/nights, we will be migrating affected VM's to alternate SAN storage. Depending on the size of your VM, this could take 5-30 minutes. The VM will need to be powered down during the migration. We will work to minimize any downtime and will try to do this during off-peak hours, as we know there is no good time for downtime. We apologize for the trouble.
2015-12-30 :: SAN Cloud One issues persist
As of about 8:30 AM CST, service was restored to all VM's. We are working on a plan to move VM's off of this SAN. We are working on SAN01 of Cloud1 at this time. Once service is restored, we will be moving affected VM's from it to a replacement storage location.
2015-12-29 :: Cloud1 SAN Issues
7:45 AM CST - All VM's are operational on cloud1. If you are seeing any abnormalities, please let us know. We are working to get to the root cause of this issue. 6:55 AM CST - We are again seeing an issue with one of the SAN's on our cloud1 platform. This is affecting servers in the associated subnets. We are working to remedy this as quickly as possible and determine what needs to be done for long-term stability.
2015-12-28 :: Cloud1 Issues
10:00 AM CST All servers with storage on the affected SAN are operational. If you find any anomalies, please let us know immediately. 8:00 AM CST We are seeing issues that seem to be pointing to the SAN at this time with VM's in the subnet. We are working to determine the cause and remedy as quickly as possible.
2015-12-27 :: Cloud1 instance swap device issues

Between 3:46AM and 5AM we noticed issues with some cloud server swap space. During this timeframe, we either rebooted any servers that were showing a kernel panic or remounted swap via 'swapoff -a; swapon -a' to remedy the issue. The small number of machines that kernel panicked were those that were already running low or out of physical RAM.

If you are running an instance with less than 512MB RAM and your server was rebooted, please contact us and schedule a RAM upgrade to enhance performance without relying on disk for swapping memory when memory is exhausted.

2015-12-24 :: Cloud1 instance issues
11:30 PM CST - This issue has been resolved. It turned out to be an issue with the backend reporting services. We corrected it and everything is running normally now. 10:45 PM CST - We are currently working an issue with cloud1, which contains VM's in the subnet. There appears to be an issue with one of the SAN's causing problems for a number of VM's within this cloud.
2015-12-08 :: Cloud3 Final Maintenance
We are planning a quick reboot of all virtual servers on the Cloud3 platform starting at 11PM on December 12. The maintenance window is expected to wrap up by 1AM December 13. You will see brief momentary interruptions and reboots of your virtual servers during these periods as host nodes are upgraded (virtual servers will not be down longer than it takes them to reboot and do disk checks). Virtual servers with IP's within the subnet will be affected.
2015-12-06 :: Cloud3 Maintenance - CHANGE DATE

As of 12:03AM CST all maintenance has been completed and all virtual machines on cloud3 on subnet 66.180.167 are online.

Reboots for Cloud3 on Dec 5, 2015 have been postponed due to a vendor suffering an outage that made the update codebase unavailable. The new date and time for this maintenance window will be Dec 6, 2015 between 10PM - 1AM. Virtual machines on the 66.180.167 subnet will see one server reboot during this window. The expected downtime will be the time it takes for the virtual machine to shut down and reboot.

2015-11-28 :: Cloud3 Maintenance - UPDATE

Cloud3 will be undergoing maintenance and upgrades 12/5 between the hours of 9PM CST and 5 AM CST in lieu of the 11/28 date as was previously scheduled. On December 5, you may see brief momentary interruptions and reboots of your VM's during these periods as host nodes are upgraded. Servers with IP's within the subnet will be affected.

2015-11-21 :: Cloud3 Maintenance
Cloud3 will be undergoing maintenance and upgrades during the upcoming weekends of Sat 11/21 and 11/28 between the hours of 9PM CST and 5AM CST. You may see brief momentary interruptions and reboots of your VM's during these periods as host nodes are upgraded. Servers with IP's within the subnet will be affected.
2015-09-12 :: Update: Temporary ICMP Blocking

7:08AM - The inbound ICMP block has been lifted and the DoS source mitigated. We apologize for the inconvenience for any inbound ping monitoring that triggered during this event.

All systems are operating normally.

2015-09-12 :: Temporary ICMP Blocking

5:50AM - While we isolate a DoS attack, we will be disallowing inbound ICMP to one of the cloud subnets. This will only affect pings to your servers on our cloud subnets.

2015-06-14 :: Internet Capacity Upgrades and Maintenance
6/15/15 - 2:38 CST - The capacity upgrades have been completed and the maintenance has concluded. 6/15/15 - Maintenance on the connectivity upgrades has commenced. You may see some brief periods of inaccessibility or latency during the maintenance window. On the night of Sunday, June 14 going into Monday, June 15, 11:00 PM CST - 4 AM CST, we will be adding additional capacity from TimeWarner: an additional 1Gbps circuit. During this time, we will also be moving and cleaning up the upstream connections into the router. This will cause some brief periods of inaccessibility of about 5-10 minutes for the physical connections as well as BGP route reconvergence.
2015-04-28 :: Datacenter Maintenance
Between 10 PM CST and 2 AM CST we will be doing some cable maintenance. You may see brief momentary loss of connectivity lasting no more than 120 seconds. This is in preparation for additional firewall enhancements as well as additional connectivity arriving in the coming weeks.
2015-04-14 :: Network Latency
13:17 CST - A customer server was again found compromised early this afternoon, causing significant latency across a couple of subnets on the network. We have placed this server in a sandbox so it can no longer affect others. We are working to implement additional safeguards that will prevent single servers from having significant adverse effects outside of their subnets. We apologize for the trouble this caused.
2015-04-13 :: Compromised servers causing latency
11:50 AM CST - 2 customer servers that were compromised and initiating outbound attacks late this morning caused latency on several portions of the local network. The offending servers have been blocked. Normal traffic patterns and response should resume shortly.
2015-04-06 :: SAN Issue - Cloud1

4.6.15 1700 CST We have found the latency issue to be related to one of several SAN's. We are currently restarting VM's associated with this particular SAN. Some will require an fsck. 4.6.15 1550 CST We are currently investigating a DoS attack affecting servers in the 66.180.164 and 216.235.70 subnets.

As of 1930 CST, all but two of the affected VM's were back online.

2015-03-17 :: Latency - DoS Attack
11:55 AM CST Traffic flow is normal once again after mitigation of the DoS attack this morning. Please advise if you see any further latency issues. We apologize for the trouble this caused. 10:00 AM CST, we are currently working a DoS attack that is causing latency on the cloud1 platform. This is affecting servers in the subnet.
2015-03-12 :: Network Maintenance March 14 - March 15
On Saturday, March 14 starting at 10:00 PM CST through Sunday, March 15 at 4:00 AM CST, we will be conducting network maintenance, specifically applying core router updates. You may see brief moments of latency or unreachability during this maintenance window as updates are implemented. If you have any questions or problems, let us know.
2015-02-24 :: Area Power Outage affecting Netsonic Offices
2.24.2015 15:15 CST We are currently experiencing a localized power outage in our area that includes our offices. This will affect your ability to reach our office personnel during this power disruption. The datacenter facility is currently running on backup power provided by our generator. Utility power was restored at approximately 18:30 CST.
2015-01-07 :: Cloud04 Server
1.7.2015 7:56 AM CST - The issue with the cloud04 server has been resolved. 1.7.2015 7:47 AM CST - We are currently working an issue with the cloud04 hosting server. Web services will be restored momentarily.
2014-11-16 :: Network / Gateway Router Maintenance - UPDATE #3

6:08AM - Service has been restored to the remainder of the servers affected outside the maintenance window. We will continue to monitor the situation. Our internal monitoring is indicating all servers are online and reachable. Please feel free to submit a support request in your portal if you believe you may be having issues with your server.

2014-11-16 :: Network / Gateway Router Maintenance - UPDATE

2:01AM - We are continuing to work through a hardware issue on the core switches. All but 40 host connections are accounted for. We apologize for the inconvenience and ask that you bear with us as we finish repairing the issue.