Wednesday, October 3, 2012

Go Daddy and Revolution: What would you do?


What would happen if the lights went out all over the world and everything went dark? That's the premise of the new NBC drama REVOLUTION, which airs September 17th at 10 EST. Since Disaster Recovery is right up our alley, we figured it was worth a look for our crew.

Yeah, yeah. It's unlikely the entire globe will suddenly be devoid of all electricity, but imagine if - gasp - just the Internet went down? Can you even remember a time when we didn't have the interwebs, not to mention the smart phones that keep us connected 24/7? Would you panic? Would your heart start racing like those of Go Daddy's customers on September 10th, when almost all of their websites and hosting went down? It was virtual pandemonium. The help center lines rang a fast busy all night, and the system was overloaded with terrified business people in the middle of a complete productivity meltdown that afternoon.

The Go Daddy incident (whether it was or wasn't the work of Anonymous - Go Daddy claims internal router issues) was small potatoes compared with what was recently unleashed on Iran's nuclear program. It's proof positive that with a more organized and well-funded effort, cyber threats to our critical infrastructure - water purification, power generation and who knows what else - are a very real thing.

What's known as the Stuxnet virus, which according to The New York Times is credited as the work of the American and Israeli governments, is a highly sophisticated computer worm delivered via a simple USB thumb drive. Stuxnet infected the industrial software and equipment running Iran's nuclear centrifuge system and literally damaged the machines essential to Iran's uranium enrichment program.

Thankfully for most Americans, the Go Daddy event is the closest thing to a widespread outage they've ever been subjected to (and we hope it stays that way - and that they had online data backup in play), but on Monday evening, we can all play a little "what if." (The pilot is on the NBC website here.)
Like the girl at the end of this video, we say, "I hope that never happens," but we want to know: how were you affected by the Go Daddy outage - and how would you react if the power went out?

Saturday, September 8, 2012

Hurricane Isaac Update


Here is a live look at wind speeds as reported at hint.fm/wind. This image shows the measured sustained wind speed in New Orleans at 10:00 a.m. today.

Hurricane Isaac winds, Aug 29, 10:00 a.m.
It's been a stressful couple of days for Global Data Vault, but things are running smoothly. As the storm approached, we reviewed the status of data protection for all customers in the region expected to be affected by Hurricane Isaac. We proactively contacted those who might have needed help. We know their data protection schedules, so we can compare the freshness of our copies of their data to determine if support is needed. When the storm hit, all data was current.
We're already providing disaster recoveries to customers in New Orleans, and in some cases, we're bringing their systems live in hosted environments in our Dallas data center. As the storm progresses slowly, we plan to continue in this role for at least the next few days.


The TV show Revolution – could it really happen?

Roughly half a billion people lost power in India late last month. It was the largest electrical blackout ever, affecting ten percent of the world's population. The blackout extended almost 2,000 miles, and the impact was felt even within the US as data centers and India-based telemarketing companies were crippled and without power. While we haven't experienced a blackout like the one that occurred in India, the US has its own share of electric utility pressures. The continued ratchet up in power usage and the stress it places on our infrastructure is alarming. Having seen another notice from the state of Texas regarding stress put upon the ERCOT grid, and having lost power in our new office (NOT our DATA CENTER - see the end of this article on why we're safe) three times within the first 5 months of moving in, one's thoughts can't help but consider the what if's… and this foreboding twist is exactly the premise of NBC's new television show, Revolution. But could "Revolution" really happen? Or at least the main premise of the show, where the entire world goes dark in a total blackout?

Certainly there's peace of mind in knowing that most data centers purport to have diesel-powered generation capability backing them up. They all tout it in their marketing materials and claim to be infallible. Sure, that's comforting, but consider what happened when the Derecho storm ravaged Washington, DC - online retail giant Amazon.com's data centers went offline, leaving Netflix, Pinterest, Instagram and Heroku in a panic. So… is a total blackout really that implausible? You be the judge:

Granted, this dramatic interpretation of what "could" happen is far-fetched - certainly we're not headed for a total global blackout yet. But there are legitimate concerns when protecting your data. Should the US suffer a massive outage, the true grit of those diesel-powered generators would surely be tested. If there's a widespread power failure, there's someone, somewhere, who dropped some critical piece of their data. It's a safe bet that not every data center that claims to be fully resilient really is. And even if their lights are on, their connectivity partner's may not be. And connectivity is just as important when you're evaluating the continuity of your data center or disaster recovery site.

Having power is one thing. Having connectivity is another. When your pipe to your data center is offline, so is your business. One way to further protect your business from a power interruption is to make certain your data center uses a Tier 1 network provider. Dig a little deeper into the data center's marketing material and determine what level of connectivity they use. Global Data Vault is contracted with a Tier 1 network provider, Level 3 Communications. Tier 1 essentially means that Level 3 doesn't have to pay anyone to connect any two points around the world. If that thought has your mind whirling a bit, here's a link to a list of Tier 1 networks so you can verify whether your data center is aligned with one of these Tier 1 network providers - and decide whether Revolution is just another over-dramatized TV program.






Friday, September 7, 2012

A New SAN: The Gift that Keeps on Giving


You know that feeling of excitement right after you buy something you've had your eye on for a while, when it sinks in that now it's yours? Well, our collective faces are grinning in the afterglow of buying a brand new SAN (storage area network) from HP. In the world of Global Data Vault, this SAN is the ultimate new hardware present we could give ourselves.

Let's review this exciting technology and discuss how it helps us deliver better, faster, and more reliable services to you.

In geek speak, SANs are enterprise-class, dedicated networks offering access to consolidated, block-level data storage, and those are words that make us happy. SANs are primarily used to make storage devices - like disk arrays - accessible to servers so that the devices appear as locally attached devices to the operating system. So beyond the colossal sophistication of what a SAN can do, it allows Global Data Vault to take our level of service up a notch with added reliability, recovery speed, economies of scale and management features:

Reliability: It's a dirty little secret among technology service providers, but things break regularly. Premier data backup and disaster recovery companies (like Global Data Vault) have a large system of fail-safes for when hardware goes bad, so that replacements are put into production seamlessly without anyone being affected. A SAN encapsulates most of the issues that can go awry and fixes them on its own, which makes for incrementally better reliability. We like that.

Recovery speed: SANs facilitate a more effective disaster recovery process. They are better at keeping multiple copies of customer data because a SAN can span to a distant location that houses another storage array. That enables storage replication through the disk array controllers, which means after a disaster event, you are up and running faster than you might think possible.

Economy of Scale: A SAN provides more economical growth, giving us a much less expensive scalability option, and we pass those efficiencies on to our customers. That should give you something to smile about.

Management features: A SAN storage network simplifies storage administration because it has networking capabilities built into it. With a lesser storage option, we would have to logically separate the storage into different devices. The SAN does it for us. Think of it this way: a less sophisticated storage option is like a parking lot full of lots of little buckets that we pour data into. A SAN lets us create one highly virtualized storage pool that we can pour all the data into. The SAN is built to work out the separation logically, and more effectively than we could ourselves.
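To make the analogy a bit more concrete, here is a toy sketch in Python - purely illustrative, and not how an HP SAN actually works internally - of a pool that presents several disks as one logical whole and decides placement for you, instead of you juggling separate buckets:

```python
# Toy model of the "buckets vs. one pool" analogy above.
# Purely illustrative; real SANs handle placement, striping and redundancy
# at the storage layer, not in application code like this.
class StoragePool:
    """One logical pool built from many physical disks; the pool decides placement."""
    def __init__(self, disk_sizes_gb):
        self.free = dict(enumerate(disk_sizes_gb))   # disk id -> free GB

    def allocate(self, size_gb):
        # Place the volume on whichever disk currently has the most free space.
        disk = max(self.free, key=self.free.get)
        if self.free[disk] < size_gb:
            raise RuntimeError("pool exhausted -- add another disk to the pool")
        self.free[disk] -= size_gb
        return disk

pool = StoragePool([500, 500, 1000])   # three disks presented as one pool
for volume in ("mail", "sql", "files"):
    print(volume, "-> disk", pool.allocate(200))
```

The point of the sketch is simply that the caller asks the pool for space and never has to care which physical bucket the data lands in.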

We opted for a SAN from HP for a number of reasons. Technical excellence, appropriate investment and superior support are key. And it may be a "chicken vs. egg" thing, but we understand that while most business leaders ask the question, "How do I succeed?", the founders of HP instilled a different question into their culture: "How can we contribute?" Global Data Vault shares this approach and appreciates all the good it brings when our suppliers share this view as well.


Thursday, August 16, 2012

Building a Disaster Recovery Site - Part 3


Whether you are outsourcing your Disaster Recovery program or keeping it in-house, the steps required to implement it are critical. In this series, we've drafted an outline of how to build a remote Disaster Recovery site for your IT. In our previous articles, one and two, we detailed steps 1 through 5:
1. Determine the business requirements for RTO and RPO
2. Determine the ideal location
3. Acquire and build the right platform at the Disaster Recovery location
4. Virtualization of the primary and backup environments and production systems
5. Moving the data
In this final installment, we're looking at the final three steps:
6. Synchronizing the data
7. Creating the failover environment
8. Testing the program
Synchronizing the Data
The only way to synchronize the data is to implement software or hardware to manage the process. The software application or hardware platform you choose will sit in the primary server environment and keep an eye on all of your data, continually monitoring it for changes. When it sees a change, it knows to send that change to the Disaster Recovery site.
This activity typically occurs at what's known as the "block level." Block storage is normally abstracted by a file system or database management system for use by applications and end users. A block of data is what your disk system writes. Think of it like this: your operating system has a file system that houses all of your data. When you hit "File > Open" in Microsoft Word and pick the file you want, you're telling your operating system to open that file. The operating system translates this into a disk location where your data is stored, the operating system and file system move the read head to that location, it starts reading the data - and voila, it pulls up the file on your screen.

The process of synchronizing data is an exercise in monitoring every discrete storage location for changes. When you save a file with revisions, your operating system sees that there is a "dirty block," and it saves the new data back to the redundant system on its next update (here's where your RTO and RPO become important). The synchronization piece watches for changes, marks them as dirty, moves them on a set schedule, and then you're done.
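To make the dirty-block idea concrete, here is a minimal sketch in Python. It is purely conceptual - real replication products (like the FalconStor and HP LeftHand gear mentioned next) do this at the storage layer, not in application code - but it shows the basic cycle of marking changed blocks and shipping only those on the next scheduled sync:

```python
# Toy illustration of block-level "dirty block" replication.
BLOCK_SIZE = 4096  # bytes per block (assumed for the example)

class ReplicatedVolume:
    def __init__(self, num_blocks):
        self.blocks = [bytes(BLOCK_SIZE)] * num_blocks   # primary copy
        self.replica = [bytes(BLOCK_SIZE)] * num_blocks  # DR-site copy
        self.dirty = set()                               # blocks changed since last sync

    def write(self, block_no, data):
        """A write lands on the primary and marks that block dirty."""
        self.blocks[block_no] = data.ljust(BLOCK_SIZE, b"\x00")[:BLOCK_SIZE]
        self.dirty.add(block_no)

    def synchronize(self):
        """On the replication schedule, ship only the dirty blocks to the replica."""
        for block_no in sorted(self.dirty):
            self.replica[block_no] = self.blocks[block_no]  # a network transfer in reality
        sent = len(self.dirty)
        self.dirty.clear()
        return sent

vol = ReplicatedVolume(num_blocks=1024)
vol.write(10, b"customer order #1001")
vol.write(42, b"updated inventory row")
print(vol.synchronize(), "dirty blocks shipped to the DR site")
```

Shipping only the changed blocks, rather than whole files, is what keeps the replication traffic manageable between syncs.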
Unfortunately, the software that you use to synchronize your data isn't off-the-shelf software you can purchase at your local Best Buy; you'll have to purchase an enterprise-class software application or a hardware solution from one of a number of providers in this space. One example of such a solution, which GDV uses, is FalconStor. On the hardware side, we implement HP LeftHand SANs to facilitate this process.
Creating the failover environment
Just because you've replicated your data doesn't mean your work is done. If only it were so simple! Now that you've replicated your data to a new Disaster Recovery environment, you need to be able to operate all of your systems from that new Disaster Recovery environment. Obviously, all of your Disaster Recovery data is now at a new IP location. The redundant systems need to be told to go into production and where the new data and transactions will be recorded. All of the access from the outside needs to be pointed at the new Disaster Recovery site, and the redundant systems must be informed that they're the live copy now, all before you're operational again.
Pointing the access to the new Disaster Recovery site can be as easy as switching domain names. You can move a web server just as you would move something as simple as a WordPress blog. The Internet understands that your website is at a different address. The end user still goes to the same website, but they are now accessing the live Disaster Recovery site. In general, a domain name represents an Internet Protocol (IP) resource, such as a personal computer used to access the web, a server computer hosting a web site, the web site itself, or any other service communicated via the Internet. An example of a domain name is "www.google.com".
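If you go the domain-name route, the cut-over is only complete once the rest of the Internet actually resolves your name to the Disaster Recovery site's address. Here's a small sketch of that verification step in Python; the hostname and IP address are placeholders (how you change the record itself depends entirely on your DNS provider), and DNS propagation is governed by the record's TTL:

```python
# After repointing a domain at your DNS provider, check that the name now
# resolves to the Disaster Recovery site's address. Placeholder values only.
import socket
import time

DR_SITE_IP = "203.0.113.200"  # hypothetical DR site address

def wait_for_failover(hostname, expected_ip, timeout_s=600, poll_s=30):
    """Poll DNS until the record points at the DR site, or give up.
    Keep TTLs low ahead of a planned failover so this converges quickly."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            resolved = socket.gethostbyname(hostname)
        except socket.gaierror:
            resolved = None
        if resolved == expected_ip:
            return True
        time.sleep(poll_s)
    return False

# Short timeout here purely for demonstration purposes.
if wait_for_failover("www.example.com", DR_SITE_IP, timeout_s=10, poll_s=5):
    print("DNS now points at the DR site - traffic is landing on the replica.")
else:
    print("Still resolving to the old address; check TTLs and the record change.")
```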
Larger enterprises with internet-based applications use what's called BGP routing (BGP is the Border Gateway Protocol). This allows them to update their routing very quickly if any part of their infrastructure goes offline. They seamlessly fail over to their Disaster Recovery site because the Internet routing happens at the speed of light. This is known as "active-active." The failover environment is always ready; RTO and RPO are nearly non-existent. There are various ways to accomplish the failover; two common strategies are repointing your domain names or a more sophisticated route like a BGP active-active scenario.
Testing the program
Now that you have your Disaster Recovery site built and the failover in place, you need to test it frequently. If it's not working correctly, you risk losing data and business resources.
At Global Data Vault, we test all of our customers' sites every 3 months - and if you're building your own Disaster Recovery site, we recommend you do the same. Test environments will differ according to what system you've implemented. For our protocol, we pull up the latest data replica for the client by having them log into a designated portal. The portal takes the client to their systems that reside in Global Data Vault's data centers, so they can view and test the timeliness of their data. We have the customer record a transaction while they're there to confirm it's working. If the RTO and RPO are on target, then we can rest easy.
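If you are scripting parts of your own quarterly test, the core questions reduce to simple arithmetic against your objectives. The sketch below uses illustrative numbers (it is not our portal's internals): it checks whether the freshest replicated record satisfies the RPO, and whether the simulated failover came up within the RTO:

```python
# Sketch of the checks a quarterly DR test can automate. Timestamps are examples.
from datetime import datetime, timedelta

RPO = timedelta(hours=1)   # example objective: lose no more than 1 hour of data
RTO = timedelta(hours=4)   # example objective: back online within 4 hours

def check_rpo(disaster_time, newest_replicated_record):
    """Data loss window = disaster time minus the newest record at the DR site."""
    loss_window = disaster_time - newest_replicated_record
    return loss_window <= RPO, loss_window

def check_rto(disaster_time, dr_site_live_time):
    """Downtime = time from the (simulated) disaster to the DR site going live."""
    downtime = dr_site_live_time - disaster_time
    return downtime <= RTO, downtime

disaster = datetime(2012, 8, 16, 9, 0)
ok_rpo, loss = check_rpo(disaster, newest_replicated_record=datetime(2012, 8, 16, 8, 35))
ok_rto, down = check_rto(disaster, dr_site_live_time=datetime(2012, 8, 16, 11, 30))
print(f"RPO met: {ok_rpo} (lost {loss}); RTO met: {ok_rto} (down {down})")
```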
As you can see from the 8 detailed steps in our three articles, building a remote Disaster Recovery site is no small feat. It's mired in complexity that varies for each business and its requirements. For information on how Global Data Vault can assist with building your Disaster Recovery site, please contact us today.

Wednesday, August 15, 2012

Building a Disaster Recovery Site - Part 2


What does it take to build a remote Disaster Recovery site for your IT? (Part 2 of 3) Whether you're outsourcing your Disaster Recovery program or keeping it in-house, the steps required to implement it are critical. In this series, we've drafted an outline of how to build a remote Disaster Recovery site for your IT. In the previous article we detailed steps 1 and 2:
1. Determine the business requirements for RTO and RPO
2. Determine the proper location
And in this article we'll examine steps 3 - 5:


3. Acquire or build the ideal platform at the Disaster Recovery location
4. Virtualization of the primary and backup environments and production systems
5. Moving the data

Building the right platform:
After establishing your RTO (restore time objective) and RPO (restore point objective) requirements, you'll need a way to make restores happen. If you're capturing transactions, for example, you must have a way to record those transactions in multiple places so that you'll be able to bring them to the disaster recovery site. And, naturally, they must be transferred to the disaster recovery site on a timely enough basis to achieve your RTO and RPO. You will need:

tools that can replicate databases
high-speed communications between the sites that are reliable and can keep the replication in place and current
redundant storage
redundant processing power
a way to switch between primary and backup quickly and easily without things going terribly wrong
the manpower to build out your infrastructure essentially twice, in separate locations, and to keep the two sites synchronized
the mechanisms in place to see that when the primary is offline, you switch to the secondary, and when the disaster is over, you switch back to the primary (a simple sketch of this switching logic follows the list).
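As promised in the last item above, here is a minimal sketch of that switching mechanism. The health-check URLs are assumptions, and a production setup would drive a load balancer, DNS change, or BGP announcement rather than a print statement, but the decision logic is the same:

```python
# Minimal sketch of "prefer primary, fail over to secondary, fail back later".
# Site URLs are placeholders; real deployments wire this into DNS/LB/BGP.
import urllib.request

SITES = {"primary": "https://primary.example.com/health",
         "secondary": "https://dr.example.com/health"}

def is_healthy(url, timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def choose_active_site():
    """Prefer the primary; use the secondary only when the primary is down."""
    if is_healthy(SITES["primary"]):
        return "primary"
    if is_healthy(SITES["secondary"]):
        return "secondary"
    return None  # both down - escalate to a human

print("Traffic should be directed to:", choose_active_site())
```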

Virtualization of the Environments
The best and most practical way to keep your sites synchronized is to use virtualization technology. The frequent and numerous changes that typically occur within a server environment make it much easier to maintain a virtual server than to keep up with the software and configuration change requirements of running a physical server. If you use separate physical servers, they will also have separate required maintenance activities. Case in point: Microsoft issues patches nearly weekly for its operating systems. If you don't patch the secondary server to the same level that you patch the primary server, you are guaranteeing problems, including security problems, when you move to using the DR environment.

Patches, upgrades, configuration changes, etc., are too complex to accurately maintain on two physical servers. Disaster Recovery systems perform vastly better in a virtual environment than in a physical one, and if you virtualize both servers, you can maintain the primary server and replicate the changes onto the secondary.

Moving the data
The most important issue to address when moving your data from the primary server to the secondary server is to set up a process that confirms that any changes to the primary environment are updated to the secondary regularly. So if your objective is 4 hours, your secondary has to update more often than that to keep pace and give you the opportunity to meet your RPO.
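A quick back-of-the-envelope check makes the point. Assuming, purely for illustration, a 4-hour freshness objective and a worst-case transfer time of 30 minutes per update, the replication interval has to leave room for the transfer itself:

```python
# Illustrative arithmetic: how often must the secondary be updated so the
# data at the DR site is never older than the objective?
RPO_MINUTES = 240        # e.g. a 4-hour objective, as in the text
TRANSFER_MINUTES = 30    # assumed worst-case time to ship one update

# The replica's data can be up to one full interval plus the transfer time old,
# so the interval must be shorter than the objective minus the transfer time.
max_interval = RPO_MINUTES - TRANSFER_MINUTES
print(f"Replicate at least every {max_interval} minutes to stay inside the objective.")
```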

We’ll continue our discussion on building a remote Disaster Recovery site to meet your RTO and RPO objectives in our next article with: synchronization, the failover environment and testing.
