Application Replication and Consistency 
If it hasn’t happened to you (yet), you have heard the stories: the application server crashes, and when the server comes back up, the application doesn’t.

Presumably the point of replicating data is that the replicated data is useful. For flat files that is not a big deal; the file is either there or it isn’t. For application data it is a whole different issue.

The problem is that applications have multiple files to update – database, log, index, etc. – and they have buffers, so the most recent data for one of those files might not be on disk yet but sitting in a buffer. If you just replicate what is on the disk, you don’t have a consistent, recoverable state.

The reality is that sending and applying all changes in the exact order they occurred on the production system does not, by itself, guarantee a recoverable copy of the data on the secondary system – at best it gives you a crash-consistent one, and if that were enough, the server crash would not be a problem in the first place. The only way to guarantee a recoverable copy is to quiesce the application and flush its buffers to disk.
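
To make the “quiesce and flush” idea concrete, here is a minimal sketch of the pattern in Python, assuming (purely for illustration) a MySQL database whose files live on an LVM volume called /dev/vg0/dbdata. Any real replication or backup product wraps these same steps up for you; this is not a recipe for any particular tool.

import subprocess

def snapshot_while_quiesced():
    # Keep one client session open for the whole operation: the global read
    # lock only lasts as long as the session that takes it.
    session = subprocess.Popen(
        ["mysql", "--user=root", "--batch", "--unbuffered"],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

    # Quiesce: stop writes and flush table data to disk, then wait for the
    # SELECT output so we know the lock is in place before snapshotting.
    session.stdin.write("FLUSH TABLES WITH READ LOCK;\nSELECT 'locked';\n")
    session.stdin.flush()
    session.stdout.readline()

    # With the application quiesced, a point-in-time snapshot of the volume
    # holding its files is consistent and safe to replicate or back up.
    subprocess.run(
        ["lvcreate", "--snapshot", "--size", "5G",
         "--name", "dbsnap", "/dev/vg0/dbdata"],
        check=True)

    # Release the lock; normal writes resume while the snapshot is copied.
    session.stdin.write("UNLOCK TABLES;\n")
    session.stdin.close()
    session.wait()

if __name__ == "__main__":
    snapshot_while_quiesced()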

It could be you are lucky and the application is robust enough to recover from any inconsistencies, but are you prepared to bet the company (and your job) on it?

So you have a choice. Use write-order replication and perhaps spend time running some sort of application recovery process after a failure – if it comes back, you have (hopefully) negligible data loss. Alternatively, use a mechanism that guarantees the application recovers without hassle, and accept losing a little data. It is a trade-off between RTO and RPO, and you need to work out which matters more in your environment.

Solidcore S3 Change Control and Security 
Managing change is an issue many of our clients are facing. We have been working with an organisation called Solidcore to deliver closed-loop change control and security to server, workstation, network and database installations. The intrinsic problem with most change control systems is that they are open loop, and therefore uncontrolled.

The S3 product consists of a client application that installs on each server; once activated, this prevents any and all changes to that server. A management server monitors all of the clients and provides central management and reporting.

The server software can be integrated with help desk, change control and remediation services – to tie support tickets to change control, for example.

So…

What does it look like?

On the servers running the client there is little or nothing to see, at least until you try to change something. Below is the message we receive when trying to delete the dialer.exe file in the C:\Windows directory.

[Screenshot: the access-denied message shown on the client]

As you can see, I am prevented from deleting the file. On the management server I see the following:

[Screenshot: the corresponding alerts on the management server]

I tried to delete the file several times, so you can see that an alert is raised each time someone tries to change, delete, rename or overwrite a file.
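
To give a feel for what sits behind those alerts: conceptually, the agent knows the approved state of every protected file and flags (and, in S3’s case, blocks) anything that deviates from it. The toy sketch below only does the detection half, using a baseline of file hashes; the path is just an example and this is in no way how Solidcore actually implements it.

import hashlib, os

baseline = {}   # path -> SHA-256 digest captured when the box was "sealed"

def seal(paths):
    # Record the approved state of each protected file.
    for path in paths:
        with open(path, "rb") as f:
            baseline[path] = hashlib.sha256(f.read()).hexdigest()

def audit():
    # Compare the current state against the baseline and report deviations.
    alerts = []
    for path, digest in baseline.items():
        if not os.path.exists(path):
            alerts.append(f"{path} has been deleted or renamed")
        else:
            with open(path, "rb") as f:
                if hashlib.sha256(f.read()).hexdigest() != digest:
                    alerts.append(f"{path} has been modified")
    return alerts

seal([r"C:\Windows\dialer.exe"])
for alert in audit():
    print("ALERT:", alert)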

Obviously, there are times when you actually want to update your servers.
Below is the screen where I allow updates to a server:

[Screenshot: the screen used to authorise updates to a server]

You can see the padlock icon showing that the box is presently secured; it changes to an open padlock once you click YES. This sort of task can be tied into your helpdesk solution.

S3 Control records all changes and attempted changes in a secure format that can’t be altered, which helps address your compliance issues. It also helps with fault finding: looking through a centralised record of all changes is much easier than working through system logs trying to figure out what may have happened to the system or application, who was involved, and so on.
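
On the “can’t be altered” point: we don’t know the detail of Solidcore’s storage format, but a common way to make an audit trail tamper-evident is to chain each record to a hash of the one before it, so that editing any historical entry breaks every later link. A minimal sketch of the idea, for illustration only:

import hashlib, json, time

log = []

def record(event):
    # Each entry commits to the previous entry's hash, so rewriting history
    # invalidates everything that follows.
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"time": time.time(), "event": event, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify():
    # Walk the chain from the start; any edited or reordered entry shows up
    # as a broken link.
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

record("attempt to delete C:\\Windows\\dialer.exe - denied")
record("change window opened for ticket 1234")
print(verify())   # True until anyone edits an earlier entry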


For more info click here...

Portlock Storage Manager 4.0 Released 
New in 4.0 is support for NetWare 6.5 Service Pack 6, software mirroring, support for Virtual Server and VMware, and support for EMC and NetWare multipathing.

Hindsight is a great thing, but relying on it is not a great plan. We all apply service packs, patches and updates with great regularity, and most of the time all is fine with the world. HOWEVER, if life were always that simple we wouldn’t get so many support calls along the lines of “I’ve just done this and lost all my data, can you help recover the system/get my data back/or whatever”. Often we can help, but it can mean a lot of hassle, a lot of downtime, lots of annoyed users and expense.

Before you do anything to a system or storage, take an image and make sure you have a current backup. You know it makes sense ;-))
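
As a trivial illustration of building that habit into a patching routine (the device and image paths below are made-up examples, and in practice you would use a proper imaging tool such as Storage Manager rather than raw dd):

import subprocess

SOURCE = "/dev/sda"                      # disk about to be patched
IMAGE  = "/mnt/backup/sda-pre-sp6.img"   # where the pre-change image goes

def image_disk():
    # Take a raw image of the system disk before any changes are made.
    subprocess.run(["dd", f"if={SOURCE}", f"of={IMAGE}", "bs=1M"], check=True)
    # Cheap sanity check: the image should hash through without read errors.
    subprocess.run(["md5sum", IMAGE], check=True)

if __name__ == "__main__":
    image_disk()
    print("Image taken and readable - OK to apply the service pack.")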


Converting a Xen VM to a VMware ESX VM with VMware Converter 3 
We have several virtualisation host servers here in the office, Xen and VMware being the main two. As we do a lot of testing, rebuilds, etc., we often move virtual machines about between host servers – typically between ESX servers.

Recently we needed to move a virtual test server (VM) off a Xen host server, and decided to move it across to a VMware ESX 3 (VI3) server we knew was under-utilised.

We decided to use the VMware Converter software tool to do this and what follows is a brief description of how it went.

VMware Converter comes in two formats: a bootable CD and a Windows executable. The bootable CD allows you to move a machine without making any changes to the system, whilst the EXE version requires you to install the software within Windows on the VM first.

We went with the EXE version, mainly for convenience. After running the install EXE, the software installs like any other Windows application.

You then start the installed converter software with a double-click.

Click the IMPORT MACHINE button to start the wizard.
You’ll be prompted for the source type (local machine) and login (again local). Choose the drives you’d like to take with you.

Next, you choose your destination. This is either direct to an ESX server or, in our case, via the VirtualCenter server. At this stage we had to select the cluster, the ESX host server and finally the datastore where you’d like the VMDK files etc. to reside.

You then need to select which network interfaces within VMware you want to associate with the NICs on the machine you are importing/converting/migrating.

Finally, decide if you want to install the VMware Tools. We said yes to this, which becomes important later on – see gotcha 2.

VMware Converter now starts working, creating a VM on the ESX server and importing the machine. This took 10m 59s in our case, but admittedly it was a pretty simple Windows 2003 server with only one Java application installed.

Once this process is complete you’ll be asked if you want to start the VM. I said no, shut down the original VM on the Xen server, and then started the new VM on the ESX server.

Here is the first “Gotcha”. Our original server was a W2K3 server on a Xen host. Why does this matter? Because the server defaulted to booting in “PV-ENABLED” mode, which dies on ESX: “PV-ENABLED” mode loads Xen paravirtualised drivers alongside the hardware-based (VT) virtualisation, and those drivers are obviously not supported under VMware.

So we needed to manually select the non-“PV-ENABLED” boot entry via the VMware console.

Gotcha 2 is that we told VMware Converter to install the VMware Tools into the new VM, which requires a reboot. So we had just managed to log in and were working towards changing the boot options when the server restarted.

Once we had rebooted (manually choosing the non-“PV-ENABLED” entry again) we logged in and changed the boot options: right-click on My Computer, choose Properties, click Advanced, then Settings under Startup & Recovery.
Change the default operating system away from the entry with “PV-ENABLED” in it, click OK a couple of times and it’s done.
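
That dialog is just editing boot.ini behind the scenes. For illustration only, the file ends up looking something like the snippet below, with default= pointing at the plain entry; the exact entry names, and the /gplpv switch that the GPL Xen PV drivers add, will differ on your system, so check your own file rather than copying this.

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Standard" /noexecute=optout /fastdetect
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Standard (PV-ENABLED)" /noexecute=optout /fastdetect /gplpv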

Pretty painless really.

References:

Editing boot.ini Windows 2003

VMware Converter


Power consumption of servers (VMware, Cassatt, Xen)
We’ve written about the power consumption of servers in the past, and it is an increasingly important issue both here in the UK and in the USA.

Treehugger.com recently reported on a study by Jonathan Koomey of Lawrence Berkeley National Laboratory. To quote their piece:

In the last five years US servers burned through 5 million kw of power – that’s the equivalent of five 1GW power plants, or more than “the total possible output from the Chernobyl plant” when it was working.




Ars Technica covers the same report and also comments on how the US Environmental Protection Agency will soon begin a six-month study of power consumption in the data centre.

It is increasingly clear that managing the power consumption of your enterprise is going to have to be on the IT department’s agenda.

Technologies such as virtualisation from Xen, VMware, etc. will decrease the number of servers, whilst Cassatt Collage will help power down servers that are not needed based on the current load (and bring them back online when load increases, of course).
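
To put very rough numbers on the consolidation argument – every figure below is an assumption picked for illustration, not a measurement:

# Back-of-the-envelope consolidation saving; all figures are assumptions.
servers_before   = 20        # lightly loaded physical boxes
watts_per_server = 300       # typical draw of a commodity 1U server
hosts_after      = 2         # virtualisation hosts replacing them
watts_per_host   = 600       # bigger boxes, working harder
cooling_factor   = 2.0       # roughly 1 W of cooling per 1 W of IT load
hours_per_year   = 24 * 365

before_kwh = servers_before * watts_per_server * cooling_factor * hours_per_year / 1000
after_kwh  = hosts_after * watts_per_host * cooling_factor * hours_per_year / 1000

print(f"Annual consumption before: {before_kwh:,.0f} kWh")
print(f"Annual consumption after:  {after_kwh:,.0f} kWh")
print(f"Saving: {before_kwh - after_kwh:,.0f} kWh per year")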

WAFS can further assist by decreasing the number of servers and the amount of storage required in your branch offices.

Increasingly, "Carbon Footprint" is an issue that both public and private organisations must look at seriously. It will have a big impact on your business and of course on the environment.




