Netuitive Unveils Service Analyzer 2.0 with Real-time Service Level Monitoring. 
Netuitive has unveiled the newest version of its self-learning performance management software for Business Service Management (BSM), Netuitive Service Analyser 2.0. This release significantly expands the enterprise feature set and promotes usability across organisational silos by analysing the environment from multiple perspectives: IT infrastructure, customer experience and business impact. In addition, Netuitive Service Analyser 2.0 now incorporates real-time service level monitoring, alarms prioritised according to business impact, and BSM topology views.

"There is a lot of talk by vendors about aligning IT with business objectives, which has mostly been just lip service," stated Nicola Sanna, president and CEO of Netuitive. "With Netuitive Service Analyser 2.0, business-to-IT alignment is truly built into the fabric of the product. Netuitive doesn't just have the BSM industry's only self-learning engine, it now has an enterprise-class interface and feature set that goes beyond what any other vendor is doing."

Netuitive Service Analyser is the only self-learning, continuously adaptive software for automating BSM that requires no correlation rules, scripts or manual thresholds. By automatically learning the performance patterns of an organisation's IT systems and services using statistical algorithms, Netuitive Service Analyser lets companies automatically track, correlate and understand business and systems data, enabling organisations to see in real-time how infrastructure performance issues affect bottom-line business performance. Netuitive Service Analyser 2.0 includes several enhancements that make it the only solution able to automatically tie IT performance to business impact:

• Multiview Dashboard – Netuitive Service Analyser now provides three distinct performance perspectives – IT infrastructure, customer experience and business impact.
• Real-time Service Level Monitoring – Companies can now monitor Service Level Objectives (SLOs) in context with infrastructure performance in real-time, enabling companies to tie performance to business metrics automatically.
• Alarm Prioritisation – Netuitive Trusted Alarms can now be prioritised according to criticality of the service. Alarms are categorised based on severity, type, health and availability, enabling companies to make informed business decisions in real-time.
• BSM Topology View – Netuitive Service Analyser 2.0 provides topology views of business services, where users can intuitively see the relationships between service components. From an easy-to-navigate service map, users can drill down to see how supporting components are performing.


Improve iShared performance across the WAN with local Active Directory authentication via a Domain Controller on the remote site. 
Recently we did some performance testing to test the hypothesis that having a Domain Controller on a remote site speeds up access to files on a Packeteer iShared cache. All of us in the office believed that a Domain Controller onsite at the remote end of the WAN would speed things up, but we had never tested that belief.

So, we set up a test environment across a 256Kbps WAN link with about 150ms of latency and ran some tests. The chart below shows our results:



What it shows clearly is that there is a significant improvement in performance when there is a Domain Controller on the remote site.
Now, with iShared being Windows based, we are able to run DCPROMO on the iShared remote appliance itself and turn it into a Domain Controller.
Having a DC on the remote site means that Windows Active Directory authentication traffic does not have to traverse the WAN. It is not a large amount of traffic, but it is directly affected by latency, so the 150ms delay is added to each authentication request, slowing things down.
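
As a rough back-of-the-envelope sketch (the number of authentication round trips per file open below is an assumption for illustration, not something we measured), the effect of latency can be modelled like this:

```python
# Rough model of latency-bound authentication overhead across a WAN.
# ROUND_TRIPS is an assumed figure for illustration; real Kerberos/NTLM
# and SMB exchanges vary by client, server and configuration.

RTT_WAN = 0.150     # seconds: round-trip time across the WAN link
RTT_LAN = 0.001     # seconds: round-trip time to a DC on the local site
ROUND_TRIPS = 12    # assumed authentication round trips per file open

def auth_wait(rtt, round_trips=ROUND_TRIPS):
    """Time spent simply waiting on the network per file open."""
    return rtt * round_trips

print(f"DC across the WAN : {auth_wait(RTT_WAN):.2f}s of waiting per file open")
print(f"DC on the remote site: {auth_wait(RTT_LAN):.2f}s of waiting per file open")
```

Every round trip pays the full 150ms across the WAN, whereas with a local DC that wait effectively disappears.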

What is not shown, primarily because we got bored waiting, is the time it took to load the file across the WAN without iShared: it was taking somewhere over 30 minutes to load, as opposed to the 2 minutes with iShared! So we stopped counting after 30 minutes.

In our small test, we saw a time saving of about 20 seconds, which means each file loads around 20% faster than without a DC on the remote site. It does not take much imagination to see that adding up to pretty big improvements in productivity. If you compare that to the 25+ minutes to load the same file without iShared WAFS, it becomes even more impressive.

More reading:
iShared Page.
MS Domain Controller on remote sites
MS Authentication



Using InMage DR-Scout planned failover between two servers for application recovery and business continuity. 
As we discussed in our previous entry (LINK), InMage is a powerful tool that provides continuous data protection (CDP), business continuity and application recovery, covering both the data and the application itself.
You can very simply restore to any point in time (PIT), so you can restore lost files from a point in time or (and this is clever) from an application-specific consistency point, which could be something as abstract as "Pre end of month invoicing run".

InMage can also provide more advanced recovery options, such as planned or unplanned failover between servers, which is useful for business continuity and application or data migrations.

In this entry we will talk about a planned failover of our web-based application, which might be useful if you need to do physical maintenance on the application server but need the application to continue running. It might also be useful for testing disaster recovery (DR) procedures. As this can all be scheduled through the InMage web GUI, you might schedule a planned failover in the middle of the night to minimise disruption to staff.

We are using the same test environment as in our previous post: a pair of Windows 2003 servers, with the "source" server running a simple MySQL, Apache and PHP based web application. The target server has nothing installed other than the InMage VX agent.

To complete a failover, we need to create a replication pair between the source server's volume and the target server's volume. The data is replicated initially, and then as changes are written to the source volume, those changes (and only the changes, not the entire file again) are replicated to the target server. A change could be undone by using a point in time as your recovery point and restoring to, say, five seconds before the change was made. However, because we can create unique, user-defined consistency points, we can attach a "tag" that identifies a point in time we would like to recover to. So we could create a consistency point called "Prior to updating to v2.45", "Prior to purging dead contacts" or, for our example, "Demo Failover point". This lets us easily find the point in time we want to work from in the web interface and recover to it without having to use a time and date.
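
Conceptually (this is just an illustration of the idea, not InMage's API or on-disk format), a consistency point is a human-readable bookmark into the journal of replicated changes:

```python
# Conceptual sketch only: a consistency point as a named bookmark in a
# journal of replicated changes. Not the DR-Scout API.
from datetime import datetime

journal = []    # ordered (timestamp, change) records replicated to the target
tags = {}       # consistency point name -> the timestamp it marks

def apply_change(change):
    journal.append((datetime.now(), change))

def create_consistency_point(name):
    tags[name] = datetime.now()

def changes_up_to(tag_name):
    """The changes that would be replayed to recover to a named tag."""
    cutoff = tags[tag_name]
    return [change for ts, change in journal if ts <= cutoff]

apply_change("write invoices.db, block 42")
create_consistency_point("Demo Failover point")
apply_change("purge dead contacts")

print(changes_up_to("Demo Failover point"))   # excludes the later purge
```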

The next step is to make the data (and in this case the application) available to the users of the network. We are going to do this with the "push-button" application failover features of InMage DR-Scout, which allow us to do both data recovery and application recovery in an automated way, minimising human error. The actual steps we are going to take are listed below, followed by a sketch of how the scripted part might look:

1) Create a replication pair. In our case \\source\t: to \\target\s:
2) Create a consistency point, we are calling ours "Demo_Failover_Point"
3) "Recover" to \\target\t: at the consistency point.
4) Start MySQL and Apache on the target server (they were restored along with the data in our example).
5) Use InMage to alter the network DNS settings so that the DNS entry for SOURCE points to TARGET.
6) Stop Apache/MySQL on the source server (optional).
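
To give a feel for the scripted part, here is a sketch of how steps 4 to 6 might be carried out with standard Windows tooling. The service names, DNS server, zone and IP address are made-up examples, and steps 1 to 3 are driven from within DR-Scout rather than from a script like this:

```python
# Hypothetical post-failover actions corresponding to steps 4-6 above.
# Service names, DNS server, zone and IP address are illustrative only.
import subprocess

def run(cmd):
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 4) Start the application services on the target server.
run(["net", "start", "MySQL"])
run(["net", "start", "Apache2.2"])

# 5) Repoint the DNS A record for the source server at the target's IP,
#    so users carry on using the same name (dnscmd is the Windows DNS
#    Server command-line tool).
run(["dnscmd", "dns01", "/RecordDelete", "example.local", "source", "A", "/f"])
run(["dnscmd", "dns01", "/RecordAdd", "example.local", "source", "A", "192.168.1.20"])

# 6) Optionally stop the services on the source server.
run(["sc", r"\\source", "stop", "Apache2.2"])
run(["sc", r"\\source", "stop", "MySQL"])
```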

Now, doing this is possible because InMage allows us to create scriptable failovers, over and above the pre-configured supported applications included as standard, such as SQL Server, the System Registry and MS Exchange. We are able to script actions to run before and after data replication on both the target and source servers. In our example, we create the consistency point on the source prior to replication and perform the recovery on the target after replication; stopping the Apache server on the source server is also done afterwards.

Once this is all configured, these steps (along with the ones not mentioned) do not need to be carried out manually, and are not even visible, from a user's perspective.

At this stage it becomes a "push-button failover": you go into the InMage DR-Scout web-based user interface, click on the failover you want (you can configure multiple different failovers) and click START. It's that simple; at that point DR-Scout goes away and gets on with the job. All you need to do is watch the status, maybe read the detailed log, and then access the application at the end to satisfy yourself the job is done.

Now, our configuration is probably not one you would use in production. We are replicating T -> S, which complicates the configuration and adds a recovery step that would not necessarily be required for a DR failover. If we simply replicated T -> T, all we would need to do is set the target volume to a read-write state and start Apache and MySQL, which would give a faster response.

We have ours configured as it is for two reasons:
1) Our recovery does not stop the replication pair. This is important to us, as we demonstrate this failover and would prefer not to have to restart the replication from source to target every time. With our configuration, we can fail over again and again with a minimum of fuss.
2) We can recover/failover to any consistency point with ease, again without breaking the replication pairing. So we are able to fail over to the consistency point created specifically for this example, but we also have failovers configured for consistency points in the past: a point at midnight perhaps, or one just prior to running month-end processing.

Once configured, we think InMage's application failover tools have great potential for use above and beyond disaster recovery.
The features could be used to maintain a test and development environment, "failing over" to a known good point or a standard-build consistency tag. Support could "fail over" to previous versions of an application for troubleshooting with a client, then "fail over" to another version for another client with a couple of simple clicks of the mouse. Sales teams could "fail over" to a clean demonstration state prior to doing client demonstrations.

Customised failovers do take some configuration time and testing, especially compared to the failovers built in as standard, such as MSSQL and Exchange, where it all works "out of the box". Customising failovers for your specific applications will take a little time to get right, but once in place, even complicated failovers are, as the marketing puts it, a "push button" operation.

Further reading:
DR-Scout Page
Using InMage DR Scout to protect Web applications
CDP or Replication for DR and Business Continuity?


IP07 Expo - Meet us there with Mazu Networks. 
http://www.ipexpo.co.uk/
16 - 17 October 2007.
Earls Court, London
Stand 376


Solution Centre will be attending the IP07 Expo being held at Earls Court, London on the 16th and 17th of October 2007. We shall be there with our colleagues from Mazu Networks.

To quote the blurb about Mazu on the IP07 website:
"Mazu Networks provides continuous global visibility into how users, applications, hosts, and devices are behaving on a network, and tells you how their current activity differs from their typical behaviour. Mazu's customers optimize their network infrastructure to support their business, secure their internal networks and maximize application availability.".

We really like the Mazu Networks products and are looking forward to showing them to even more people at the expo. If you are planning on attending the event, please let us know and please do come see us on stand 376.

The expo itself includes the "CIO Briefing", a summit event running alongside the IP07 exhibition. The summit is divided into two clear areas of focus, one on "The Liberated Business" and the other on "How to deliver Performance, Access Control and Agility".

There are some really interesting speakers scheduled and a long list of exhibitors, including VMware, Xensource, Packeteer and Mazu Networks of course.

Should I replicate data or use continuous data protection? 
When your bits are on the line and the whole world is shouting at you to get the system back up, that can rapidly become a very critical decision!

Remember that the whole point of any such protection scheme is the restore/recovery/continuity benefit; that must be the most significant consideration.

With any protection scheme there is always a trade-off between cost and functionality, so it's vital to understand how the business sees the risk and what service levels it requires. The reality is that hardware failure, human error, software corruption and viruses/malware are the most common causes of data loss.

Most organisations can tolerate a few minutes, or tens of minutes, of data loss, and the majority can be without some of their systems for a few minutes (we were told of one a few weeks ago who were happy to be without Exchange for a week! There is always the extreme).

Putting it in context, until recently most people were doing daily backups, so they had 24 hours of data at risk, and restore times were horrible. Picking on Exchange again: find a new box, put the OS on, add it to the domain, set it up the same as the previous machine, put Exchange back, restore the database, probably run a consistency check, run the logs and so on; a couple of days could easily pass by.

So, with a dose of realism, ten minutes either way is not a big deal (usually).

Going back to the original question, replication or continuous data protection?

Replication, however it is achieved, implies the copy is a "replica" of the original data, so corrupt the data and you corrupt the replica. Ah, two copies of corrupt data; that's not good.

Continuous data protection, on the other hand, implies that each change is saved as a separate change, so you can go back to any point in time. In reality, lots of so-called continuous data protection systems don't work like that: they simply replicate, then take snapshots on some schedule. So I'm talking about grown-up continuous data protection, not the "pseudo" version where marketing hype takes over.
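
The difference is easy to see with a toy model (an illustration of the concept only, not of any particular product): a replica only ever holds the current state of the source, while genuine continuous data protection keeps the stream of changes and can rebuild the state as of any earlier moment.

```python
# Toy contrast between straight replication and continuous data protection.
# Purely illustrative; no particular product's behaviour is implied.

class Replica:
    """Mirrors the source: if the source is corrupted, so is the copy."""
    def __init__(self):
        self.state = {}

    def replicate(self, source_state):
        self.state = dict(source_state)

class CDPJournal:
    """Records every change in order, so any earlier state can be rebuilt."""
    def __init__(self):
        self.changes = []            # list of (sequence, key, value)

    def record(self, seq, key, value):
        self.changes.append((seq, key, value))

    def state_as_of(self, seq):
        state = {}
        for s, key, value in self.changes:
            if s <= seq:
                state[key] = value
        return state

source = {"orders.db": "good data"}
replica, journal = Replica(), CDPJournal()
replica.replicate(source)
journal.record(1, "orders.db", "good data")

source["orders.db"] = "corrupt data"   # corruption strikes the source
replica.replicate(source)
journal.record(2, "orders.db", "corrupt data")

print(replica.state)              # {'orders.db': 'corrupt data'} - both copies bad
print(journal.state_as_of(1))     # {'orders.db': 'good data'}    - roll back in time
```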

Having a local copy of data for those day-to-day annoyances ("I deleted the wrong file, can I get it back please?") or for some random data corruption incident is really useful. Having an off-site continuous data protection copy is great for site disasters, especially if you have (perhaps) virtual servers set up on the remote site ready to mount the backup version. There is no reason why such a system should not have you back up and running in 10 minutes with negligible data loss.

