Using Packeteer iShared WAFS to improve application performance over the WAN. 
WAFS is predominantly a technology used by users. By this I mean that in a typical scenario, your users in the remote locations access files directly from the cache and get nice fast access to files held on the central fileservers.

Recently, however, we have worked with several firms to enable applications, rather than people, to access the cache. In one case we ran a live test where the cache was used exclusively by an application and users had no access to files on the cache whatsoever.

Overall, it has proven quite successful.
Caching data for applications provides the same speed benefits that users experience. We have seen the typical speed improvements that we have documented previously (previous blog entry), which have delivered better performance for users but have also shown measurable time savings with large processing jobs.

Here are a couple of examples:

1) Web-based knowledge/document management system.

A lot of the clients we work with use knowledge management systems, especially those involved with large client projects, such as design firms or construction companies. Typically built on a platform with a web front end and a database, such as IIS and SQL Server (SharePoint, for example), these systems allow users to "check in" and "check out" documents via a client or web interface. They often maintain copies of the data at remote sites for users at, say, the construction site itself or in remote offices.
We have used iShared to maintain this data; it has served the data quickly and reliably, and saved time, especially where the file would not normally already be on the remote site. Most of these systems use simple mechanisms to transfer the data from the central data store to the remote data store, using full file copies and standard protocols such as HTTP or CIFS/SMB.
By using iShared we gained considerable speed improvements just in transferring these files, as the iShared system uses several optimisation methods and its own efficient protocol to send the files.
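As a rough illustration of why repeated full-file copies hurt over a WAN (the link speed and file size below are example figures, not taken from a specific client):

    FILE_MB = 20        # example: a 20 MB drawing set or document bundle
    LINK_KBPS = 256     # example: a 256 kbit/s branch-office link

    seconds = (FILE_MB * 8 * 1024) / LINK_KBPS
    print(f"full copy across the WAN: ~{seconds / 60:.0f} minutes")
    print("served from the local iShared cache: effectively LAN speed")

Anything that avoids pushing the whole file across that link every time, whether by caching it locally or by sending only the changes, wins back most of that time.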

We have struck some "issues" with some packages, as obviously they were not built with a smart cache like iShared in mind. On one occasion we experienced some file loss because the knowledge-base software deleted files before copying them from the data centre. The iShared system obediently propagated those deletions to the data centre, and then the software tried to copy the files it had just asked to be deleted... bang!

It highlights the need to be cautious when implementing WAFS for applications: unless you have a thorough knowledge of how WAFS works and how the application interacts with it, subtle but catastrophic problems can arise.
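To make that failure mode concrete, here is a toy sketch (our own illustration, not the actual behaviour of iShared or of the knowledge-base package) of why the order of the delete and the copy matters when a write-through cache sits in between:

    # Toy model: a write-through cache faithfully propagates deletes to the
    # origin, so deleting before copying can destroy the only copy.
    origin = {"report.doc": b"v1 data"}    # central file store
    cache = dict(origin)                   # remote write-through cache

    def safe_refresh(name):
        """Copy-then-delete: fetch the fresh copy before removing anything."""
        cache[name] = origin[name]
        # only now is it safe to tidy up any stale local copies

    def unsafe_refresh(name):
        """Delete-then-copy: the delete reaches the origin first."""
        del cache[name]               # local delete...
        del origin[name]              # ...propagated to the central store
        cache[name] = origin[name]    # KeyError: the file is already gone

    safe_refresh("report.doc")
    try:
        unsafe_refresh("report.doc")
    except KeyError:
        print("file lost: deleted at the origin before it was copied")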

2) Data processing and report runs.
Although it seems mad in this day and age, running batch jobs to process data and/or produce reports is still a necessary evil. Often the location needing the reports is not the one producing the data. In the manufacturing sector, for example, "head office" might need reports based on data being produced at the "factory". Moving the report runs to the factory might be one solution, but the reports would still need to be moved to head office at some point.
WAFS can help in this scenario too. iShared is able to optimise file transfers and can also optimise and accelerate application protocols, for example SQL traffic. We have done practical and lab experiments where we have optimised SQL queries (see here for some details) which have produced some marked improvements. We have also worked on examples where data had to be exported to XML before being imported into another application; this again is handled well by iShared. The "byte-level differencing" in fact becomes a considerable optimisation, especially when exporting to the same file on a regular basis, as only the newly added data is transferred.
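We do not have visibility of Packeteer's actual differencing algorithm, but a toy block-level differencer shows why re-exporting to the same file costs so little over the wire:

    import hashlib

    BLOCK = 4096  # arbitrary block size for the illustration

    def block_hashes(data: bytes):
        """Hash the file in fixed-size blocks."""
        return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
                for i in range(0, len(data), BLOCK)]

    def bytes_to_transfer(old: bytes, new: bytes) -> int:
        """Bytes a simple block-level differencer would need to send."""
        old_h, new_h = block_hashes(old), block_hashes(new)
        changed = sum(1 for i, h in enumerate(new_h)
                      if i >= len(old_h) or h != old_h[i])
        return changed * BLOCK

    # Yesterday's XML export plus one day of new records appended at the end.
    yesterday = b"<row/>" * 200_000
    today = yesterday + b"<row/>" * 2_000

    print("full copy:        ", len(today), "bytes")
    print("differenced copy: ", bytes_to_transfer(yesterday, today), "bytes")

With only the day's new records appended, the differenced transfer is a small fraction of the full export.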

The biggest issues we have come across when using WAFS to optimise applications have been related to the internal workings of iShared and of the specific applications. We've discovered that running iShared in a "normal" configuration, as you might find it "out of the box", is often not ideal and can in fact cause some of the issues we have observed. This is because the application's behaviour was never designed to work with a cache (or at least not with a cache that the application itself does not maintain). On occasion we have had to advise clients that WAFS, although providing some impressive speed gains, would simply not be a sensible solution, because we discovered an incompatibility in their application that would have caused serious problems.

If you are considering WAFS as a way to improve application performance you are probably on the right track; just make sure you have a detailed understanding of both your application and WAFS, as getting it right can take some tweaking and getting it wrong could be P45 territory.

Product Page:
Packeteer iShared WAFS

Gartner Says Agility Will Become the Primary Measure of Data Centre Excellence by 2012  
Analysts Examine Business Agility and Data-Centre Virtualisation at Gartner's Data Centre Summit 2007, 22-24 October 2007, London

The next five years will see agility become the primary measure of data-centre excellence. Analysts advised that through 2012 virtualisation will be the most significant factor affecting data centres: it greatly reduces the number of servers and the space, power and cooling demands, and ultimately enables agility.

"An agile data centre will handle exceptions effectively, but learn from exceptions to improve standards and processes," said Tom Bittman, Gartner vice-president and distinguished analyst. "Agility will become a major business differentiator in a connected world. Business agility requires agility in the data centre, which is difficult as many of the technologies for improving the intelligence and self-management of IT are very immature, but they will evolve over the next ten years."

Within the data centre, agility should be measured in terms that make sense to the business, such as the time and cost to deploy new servers, to install new software or to fix a problem.

Gartner defines agility as the ability of an organisation to sense environmental change and respond efficiently and effectively. However, no organisation will be agile if its infrastructure is not designed for agility. Mr Bittman said: "Agility is the right strategic balance between speed and operational efficiency."

As a core enabler of agility, virtualisation is the abstraction of IT resources in a way that makes it possible to creatively build IT services. While the vast majority of large organisations have started to virtualise their servers, Gartner estimates that currently only 6 per cent of the addressable market is penetrated by virtualisation, a figure set to rise to 11 per cent by 2009. However, the number of virtualised machines deployed on servers is expected to grow from 1.2 million today to 4 million in 2009.

"Virtualisation changes virtually everything," said Mr Bittman. He explained that it is not just about consolidation but also includes transitioning resource management from individual servers to pools, increasing server deployment speeds up to 30 times.

Virtualisation is a major enabler for infrastructure automation, and will help accelerate the trend toward IT operations process automation.

However, Gartner warned that tools alone are not a substitute for a good process and made the following recommendations to organisations planning or implementing virtualisation:

• When looking at IT projects, balance the virtualised and unvirtualised services. Also look at the investments and trade-offs;
• Reuse virtualised services across the portfolio. Every new project does not warrant a new virtualisation technology or approach;
• Understand the impact of virtualisation on the project's life cycle. In particular, look for licensing, support and testing constraints;
• Focus not just on virtualisation platforms, but also on the management tools and the impact on operations;
• Look at emerging standards for the management and virtualisation space.

Mr Bittman concluded: "IT organisations should have strategic plans in place that include agility improvements. Ultimately, agility requirements are determined and valued by the business."

More Reading:

Cassatt Collage
VMware
XenSource
Virtuozzo
Monitoring Virtual Environments

Netuitive Unveils Service Analyzer 2.0 with Real-time Service Level Monitoring. 
Netuitive has unveiled the newest version of its self-learning performance management software for BSM, Netuitive Service Analyser 2.0. This release significantly expands the enterprise feature set and promotes usability across organisational silos by analysing the environment from multiple perspectives: IT infrastructure, customer experience and business impact. In addition, Netuitive Service Analyser 2.0 now incorporates real-time service level monitoring, alarms prioritised according to business impact and BSM topology views.

"There is a lot of talk by vendors about aligning IT with business objectives, which has mostly been just lip service," stated Nicola Sanna, president and CEO of Netuitive. "With Netuitive Service Analyser 2.0, business-to-IT alignment is truly built into the fabric of the product. Netuitive doesn't just have the BSM industry's only self-learning engine, it now has an enterprise-class interface and feature set that goes beyond what any other vendor is doing."

Netuitive Service Analyser is the only self-learning and continuously adaptive software for automating BSM that doesn't require correlation rules, scripts or manual thresholds. By automatically learning the performance patterns of an organisation's IT systems and services using statistical algorithms, Netuitive Service Analyser gives companies the ability to automatically track, correlate and understand any business and systems data, enabling organisations to see in real-time how infrastructure performance issues affect bottom line business performance. Netuitive Service Analyser 2.0 includes several new enhancements, making it the only solution that can automatically tie IT performance to business impact:

• Multiview Dashboard – Netuitive Service Analyser now provides three distinct performance perspectives – IT infrastructure, customer experience and business impact.
• Real-time Service Level Monitoring – Companies can now monitor Service Level Objectives (SLOs) in context with infrastructure performance in real-time, enabling companies to tie performance to business metrics automatically.
• Alarm Prioritisation – Netuitive Trusted Alarms can now be prioritised according to criticality of the service. Alarms are categorised based on severity, type, health and availability, enabling companies to make informed business decisions in real-time.
• BSM Topology View – Netuitive Service Analyser 2.0 provides topology views of the business services, where users can intuitively view the relationships between service components. From an easy-to-navigate topographical service map, users can drill down to see how supporting components are performing.
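The release does not describe the statistical algorithms themselves, but the general idea behind a self-learning baseline, as opposed to a hand-set threshold, can be shown with a toy example (ours, not Netuitive's):

    # Toy self-learning baseline: an exponentially weighted moving average and
    # deviation replace a manually configured static threshold. This is an
    # illustration of the general idea only, not Netuitive's algorithm.
    def ewma_baseline(samples, alpha=0.2, tolerance=4.0, warmup=4):
        """Yield (value, alarm) pairs; alarm when a sample strays too far
        from the learned mean, measured in learned deviations."""
        mean, dev = samples[0], 0.0
        for i, x in enumerate(samples):
            alarm = i >= warmup and abs(x - mean) > tolerance * max(dev, 1.0)
            yield x, alarm
            mean = alpha * x + (1 - alpha) * mean            # learn the level
            dev = alpha * abs(x - mean) + (1 - alpha) * dev  # learn the spread

    response_times_ms = [120, 118, 125, 122, 119, 450, 121, 123]
    for value, alarm in ewma_baseline(response_times_ms):
        print(value, "ALARM" if alarm else "ok")

Only the 450ms spike is flagged; nobody had to decide in advance where the threshold should sit.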


Improve iShared performance across the WAN with local Active Directory authentication via a Domain Controller on the remote site. 
Recently we did some performance testing to prove the hypothesis that having a domain controller on a remote site will speed access to files on a Packeteer iShared cache. All of us in the office believed that having a domain controller onsite at the remote end of the WAN would speed things up, but we had never tested our belief.

So, we set up a test environment across a 256k WAN with about 150ms of latency and did some testing. The chart below shows our results:



What it shows pretty clearly is that there is a marked improvement in performance when there is a domain controller on the remote site.
Now, with iShared being Windows based, we are able to run DCPROMO on the remote iShared appliance itself and turn it into a domain controller.
Having a DC on the remote site means that the Windows Active Directory authentication traffic does not have to traverse the WAN. This is not a large amount of traffic, but it is directly affected by latency, so the 150ms delay adds to each authentication request, slowing things down.
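As a rough back-of-the-envelope sketch (the number of round trips per authentication and the files-per-session figure are assumptions on our part; the real numbers vary with Kerberos/NTLM and the SMB dialect in use):

    # Rough estimate only: ROUND_TRIPS_PER_AUTH and FILES_OPENED are assumed
    # figures, not measurements from our test.
    WAN_RTT_S = 0.150          # our 256k test link had roughly 150ms of latency
    ROUND_TRIPS_PER_AUTH = 6   # assumed round trips per authentication exchange
    FILES_OPENED = 20          # assumed files opened in a working session

    wan_overhead = WAN_RTT_S * ROUND_TRIPS_PER_AUTH * FILES_OPENED
    print(f"authentication delay added by the WAN: ~{wan_overhead:.0f} seconds")
    # With a DC on the remote site those round trips stay on the LAN
    # (sub-millisecond), so this overhead all but disappears.

With the assumed figures above that is roughly 18 seconds of pure waiting, which is in the same ballpark as the roughly 20-second saving we measured (more on that below).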

What is not shown, primarily because we got bored waiting, is the time it took to load the file across the WAN without iShared: it was taking somewhere over 30 minutes, as opposed to the 2 minutes with iShared, so we stopped counting after 30 minutes.

In our small test we see a time saving of about 20 seconds, which means each file loads roughly 20% faster than without a DC on the remote site. It does not take much imagination to see pretty big improvements in productivity. Compared to the 30+ minutes it took to load the same file without iShared WAFS, it becomes even more impressive.

More reading:
iShared Page.
MS Domain Controller on remote sites
MS Authentication



Using InMage DR-Scout planned failover between two servers for application recovery and business continuity. 
As we have discussed in our previous entry (LINK), InMage is a powerful tool able to provide continuous data protection (CDP), business continuity and application recovery including both data and the application itself.
You can very simply restore to any point in time (PIT), so you can restore lost files from a moment in time or (and this is clever) from an application-specific consistency point, which could be something as abstract as "Pre end of month invoicing run".

InMage can also provide more advanced recovery options, such as planned or unplanned failover between servers, which is useful for business continuity and application or data migrations.

In this entry we will talk about a planned failover of our web-based application, which might be useful if you need to do physical maintenance on the application server but need the application to continue running. It might also be useful for testing disaster recovery (DR) procedures. As this can all be scheduled through the InMage web GUI, you might schedule a planned failover in the middle of the night to minimise disruption to staff.

We are using the same test environment as used in our previous post. We have a pair of Windows 2003 servers, with the "source" server running a simple MySQL, Apache and PHP based web application. The target server does not have anything other than the InMage VX agent installed.

To complete a failover, what we need to do is create a replication pair between the source server's volume and the target server's volume. The data is replicated initially, and then as changes are written to the source volume, the changes (and only the changes, not the entire file again) are replicated to the target server. A change could be undone by restoring to a point 5 seconds before the change was made, using a point in time as your recovery point. However, as we can create unique user-defined consistency points, we can put a "tag" on a point in time to identify it easily. So we could put a consistency point called "Prior to updating to v2.45" or "Prior to purging dead contacts" or, for our example, "Demo Failover point". This allows us to easily find the point in time we want to work from via the web interface and recover to it without having to use the time and date.
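As a toy illustration of the idea (this is only a model of a change journal with named consistency points; it is not how DR-Scout stores anything internally):

    import time

    journal = []   # ordered list of (timestamp, block, data) changes
    tags = {}      # consistency-point name -> position in the journal

    def write(block, data):
        """Record a change as it is replicated."""
        journal.append((time.time(), block, data))

    def tag(name):
        """Mark the current journal position as a named consistency point."""
        tags[name] = len(journal)

    def recover(name):
        """Rebuild the volume image as it stood at a named consistency point."""
        volume = {}
        for _, block, data in journal[:tags[name]]:
            volume[block] = data
        return volume

    write("customers.frm", "v1")
    write("invoices.frm", "v1")
    tag("Prior to purging dead contacts")
    write("customers.frm", "v2 (contacts purged)")

    print(recover("Prior to purging dead contacts"))
    # -> {'customers.frm': 'v1', 'invoices.frm': 'v1'}

In this toy model the named tag is just a bookmark into the change journal, which is why recovering to a tag is as easy as recovering to a time and date.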

The next step is then to make the data (and in this case the application) available to the users of the network. We are going to do this with the "push-button" application failover features of InMage DR-Scout. These allow us to do both data recovery and application recovery in an automated way, minimising human error. The actual steps we are going to take are:

1) Create a replication pair. In our case \\source\t: to \\target\s:
2) Create a consistency point; we are calling ours "Demo_Failover_Point"
3) "Recover" to \\target\t: at the consistency point.
4) Start MySql and Apache on the target server (they were restored along with the data in our example).
5) Use InMage to alter the network DNS settings so that the entry for SOURCE now points to TARGET.
6) Stop Apache/MySQL on source server (optional)

Now, doing this is possible because InMage allows us to create scriptable failovers, over and above the included pre-configured supported applications such as SQL Server, the system registry and MS Exchange. We are able to script actions to occur before and after data replication on both the target and source servers. In our example we create the consistency point on the source prior to replication and perform the recovery on the target post replication. Stopping the Apache server on the source is also done afterwards.
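As a sketch of the kind of post-recovery script you might attach to such a failover (the service names, DNS server and zone below are placeholders from our lab; the scripting hook itself is InMage's, this is just the sort of thing we put in it):

    import subprocess

    # Illustrative post-recovery script for the target server. Service names,
    # the DNS server and the zone are placeholders and will differ elsewhere.
    SERVICES = ["MySQL", "Apache2.2"]   # Windows service names on the target
    DNS_SERVER = "dc01"                 # the DC hosting the internal zone
    ZONE, HOST, TARGET_IP = "corp.local", "source", "192.168.1.20"

    def run(cmd):
        print(">", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Bring the restored application up on the target server.
    for service in SERVICES:
        run(["net", "start", service])

    # 2. Repoint the application's DNS name at the target so users follow it.
    run(["dnscmd", DNS_SERVER, "/RecordDelete", ZONE, HOST, "A", "/f"])
    run(["dnscmd", DNS_SERVER, "/RecordAdd", ZONE, HOST, "A", TARGET_IP])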

Once this is all configured, none of these steps (along with the ones not mentioned) needs to be performed manually, or is even visible, from a user perspective.

At this stage it becomes a "push button failover": you go into the InMage DR-Scout web-based user interface, click on the failover you want (you can configure multiple different failovers) and click START. It's that simple; at that point DR-Scout goes away and gets on with the job. All you need to do is watch the status, maybe read the detailed log, and then access the application at the end to satisfy yourself the job is done.

Now, our configuration is probably not one you would use in production. We are replicating T: -> S:, which complicates the configuration and adds a recovery step that would not necessarily be required to provide a DR failover. If we simply replicated from T: -> T:, all we would need to do is set the target volume to a read/write state and start Apache and MySQL. This would provide a faster response.

We have ours configured as it is for two reasons.
1) Our recovery does not stop the replication pair. This is important to us as we demonstrate this failover regularly and would prefer not to have to restart the replication from source to target every time. With our configuration, we can fail over again and again with a minimum of fuss.
2) We can recover/fail over to any consistency point with ease, again without breaking the replication pairing. So we are able to fail over to the consistency point created specifically for this example, but we also have failovers configured to consistency points in the past, so a point at midnight perhaps, or one just prior to running month-end processing.

Once configured, we think InMage's application failover tools have great potential for use above and beyond disaster recovery.
The features could be used to maintain a test and development environment, "failing over" to a known good point or a standard-build consistency tag. Support could "fail over" to previous versions of applications for troubleshooting with a client, then "fail over" to another version for another client with a couple of simple mouse clicks. Sales teams could "fail over" to a clean demonstration state before doing client demonstrations.

Customised failovers do take some configuration time and testing, especially compared with the built-in failovers such as those for MSSQL and Exchange, which work "out of the box". Customising failovers for your specific applications will take a little time to get right, but once in place, even complicated failovers are, as the marketing puts it, a "push button" operation.

Further reading:
DR-Scout Page
Using InMage DR Scout to protect Web applications
CDP or Replication for DR and Business Continuity?

