The next five years will see agility become the primary measure of data-centre excellence. Analysts advised that, through 2012, virtualisation will be the most significant factor affecting data centres: it greatly reduces server numbers, space, power and cooling demands, and ultimately enables agility.
"An agile data centre will handle exceptions effectively, but learn from exceptions to improve standards and processes," said Tom Bittman, Gartner vice-president and distinguished analyst. "Agility will become a major business differentiator in a connected world. Business agility requires agility in the data centre, which is difficult as many of the technologies for improving the intelligence and self-management of IT are very immature, but they will evolve over the next ten years."
Within the data centre, agility should be measured in terms that make sense to the business, such as the time and cost to deploy new servers, to install new software or to fix a problem.
Gartner defines agility as the ability of an organisation to sense environmental change and respond efficiently and effectively. However, no organisation will be agile if its infrastructure is not designed for agility. Mr Bittman said: "Agility is the right strategic balance between speed and operational efficiency."
As a core enabler of agility, virtualisation is the abstraction of IT resources in a way that makes it possible to creatively build IT services. While the vast majority of large organisations have started to virtualise their servers, Gartner estimates that currently only 6 per cent of the addressable market is penetrated by virtualisation, a figure set to rise to 11 per cent by 2009. However, the number of virtualised machines deployed on servers is expected to grow from 1.2 million today to 4 million in 2009.
"Virtualisation changes virtually everything," said Mr Bittman. He explained that it is not just about consolidation but also includes transitioning resource management from individual servers to pools, increasing server deployment speeds up to 30 times.
Virtualisation is a major enabler for infrastructure automation, and will help accelerate the trend toward IT operations process automation.
However, Gartner warned that tools alone are not a substitute for a good process and made the following recommendations to organisations planning or implementing virtualisation:
• When looking at IT projects, balance the virtualised and unvirtualised services. Also look at the investments and trade-offs;
• Reuse virtualised services across the portfolio. Every new project does not warrant a new virtualisation technology or approach;
• Understand the impact of virtualisation on the project's life cycle. In particular, look for licensing, support and testing constraints;
• Focus not just on virtualisation platforms, but also on the management tools and the impact on operations;
• Look at emerging standards for the management and virtualisation space.
Mr Bittman concluded: "IT organisations should have strategic plans in place that include agility improvements. Ultimately, agility requirements are determined and valued by the business."
Monitoring Virtual Environments
[ add comment ] ( 87 views ) | [ 0 trackbacks ] | permalink | related link
"There is a lot of talk by vendors about aligning IT with business objectives, which has mostly been just lip service," stated Nicola Sanna, president and CEO of Netuitive. "With Netuitive Service Analyser 2.0, business-to-IT alignment is truly built into the fabric of the product. Netuitive doesn't just have the BSM industry's only self-learning engine, it now has an enterprise-class interface and feature set that goes beyond what any other vendor is doing."
Netuitive Service Analyser is the only self-learning and continuously adaptive software for automating BSM that doesn't require correlation rules, scripts or manual thresholds. By automatically learning the performance patterns of an organisation's IT systems and services using statistical algorithms, Netuitive Service Analyser gives companies the ability to automatically track, correlate and understand any business and systems data, enabling organisations to see in real-time how infrastructure performance issues affect bottom line business performance. Netuitive Service Analyser 2.0 includes several new enhancements, making it the only solution that can automatically tie IT performance to business impact:
• Multiview Dashboard – Netuitive Service Analyser now provides three distinct performance perspectives – IT infrastructure, customer experience and business impact.
• Real-time Service Level Monitoring – Companies can now monitor Service Level Objectives (SLOs) in context with infrastructure performance in real-time, enabling companies to tie performance to business metrics automatically.
• Alarm Prioritisation – Netuitive Trusted Alarms can now be prioritised according to criticality of the service. Alarms are categorised based on severity, type, health and availability, enabling companies to make informed business decisions in real-time.
• BSM Topology View – Netuitive Service Analyser 2.0 provides topology views of the business services, where users can intuitively view the relationships between service components. From an easy-to-navigate topographical service map, users can drill down to see how supporting components are performing.
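Netuitive's statistical engine is proprietary, but the general idea behind self-learned baselines replacing manual thresholds can be sketched. The following is a minimal illustration of rolling-baseline anomaly detection; all names and parameters here are our own assumptions, not Netuitive's implementation:

```python
import statistics
from collections import deque

def make_baseline_monitor(window=60, tolerance=3.0):
    """Track a metric's rolling baseline; flag samples that stray outside it.

    Instead of a manually set threshold, 'normal' is learned from the
    most recent `window` observations.
    """
    history = deque(maxlen=window)

    def observe(value):
        alarm = False
        if len(history) >= 10:  # wait for enough samples to form a baseline
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
            alarm = abs(value - mean) > tolerance * stdev
        history.append(value)
        return alarm

    return observe

monitor = make_baseline_monitor()
for v in [100, 102, 99, 101, 100, 98, 103, 100, 101, 99, 100]:
    monitor(v)          # learn the metric's normal behaviour
print(monitor(500))     # a large spike stands out against the baseline → True
```

The attraction of this style of monitoring is exactly what the press release claims: no per-metric threshold to configure, and the baseline adapts as behaviour drifts.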
Improve iShared performance across the WAN with local Active Directory authentication via a Domain Controller on the remote site. Recently we did some performance testing to prove the hypothesis that a domain controller on a remote site will speed access to files on a Packeteer iShared cache. All of us in the office believed that a domain controller at the remote end of the WAN would speed things up, but we had never tested that belief.
So, we set up a test environment across a 256k WAN with about 150ms of latency and ran some tests. The chart below shows our results:
The results show clearly that performance improves markedly when there is a Domain Controller on the remote site.
Because iShared is Windows based, we can run DCPROMO on the remote iShared appliance itself and turn it into a Domain Controller.
Having a DC on the remote site means that Windows Active Directory authentication traffic does not have to traverse the WAN. This is not a large amount of traffic, but it is directly affected by latency, so the 150ms delay applies to each authentication request and slows things down.
What is not shown, primarily because we got bored waiting, is the time it took to load the file across the WAN without iShared: it was taking somewhere over 30 minutes, as opposed to 2 minutes with iShared, so we stopped counting after 30 minutes.
In our small test we saw a time saving of about 20 seconds, meaning each file loads around 20 per cent faster than without a DC on the remote site. It does not take much imagination to see pretty big productivity gains. Compared to the 25+ minutes to load the same file without iShared WAFS, it becomes even more impressive.
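The latency arithmetic behind this is simple enough to sketch. Each authentication exchange pays the full WAN round-trip time regardless of payload size; the round-trip count below is an illustrative assumption (the real number depends on the protocol and caching), not a measured figure:

```python
# Back-of-the-envelope model of why remote authentication round trips hurt:
# every exchange pays the WAN latency, regardless of how little data it carries.

def auth_overhead_ms(round_trips: int, latency_ms: float) -> float:
    """Time spent purely on latency for a series of authentication exchanges."""
    return round_trips * latency_ms

trips = 10  # assumed round trips for a session's auth negotiation
print(auth_overhead_ms(trips, 150.0))  # across the 150 ms WAN -> 1500.0 ms
print(auth_overhead_ms(trips, 1.0))    # against a local DC    -> 10.0 ms
```

Ten round trips at 150ms each is 1.5 seconds of pure latency per session, which is broadly in line with the ~20-second saving we observed across many small exchanges.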
MS Domain Controller on remote sites
Using InMage DR-Scout planned failover between two servers for application recovery and business continuity. As we discussed in our previous entry (LINK), InMage is a powerful tool that provides continuous data protection (CDP), business continuity and application recovery, covering both data and the application itself.
You can very simply restore to any point in time (PIT), so you can restore lost files from a point in time or (and this is clever) from an application-specific consistency point, which could be something as abstract as "Pre end of month invoicing run".
InMage can also provide more advanced recovery options, such as planned or unplanned failover between servers, which is useful for business continuity and application or data migrations.
In this entry we will talk about a planned failover of our web-based application, which might be useful if you need to do physical maintenance on the application server but need the application to continue running. It might also be useful for testing disaster recovery (DR) procedures. As this can all be scheduled through the InMage web GUI, you might schedule a planned failover in the middle of the night to minimise disruption to staff.
We are using the same test environment as used in our previous post. We have a pair of Windows 2003 servers, with the "source" server running a simple MySQL, Apache and PHP based web application. The target server does not have anything other than the InMage VX agent installed.
To complete a failover, we need to create a replication pair between the source server's volume and the target server's volume. The data is replicated initially, and then as changes are written to the source volume, those changes (and only the changes, not the entire file again) are replicated to the target server. A change could be recovered by restoring to a point 5 seconds before the time the change was made, using a point in time as your recovery point. However, as we can create unique user-defined consistency points, we can place a "tag" to easily identify a point in time we'd like to recover to. So we could create a consistency point called "Prior to updating to v2.45" or "Prior to purging dead contacts" or, for our example, "Demo Failover point". This allows us to easily find the point in time we want to work from via the web interface and recover to it without having to use a time and date.
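The consistency-point idea can be illustrated with a toy change journal. This is our own sketch of the general CDP concept, not InMage's implementation; every write is timestamped, and named tags bookmark moments worth recovering to:

```python
import time

class ChangeJournal:
    """Toy CDP-style journal: every change is timestamped; named tags
    (consistency points) bookmark recovery-worthy moments."""
    def __init__(self):
        self.entries = []   # list of (timestamp, change) pairs
        self.tags = {}      # tag name -> timestamp

    def record(self, change, ts=None):
        self.entries.append((ts if ts is not None else time.time(), change))

    def tag(self, name, ts=None):
        self.tags[name] = ts if ts is not None else time.time()

    def recover(self, tag_name):
        """Return every change up to the tagged consistency point."""
        cutoff = self.tags[tag_name]
        return [c for t, c in self.entries if t <= cutoff]

j = ChangeJournal()
j.record("write A", ts=1.0)
j.tag("Demo_Failover_Point", ts=2.0)  # bookmark a known-good moment by name
j.record("write B", ts=3.0)
print(j.recover("Demo_Failover_Point"))  # -> ['write A']
```

The value of the tag is exactly as described above: recovery is driven by a meaningful name ("Demo_Failover_Point") rather than by hunting for the right time and date.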
The next step is to make the data (and in this case the application) available to the users of the network. We are going to do this with the "push-button" application failover features of InMage DR-Scout. These allow us to do both data recovery and application recovery in an automated way, minimising human error. The actual steps we are going to take are...
1) Create a replication pair. In our case \\source\t: to \\target\s:
2) Create a consistency point, we are calling ours "Demo_Failover_Point"
3) "Recover" to \\target\t: at the consistency point.
4) Start MySql and Apache on the target server (they were restored along with the data in our example).
5) Use InMage to alter the network DNS settings so that the DNS entry for SOURCE now points to TARGET.
6) Stop Apache/MySQL on source server (optional)
Now, this is possible because InMage allows us to create scriptable failovers, over and above the included pre-configured supported applications, which include SQL Server, the System Registry and MS Exchange, for example. We are able to script actions to occur before and after data replication on both the target and source servers. In our example, we create the consistency point on the source prior to replication and perform the recovery onto the target post replication. Stopping the Apache server on the source server is also done afterwards.
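The scripted sequence above might be orchestrated along the following lines. This is a dry-run sketch of the general pre/post-hook pattern; the plan entries, host names and commands are illustrative, not DR-Scout's actual API:

```python
import subprocess

# Hypothetical failover plan mirroring the numbered steps above.
# Each entry: (host the action runs on, pre/post replication phase, action).
FAILOVER_PLAN = [
    ("source", "pre",  "create consistency point Demo_Failover_Point"),
    ("target", "post", "recover volume T: at Demo_Failover_Point"),
    ("target", "post", "net start MySQL && net start Apache2"),
    ("dns",    "post", "repoint application DNS record at TARGET"),
    ("source", "post", "net stop Apache2 && net stop MySQL"),
]

def run_plan(plan, execute=False):
    """Walk the plan in order; with execute=False this is a dry run."""
    results = []
    for host, phase, action in plan:
        if execute:
            # In a real script each action would be a genuine command or
            # API call; here we just shell out as a placeholder.
            subprocess.run(action, shell=True, check=True)
        results.append(f"[{phase}] {host}: {action}")
    return results

for line in run_plan(FAILOVER_PLAN):
    print(line)
```

Encoding the sequence as data rather than ad-hoc manual steps is what makes the "push-button" behaviour possible: the same ordered plan runs identically every time, which is the human-error point made above.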
Once this is all configured, these steps (along with the ones not mentioned) require no manual action and are not visible from a user perspective.
At this stage it becomes a "push-button failover": you go into the InMage DR-Scout web-based user interface, click on the failover you want (you can configure multiple different failovers) and click START. It's that simple; at that point DR-Scout goes away and gets on with the job. All you need to do is watch the status, maybe read the detailed log, and then access the application at the end to satisfy yourself the job is done.
Now, our configuration is probably not one you would use in production. We are replicating T: to S:, which complicates the configuration and adds a recovery step that would not necessarily be required to provide a DR failover. If we simply replicated from T: to T:, all we would need to do is set the target volume to a read-writeable state and start Apache and MySQL. This would provide a faster response.
We have configured ours this way for two reasons.
1) Our recovery does not stop the replication pair. This is important to us because we demonstrate this failover and would prefer not to restart the replication from source to target every time. With our configuration, we can fail over again and again with a minimum of fuss.
2) We can recover/fail over to any consistency point with ease, again without breaking the replication pairing. So we are able to fail over to the consistency point created specifically for this example, but we also have failovers configured to consistency points in the past: a point at midnight, perhaps, or one just prior to running month-end processing.
Once configured, we think InMage's application failover tools have great potential for use above and beyond disaster recovery.
The features could be used to maintain a test and development environment, "failing over" to a known good point or a standard-build consistency tag. Support could "fail over" to a previous version of an application for troubleshooting with a client, then "fail over" to another version for another client with a couple of simple mouse clicks. Sales teams could "fail over" to a clean demonstration state prior to doing client demonstrations.
Customised failovers do take some configuration time and testing, especially compared to the ease of the standard built-in failovers, such as MSSQL and Exchange, where it all works "out of the box". Customising failovers for your specific applications will take a little time to get right, but once in place, even complicated failovers are, as the marketing puts it, a "push-button" operation.
Using InMage DR Scout to protect Web applications
CDP or Replication for DR and Business Continuity?
16 - 17 October 2007.
Earls Court, London
Solution Centre will be attending the IP07 Expo being held at Earls Court, London on the 16th and 17th of October 2007. We shall be there with our colleagues from Mazu Networks.
To quote the blurb about Mazu on the IP07 website:
"Mazu Networks provides continuous global visibility into how users, applications, hosts, and devices are behaving on a network, and tells you how their current activity differs from their typical behaviour. Mazu's customers optimize their network infrastructure to support their business, secure their internal networks and maximize application availability."
We really like the Mazu Networks products and are looking forward to showing them to even more people at the expo. If you are planning on attending the event, please let us know and please do come see us on stand 376.
The expo itself includes the "CIO Briefing", a summit event running alongside the IP'07 exhibition. The summit is divided into two clear areas of focus: "The Liberated Business" and "How to deliver Performance, Access Control and Agility".
There are some really interesting speakers scheduled and a long list of exhibitors, including VMware, Xensource, Packeteer and Mazu Networks of course.