IT Departments – the least automated part of an organisation? 
There is a cracking blog post by Ken Oestreich (http://fountnhead.blogspot.com/2007/05/ ... -2007.html) on Forrester Research’s 2007 IT Forum, which stimulated some interesting thoughts. It sounds like much of the conference was about aligning IT with the business (no brainer), summed up by Forrester CEO George Colony: "There are no IT projects anymore, just Business Projects".

Ken quoted from a presentation given by Robert Beauchamp, CEO of BMC Software. Like the cobbler’s children with no shoes, IT (alright, BT – Business Technology) organisations in enterprises are arguably the least automated departments around. ERP is automated. Finance is automated. Customer interaction is automated. But IT is still manually glued together, with operations costs continuing to outpace capital investments.

To reinforce the point he showed a graph from IDC showing that OpEx costs are rising at an alarming rate, reported elsewhere to be twice the IT spend.



So, there are two thought chains that fall out of those observations. The first is this idea of the shoeless children, and it is just so true! We work with all this high technology, which helps (or in some cases allows) the rest of the organisation to do their thing in as automated and effective a way as possible, and yet we shuffle CDs, manually deploy images, or at best have some provisioning tool to help build servers, but we still have to fiddle with the VLANs and the LUN masking or whatever.

Purchase orders for materials can be automatically placed, based on stock levels and sales forecasts, yet we get a call in the middle of the night because the Exchange server has fallen over.

We (hopefully) get paid automatically at the end of the month, but if the application is resource hungry it usually requires someone to set up a new box or tweak some share settings or similar.

Much of what we do is pitifully bad. It can take days to manually build a server and get it fully patched, mounted in the network with the right backup agents, AV and so on. When something goes wrong we spend ages trawling through logs trying to work out what changed. Alarms go off all over the place for no obvious reason other than something has crossed some arbitrary threshold. No wonder running costs for IT departments are so high. For some people this must provide the buzz of a big empire, but from the company’s point of view it is bad news. If this was the right way to do things, accountants would still be handwriting ledgers in double-entry bookkeeping. There is a reason no one would think such a thing is reasonable: it is NOT reasonable.

It gets even worse when you get round to things like asset utilisation. Most non-virtualised environments run at something like 10% utilisation. Even virtualised ones (from the feedback we get) only run at 20% to 30%. Let’s be generous and say a good department runs at 50% utilisation (if that is you, please let us know – we would love to know how you do it). Imagine the fuss if half the people in the organisation spent their entire “working” lives sat in the canteen drinking coffee, or the airline kept half its fleet on the ground.

What are the options to put shoes back on the children’s feet? Having acknowledged the problem, the objective is presumably threefold:

• Get better asset utilisation
• Reduce fire fighting and evolve proactive management
• Automate, like the rest of the organisation

For the first, virtualisation is a natural consideration and a look at our study on consolidation and virtualisation might be food for thought.

For the second, everyone spends loads of money on monitoring systems that tell them things they don’t care about and send them on wild goose chases. Getting the false alarms under control means you can do something constructive with the time freed up.
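To make the arbitrary-threshold point a bit more concrete, here is a minimal Python sketch (purely illustrative, with made-up metric names and numbers) contrasting the classic fixed threshold with an alert that only fires when a value deviates sharply from its own recent behaviour:

```python
from collections import deque
from statistics import mean, stdev

STATIC_THRESHOLD = 80.0          # the classic "alarm at 80% CPU" rule

def static_alert(sample: float) -> bool:
    """Fires whenever the sample crosses an arbitrary fixed line."""
    return sample > STATIC_THRESHOLD

class BaselineAlert:
    """Fires only when a sample deviates sharply from recent behaviour."""
    def __init__(self, window: int = 60, sigmas: float = 3.0):
        self.history = deque(maxlen=window)
        self.sigmas = sigmas

    def check(self, sample: float) -> bool:
        alarm = False
        if len(self.history) >= 10:              # need some history first
            mu, sd = mean(self.history), stdev(self.history)
            alarm = sd > 0 and abs(sample - mu) > self.sigmas * sd
        self.history.append(sample)
        return alarm

# A nightly batch job that pushes a box to 90% CPU would page someone every
# night under the static rule, but stays quiet under the baseline rule once
# that pattern has become "normal" for the machine.
```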

There are lots of solutions for provisioning, but that is not really the issue. The real need is to define the service level the business needs and have the system do whatever is required to deliver that service level, without human intervention.
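As a sketch of what that could look like, here is a minimal Python control loop; the measure_response_time() and provision_server() hooks are entirely hypothetical stand-ins for whatever monitoring and provisioning tools are already in place:

```python
import time

SLA_RESPONSE_MS = 500        # the service level the business actually cares about
CHECK_INTERVAL_S = 60
BREACHES_BEFORE_ACTION = 3   # avoid reacting to a single blip

def measure_response_time() -> float:
    """Hypothetical hook into whatever monitoring is already in place."""
    raise NotImplementedError

def provision_server() -> None:
    """Hypothetical hook into the provisioning tool of choice."""
    raise NotImplementedError

def control_loop() -> None:
    breaches = 0
    while True:
        latency = measure_response_time()
        breaches = breaches + 1 if latency > SLA_RESPONSE_MS else 0
        if breaches >= BREACHES_BEFORE_ACTION:
            provision_server()       # no 3am phone call, no manual build
            breaches = 0
        time.sleep(CHECK_INTERVAL_S)
```

The point is that the trigger is the service level itself, not a human noticing that a box is struggling.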

Consolidate - Reduce Datacentre Electricity and Ownership Costs Dramatically 
Some real life analysis revealed some significant figures and potential improvements.

The original environment was 40 or so HP servers with about 2TB of direct attached storage (DAS). A fairly typical environment: file and print servers, domain controllers, and application servers such as SQL Server, each running on its own box.

Traditional Like for Like Replacement

The standard approach would be to procure 40 or so replacement servers with appropriate direct attached storage, for a ballpark cost of £111,280 plus £11,200 per year in hardware maintenance. We estimate that 40 servers would consume 176,952kWh of electricity per year and generate 49,480 BTU of heat, giving an electricity bill of about £15,000 per year. So just buying them and powering them for 5 years would cost about a quarter of a million pounds (£250,000).

By its nature, such a system will always be over-provisioned (if it is not, that leads to even bigger problems). Most research shows 20% is a good level of utilisation for such a system, and 10% is not unusual, so potentially 90% of the asset is wasted. It gives little flexibility for future, unforeseen changes in requirement, and recovery requires one-for-one replacement and rebuild, meaning the average time to get a system back can run into days.

The Alternative

Moving to a centralised storage system allows much greater flexibility, eases the management burden and significantly improves asset utilisation. The immediate need could be met with a decent, scalable NAS system such as the agámi unit, with agámi’s IP SAN capability used for the application server storage.

The file systems can be replicated to another agámi device, in a different physical location without needing to take the servers offline. This provides a robust disaster recovery solution.

Server consolidation comes through virtualisation, which brings a significant improvement in asset utilisation along with greater flexibility, faster deployment and dramatic improvements in recoverability: a system can be recovered almost instantly, rather than in days. In this case the workload was not huge and a few well-spec’d boxes, such as top-of-the-range DL380s or DL580s, were sufficient.

Outcome

A ballpark cost for a couple of servers and a storage system would be £26,260, plus £2,160 per year in hardware maintenance. There would of course be a cost for the virtualisation software, and there is significant variation between the three products, but assume perhaps £2,000. We estimate that the servers and storage would consume 3,311kWh of electricity per year and generate 5,590 BTU of heat, giving an electricity bill of about £1,596 per year. So just buying them and powering them for 5 years would cost about £37,000, giving a saving of over £200,000 for the equivalent 5 year period, including a 90% saving on electricity alone.
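As a back-of-the-envelope check of those figures, the short Python sketch below simply rolls purchase, software, maintenance and electricity into five-year totals; because the headline numbers above are rounded and do not spell out exactly which line items they include, the results land close to, rather than exactly on, the quoted ballparks.

```python
YEARS = 5

# Like-for-like replacement (40 servers with DAS)
trad_capex    = 111_280
trad_maint_pa = 11_200
trad_power_pa = 15_000
trad_total = trad_capex + YEARS * (trad_maint_pa + trad_power_pa)            # ~£242k

# Consolidated alternative (a couple of servers, shared storage, virtualisation)
alt_capex     = 26_260
alt_software  = 2_000
alt_maint_pa  = 2_160
alt_power_pa  = 1_596
alt_total = alt_capex + alt_software + YEARS * (alt_maint_pa + alt_power_pa)  # ~£47k

print(f"Traditional 5-year cost : £{trad_total:,}")
print(f"Consolidated 5-year cost: £{alt_total:,}")
print(f"Saving over 5 years     : £{trad_total - alt_total:,}")
print(f"Electricity saving      : {100 * (1 - alt_power_pa / trad_power_pa):.0f}%")
```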

Summary:
• Improved flexibility
• Better asset utilisation
• Reduced management overhead
• 90% saving in electricity
• £200,000 saving over the anticipated 5-year server life

...Read the full story

Application Acceleration - It's easy, right? 
Applying acceleration without understanding the application is a high-risk, wasteful strategy. What’s the point of accelerating non-business-critical traffic and diminishing the performance of VoIP and ERP in favour of Doom? How many IT Managers would be well regarded for such an action?

It is like most things: if you can’t measure it, you can’t make reasonable decisions and apply controls.

The next issue is how you apply acceleration. You can’t use the same techniques for all traffic types and expect optimal results across the board. Bulk transfers, such as CIFS and NFS traffic, are best handled by disk-based compression and reduction techniques, while transactional or real-time data such as VoIP and video is better handled in memory on a real-time operating system. So multiple systems are needed to handle the different acceleration technologies for optimal results on the different traffic types.
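As a purely illustrative sketch of that “right technique for the right traffic” idea (the categories and technique names here are simplifications for the example, not any particular product’s feature list):

```python
# Illustrative mapping of application types to acceleration approaches.
ACCELERATION_PROFILE = {
    # bulk, repetitive transfers benefit from large disk-backed dictionaries
    "cifs":  "disk-based de-duplication and compression",
    "nfs":   "disk-based de-duplication and compression",
    "ftp":   "disk-based de-duplication and compression",
    # transactional / real-time traffic cannot tolerate disk latency
    "voip":  "in-memory handling with strict latency bounds",
    "video": "in-memory handling with strict latency bounds",
    "erp":   "in-memory handling plus protocol-aware prioritisation",
}

def pick_technique(app: str) -> str:
    """Pick an acceleration approach by application, not by port number."""
    return ACCELERATION_PROFILE.get(app.lower(), "classify first, accelerate second")

print(pick_technique("CIFS"))   # disk-based de-duplication and compression
print(pick_technique("VoIP"))   # in-memory handling with strict latency bounds
```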

Then there is the issue of QoS. It is one of those terms that means so many different things to so many people; the marketing people have a field day, because they can say QoS and everybody puts their own interpretation on it. In reality:

• You can’t have sensible QoS based on queues. With queues, packets will eventually be thrown away, and if that is your only control mechanism you are bound to end up with congestion and retransmission.

• You can’t have sensible QoS without a deep layer 7 understanding of the application. Not all port 80 traffic is created equal – the ERP traffic is probably more important than browsing the favourite football site (perhaps?).

• You can’t have sensible QoS without control of inbound as well as outbound traffic. Most WAN links are oversubscribed: a head office with a 20Mbps line and 20 branch offices with 2Mbps lines each has a problem. What is the point of a branch office sending packets to the head office only for the oversubscribed link there to throw them away? Better not to send them in the first place until the head office is in a position to receive them; otherwise you just retransmit and make the whole situation worse (the arithmetic is sketched below).
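A quick sketch of that oversubscription arithmetic, using the numbers from the example above:

```python
HEAD_OFFICE_MBPS = 20
BRANCHES = 20
BRANCH_LINK_MBPS = 2

aggregate_offered = BRANCHES * BRANCH_LINK_MBPS           # 40 Mbps heading inbound
oversubscription  = aggregate_offered / HEAD_OFFICE_MBPS  # 2:1

# If every branch transmits flat out, half of what they send is simply
# discarded at the head office and then retransmitted, making things worse.
# Shaping at the *sending* end keeps the aggregate inside the pipe:
fair_share_per_branch = HEAD_OFFICE_MBPS / BRANCHES        # 1 Mbps each when all are busy

print(f"Aggregate offered load : {aggregate_offered} Mbps")
print(f"Oversubscription ratio : {oversubscription:.0f}:1")
print(f"Per-branch ceiling if all {BRANCHES} are active: {fair_share_per_branch:.1f} Mbps")
```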

Acceleration is more than just stuffing packets down a pipe as quickly as possible; doing it right needs an intelligent solution that applies the right technologies to the right applications. After all, delivery of the application is the reason for having the network in the first place, so using a shotgun rather than a needle may give results, but perhaps not the best results if you are trying to remove a splinter from your finger!

Consolidated SAN and NAS. Agami 
If “stuff” was all created equal, life would be a lot easier, but life is not like that. You have file data and application data. Most application data is block based, while file data, perhaps not surprisingly, is file based (i.e. not block based). That differentiation is one of the major differences between SAN and NAS: SANs provide block-based storage and NAS provides file-based storage. That’s a pain, because you end up needing a NAS box for files and a SAN for applications, which means two lots of storage to manage. It would be so much easier if everything could be in one place.

There are two approaches to a workaround:

- You can put in a SAN to provide block capacity and then carve off some of that space and assign it to a NAS gateway device. These can often give enhanced management features for the file data, such as replication and snapshot, but there are usually two management interfaces, one for the SAN and one for the NAS gateway.

- You can use a NAS box which provides the right connectivity and allows you to allocate some volumes to the SAN (block-based) side. Some might argue that you take a performance hit on the SAN space because you don’t have direct access to the disk, but in most cases that is pretty negligible and usually easily outweighed by the benefits of universal replication and snapshots through a single management interface for both block and file capacity.

If you look at the agámi kit you will see it provides blistering performance, serving more than 1GB/s (that is gigabytes, not gigabits, per second). It allows for over 1,000 snapshots per file system (NAS) or volume (SAN), as well as replication of both data types, either synchronously or asynchronously.

Not only is it screamingly fast, it is (relatively) cost effective compared to others in the market, more compact and uses less electricity, making it cheaper to run. It is not often in our industry that you find something faster, smaller, cheaper to run and more cost effective (but still with a decent pedigree) than the dominant market leaders, but this might well be the one.

http://www.solutioncentre.co.uk/info/agami/



XenEnterprise 3.2 Released 
New features include:

• Support for Windows 2000 virtual servers – enabling consolidation of the vast majority of deployed Windows server workloads.

• Multi-processor (SMP) support for Windows Server 2003 and Windows XP guests – delivering scalable virtualization of Exchange, SQL Server, and other multi-threaded and compute-intensive applications.

• Improved Windows guest support – providing accelerated network performance, the ability to suspend/resume virtual machines, up to 8GB RAM per Windows guest, and signed drivers with WHQL certification.

• iSCSI SAN support – delivering affordable networked storage support.

• VLAN trunk support for virtual bridges – providing network traffic isolation.

• CPU, memory, disk and network resource control – enabling IT organizations to deliver more server resources to the highest-priority workloads.


