IBM this week announced the availability of its new IBM Information Archive Appliance. The appliance replaces IBM’s DR550.
The new appliance offers significantly increased scale and performance because it’s built on IBM’s General Parallel File System (GPFS), adds more interfaces (NAS and an API to Tivoli Storage Manager), and accepts information from multiple sources: IBM content management and archiving software and, eventually, third-party software.
Tivoli Storage Manager (TSM) is embedded in the appliance to provide automated tiered disk and tape storage as well as block-level deduplication. TSM’s block-level deduplication will reduce storage capacity requirements and its disk and tape management capabilities will let IT continue to leverage tape for long-term data retention.
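To make the capacity savings concrete, here is a minimal sketch of how block-level deduplication works in general. This is a generic fixed-block illustration, not TSM’s actual algorithm or API: split data into fixed-size blocks, store each unique block once, and keep an ordered list of block hashes as the "recipe" for reconstructing the original data.

```python
import hashlib

def dedupe_blocks(data, block_size=4096):
    """Generic fixed-block deduplication sketch (illustrative only,
    not TSM's implementation). Returns a block store and a recipe."""
    store = {}   # hash -> unique block bytes (stored once)
    recipe = []  # ordered hashes needed to rebuild the data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # only first copy is kept
        recipe.append(digest)
    return store, recipe

def rehydrate(store, recipe):
    """Reassemble the original data from the store and recipe."""
    return b"".join(store[h] for h in recipe)
```

With three identical 4 KB blocks and one distinct block, the store holds only two unique blocks while the recipe still describes all four, which is where the capacity reduction comes from.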
All these appliance subcomponents are transparent to the IT end user who manages the appliance – they just see one console where they define collections and retention policies for those collections.
It’s a solid (albeit a few years late) announcement from IBM. As a leader in content management and archiving software, it was always puzzling that IBM lacked an attractive archive storage platform to complement these offerings.
In addition, archiving is still a priority for both the business and IT. Business leaders remain concerned with archiving data appropriately for regulatory compliance and legal discovery. IT struggles to keep up with storage capacity, which continues to grow 30%-40% each year, and to control storage costs.
IT has always wanted to archive infrequently accessed data from expensive production storage platforms to less expensive archive disk and tape, but its hands are tied unless business owners, legal, and knowledge management professionals define appropriate retention policies as well as their functional requirements for a solution.
Which leads to a question that I’ve always struggled with myself: what is the role of the IT operations professional in archiving?
It’s clear that IT operations professionals are not responsible for defining the actual retention policies, but what are IT’s responsibilities, and how can IT influence business owners, legal, internal auditors, and knowledge management? I’m still fleshing out the list of responsibilities, but here’s what I have so far:
- IT can recommend a retention policy for certain data sets based on an analysis of the storage environment. For example, if you’ve examined your environment and determined that users have not accessed 60% of your files in two years, these files are good candidates for archiving.
- IT researches, shortlists, and participates in the evaluation of archiving software or services based on the functional requirements that business, legal, audit, and knowledge management professionals define. This could include the evaluation of on-premises solutions, cloud-based solutions, or a hybrid of both.
- IT researches, shortlists, and selects the appropriate storage archive to physically store the data. This could be a generic disk storage system or a purpose-built storage system, and it also includes tape and even cloud storage services.
- IT defines the architectural, scalability, and performance requirements of the archiving software or service. At the end of the day, IT operations will manage the archiving solution end-to-end. IT operations will be responsible for maintaining the archiving software, the database, the operating system, and the virtual or physical server that it runs on, and of course IT is responsible for the disk and tape that physically store the archived data. This is one area where IT operations must really exert its influence and assert its own requirements. Too often, performance, scale, and system manageability are not factored into the selection of the archiving software or the determination of the total cost of ownership. When a software solution doesn’t scale or perform, IT ops answers the calls from the help desk and is on the hook to solve the problem.
- IT is partly responsible for maintaining the long-term accessibility of the data in the storage archive. The issue here is the backwards compatibility of tape formats. Tape formats such as LTO are only backwards read compatible with the previous two generations. This means that you’ll need to periodically recall tapes from the deep archive and migrate the data to the latest generation. This addresses tape media deterioration and tape format obsolescence, but it doesn’t address the obsolescence of the application that generated the data. Call the knowledge management team for that one.
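The access-age analysis from the first responsibility above can be roughed out with a simple script. This is a minimal sketch, assuming a POSIX-style filesystem where last-access time (atime) is meaningful; on volumes mounted with noatime or relatime the numbers will understate real access, so treat the output as a heuristic, not a policy.

```python
import os
import time

def stale_file_ratio(root, years=2):
    """Count files under `root` not accessed in `years` years.
    Returns (stale_count, total_count). Heuristic only: atime can be
    unreliable depending on mount options and backup software."""
    cutoff = time.time() - years * 365 * 24 * 3600
    stale = total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            try:
                atime = os.stat(os.path.join(dirpath, name)).st_atime
            except OSError:
                continue  # skip files that vanished or are unreadable
            total += 1
            if atime < cutoff:
                stale += 1
    return stale, total
```

If stale/total comes back at 60% or more, that matches the post’s example threshold for flagging archiving candidates to the business.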
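The tape-migration point above can also be expressed as a simple planning check. This is an illustrative sketch based on the two-generation read-compatibility rule stated in the post (later LTO generations narrowed this window, so verify against your drive vendor’s compatibility matrix):

```python
def lto_drive_can_read(drive_gen, tape_gen):
    """Per the two-generation rule in the post: an LTO drive can read
    tapes from its own generation back through two generations."""
    return drive_gen - 2 <= tape_gen <= drive_gen

# Hypothetical deep archive: flag tapes a current-generation drive
# can no longer read, i.e. the ones due for migration.
tape_generations = [1, 2, 3, 4, 5]
needs_migration = [g for g in tape_generations
                   if not lto_drive_can_read(5, g)]
```

Running a check like this against your tape inventory each time you refresh drives tells you which media to recall and migrate before the last compatible drive leaves the floor.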
This is just a starting list. I’m interested to hear from other IT ops professionals if I’ve missed something or if you disagree with anything on my current list. I’m also interested to hear the struggles you have with archiving.