For a customer it can be interesting to enhance the backup and recovery strategy by writing the backup (or parts of it) to disk first and copying the data to tape later. The main advantage is the reduced time it takes to restore a single file, as a restore from tape is always slower. Another point: when backing up slow servers, the tape drives are underutilized yet blocked, so no other backups can run during that time. Backup to disk is nothing new in Data Protector and has been available for years; however, when looking at current developments like deduplication, you should take a closer look at the different solutions from the various vendors. In this situation HP might be the most interesting vendor, as software and hardware are very well combined. Other vendors may offer hardware only, or worse, poorly implemented software; HP is one step ahead and offers StoreOnce!
Below you can see how a D2D 2504i is implemented for use with HP Data Protector. In general the guide below can be used for all HP D2D systems (D2D2502 / D2D2504, D2D4106 / D2D4112, D2D4312 / D2D4324), independent of whether you use the iSCSI or FC protocol. As some mistakes can be made when implementing a D2D system with Data Protector (or any other backup software), it is highly recommended to read the best practice guide first. Only by following the guide will you reach the best backup performance and the highest deduplication ratio.
Nice overview. 🙂 Really quite nice boxes, but in larger configurations it quickly gets really expensive, mainly because of the required DP Advanced Backup to Disk license and support. Still, very nice devices.
You forgot to mention that with ABD licensing you can set up the drives as multipath devices, so each client sends its data directly to the D2D – this way you get the full bandwidth of the D2D instead of shipping the data through the backup server.
The new firmware for the D2D also lets you set up a virtual library with many drives, making it possible to configure private drives with explicit mappings for each server that are not shared with other servers – which makes scheduling inside DP very easy.
The VL should be created with as many slots as possible and the media size kept as small as possible. For each environment, also be aware that the dedupe only compares against data contained on media within the same virtual library, so to keep the speed high it is better to define more than one virtual library for different purposes – e.g. one for VMware vStorage data and another for Windows file data.
Another thing worth mentioning: setting a higher blocksize does increase performance (a little 😉), but it reduces your choice of restore methods and lowers your dedupe ratio…
/Morten
Hi Morten,
You are right, and I fully agree. In my test scenario with one client and iSCSI it was not possible to show all the advantages and features of the D2D systems. When I do installations, e.g. of a D2D 4324 (I did two large installations in the last two months), I normally follow the common guidelines as described in the best practice guide.
Regarding the slots, I configure only as many slots as I need to map my retention period. This allows me to save space, because when a medium expires and is reused, the used space on the D2D frees up.
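As a back-of-the-envelope illustration (not from the original post, and with purely hypothetical numbers), mapping the retention period to a slot count could look like this simple sketch:

```python
# Illustrative sketch: estimate how many virtual library slots are needed
# so that every backup run within the retention period has its own media,
# plus a small spare margin. All numbers are hypothetical examples.

def slots_needed(retention_days, backups_per_day, media_per_backup, spare=2):
    """Slots required to hold one media set per backup run for the whole
    retention period, plus a few spare slots for growth."""
    return retention_days * backups_per_day * media_per_backup + spare

# Example: 14-day retention, one backup per day, 3 media per run
print(slots_needed(14, 1, 3))  # -> 44
```

The point of sizing this way is that once a medium's protection expires and Data Protector reuses it, the D2D frees the space, so slots beyond the retention window just sit idle.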
Regarding the blocksize: yes, this is true, but only for Enhanced Automated Disaster Recovery and for performing an offline restore with the involved media agent (Windows).
Best regards
Daniel
Small media means many slots; keeping the media smaller means less risk of data with different retention periods ending up on the same medium, plus faster recycle rates. It is just common sense to put in more slots than you need – otherwise it is a real hassle when you do a capacity upgrade and have to define the library inside DP again, because the properties of the library emulation change too much, so a slot expansion of the existing one is not possible. This kind of issue with "too much shuffling of element addresses" has been a problem in several previous firmwares on both VLS and D2D – however, I haven't tested recently, so maybe it is better now.
I like the D2D, but I am mostly working with the VLS, as the D2D doesn't scale enough in both capacity and performance. The VLS will give you a lower dedupe ratio, since its object-level differencing still does not compare data from different clients and/or data types against each other – but the restore speed is phenomenal compared to the D2D, especially once it has been running in production for a while. The VLS, however, is a bit more "old-school" dedupe (OEM'ed from Sepaton), and HP knows this – which is probably also why they are close to launching a scale-out appliance, a D2D/IBRIX combo. That one will be very interesting if HP can get the pricing right!
/Morten
Btw, did you hear anything about the project to build StoreOnce into the DP file writer engine? They got very busy with that when VMware launched their latest version of VDR, which supports block-level dedupe (but only datastores up to 1 TB, so it doesn't scale).
/BOFH
Yep, I heard about that, but as far as I know StoreOnce will be implemented in the Disk Agent part and released with a patch later this year. Thanks for your comments, very professional and good to read.