I am not real sure I understand your problem, or in fact what your question really is. I can, however, tell you that I currently back up a 450GB database across the network to our backup share using Litespeed with no problems. We also back up the 800GB data warehouse across the network using Litespeed with no problems, as well as the other 5TB of backups we have. It mostly comes down to how good your network is. With a good fabric, 10Gb switches, and NetApp NAS iSCSI storage, there is quite a bit you can do. The 450GB database, using Litespeed compression, compresses down to less than 70GB and backs up in under 1 hour. The 800GB data warehouse compresses down to a couple hundred GB and takes 4.5 hours, mostly due to the limitations of the machine running our DW.

For your problems, I would definitely be looking at how the network is set up and configured. I would also be looking at a third-party backup solution like Hyperbac, Litespeed, Red Gate's SQL Backup, etc. Any one of these tools will reduce the impact on your network and can only help.

You are going to have to tell us what problem you are actually having with backing up across the network. How have you determined that backing up 10-15GB is okay, but anything larger is not? I would be curious to find out why backing up 60GB across the network in 4 separate threads works, but backing up 60GB using a single thread does not. I would think the network latency on 4 threads would cause more problems than a single thread.

The ideal solution would be to back up locally and then use robocopy, xcopy, or some other copy program to copy the file up to the network share. That would avoid any issues you would have with the backup failing because of a problem on the network.

Thanks for pointing us in the right direction!
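The local-backup-then-copy approach described above can be sketched in two steps. This is a minimal sketch, not the poster's actual script: the database name, local path, and share path are hypothetical placeholders, and `COMPRESSION` assumes an edition that supports native backup compression (SQL Server 2008 Enterprise or later).

```sql
-- Step 1: back up to fast local disk first, so a network hiccup
-- cannot fail the BACKUP itself (paths/names are hypothetical).
BACKUP DATABASE MyBigDB
TO DISK = N'D:\Backups\MyBigDB.bak'
WITH INIT, COMPRESSION, STATS = 10;
```

Step 2 is a restartable copy to the share, run afterwards from a SQL Agent CmdExec job step or a scheduled task, for example: `robocopy D:\Backups \\SERVER1\Backups MyBigDB.bak /Z /R:3 /W:30`. Because robocopy's `/Z` mode can resume an interrupted transfer and `/R`/`/W` control retries, a transient network issue delays the copy instead of invalidating the backup.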
We have been having issues building the new SQL 2008 server with embedded T-SQL to split the backup files into three separate parts of less than 15GB each. It seems anything over 15GB will give us issues moving across to the backup server. We are going to have to start pushing 10GB chunks across the network at a time, at most. We need to start thinking of a way to do that efficiently for all systems, WITH automatic retention in mind AND proper notifications (compressed area on 5.250, backup verification switch yes/no, timings, etc.). Our biggest database backup will now have to be broken up into 4 parts. We probably also need to be cognizant of the timing of all applications that back up to the DR area, and of the backbone backup timings as well. Anyone have any other ideas of what else could be our issues?

Current script, which breaks the backup into three parts (the variable names and full paths were lost in extraction and are shown as `...`):

```sql
SET ... = '\\SERVER1\...BAK'
SET ... = '\\SERVER1\...BAK'
SET ... = '\\SERVER1\...BAK'
BACKUP DATABASE ...
TO DISK = ..., DISK = ..., DISK = ...
WITH INIT, NOUNLOAD, NAME = 'KRONOS Backup', NOSKIP, NOFORMAT
```
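The kind of split the script above attempts is a standard multi-file ("striped") backup: a single `BACKUP DATABASE` statement with multiple `DISK` clauses, which SQL Server writes to in parallel, dividing the data roughly evenly across the files. A minimal sketch follows; the database name is taken from the `'KRONOS Backup'` name in the thread, but the UNC paths are hypothetical placeholders:

```sql
-- Stripe one backup across three files (paths are hypothetical).
-- Each file ends up roughly one third of the total backup size.
BACKUP DATABASE KRONOS
TO DISK = N'\\SERVER1\Backups\KRONOS_1.bak',
   DISK = N'\\SERVER1\Backups\KRONOS_2.bak',
   DISK = N'\\SERVER1\Backups\KRONOS_3.bak'
WITH INIT, NOUNLOAD, NOSKIP, NOFORMAT,
     NAME = 'KRONOS Backup', STATS = 10;
```

Note that a restore needs every stripe: `RESTORE DATABASE KRONOS FROM DISK = ..., DISK = ..., DISK = ...` with all three files present, so the parts must be copied and retained together.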