When I'm imaging one machine, I get a download rate of 8.5MB/s and a variable write rate (it is a compressed image). When I put 5 machines on, I get about a 1MB/s download rate, which is too slow for my liking, especially with an 89GB image file.
The server is a new (July 2014) Mac mini server: 8GB RAM, 2 x 256GB SSDs in a mirrored RAID, running DeployStudio 1.6.11.
The repository is stored on another server, on a different subnet, which might cause issues. It is shared via CIFS.
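To rule the CIFS link in or out as the bottleneck, a rough read test of the share from a client seems worth doing first. This is just a sketch; the share name and image path below are hypothetical, so substitute your own:
# Mount the repository share and time a 1GB read of the image,
# discarding the data so only network/share read speed is measured.
mkdir -p /Volumes/ds-repo
mount_smbfs //username@repo-server/DeployStudio /Volumes/ds-repo
dd if=/Volumes/ds-repo/Masters/HFS/image.hfs.dmg of=/dev/null bs=1m count=1024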
The server is on a gigabit LAN, so I'm trying a 10MB/s stream rate to start with.
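As I understand it, DeployStudio's multicast restores are driven by Apple's asr, where the stream rate maps to the "Data Rate" key (in bytes per second) of a server config plist. A minimal standalone equivalent, assuming a scanned image at a made-up path and the asr syntax of this era, would look something like:
# Hypothetical config: 10MB/s stream on a locally scoped multicast address.
cat > /tmp/multicast.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Data Rate</key>
	<integer>10485760</integer>
	<key>Multicast Address</key>
	<string>239.192.0.1</string>
</dict>
</plist>
EOF
sudo asr server --source /Shared/image.hfs.dmg --config /tmp/multicast.plist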
The computer suite I'm testing on consists of 2011 Mac minis: 4GB RAM, 500GB SATA hard drives.
To get the client write disk speed I ran this command in Terminal:
sudo dd if=/dev/zero of=/tmp/test bs=1024 count=1048576
Password:
1048576+0 records in
1048576+0 records out
1073741824 bytes transferred in 18.062331 secs (59446470 bytes/sec)
That works out to about 59MB/s.
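One caveat (my assumption, not part of the original test): bs=1024 writes in 1KB chunks, which adds syscall overhead and can understate sequential throughput; a larger block size is probably closer to what asr does when restoring. For example:
# Same 1GB write, but in 1MB blocks; clean up the test file afterwards.
sudo dd if=/dev/zero of=/tmp/test bs=1m count=1024
rm /tmp/test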
Test 1
Clients: 5
Stream Data Rate: 10MB/s
Client Disk Speed: 43MB/s
Conclusion: Got about 20 fails on each computer before reaching 2% of the 89GB compressed image.
Realised I hadn't scanned the image for restore. At this point I scanned it using Disk Utility on the server.
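For reference, Disk Utility's "Scan Image for Restore" is the graphical front end to asr imagescan, so the same step can be done from Terminal (the image path here is hypothetical):
# Writes restore checksums/metadata into the image; required before
# asr (and therefore DeployStudio multicast) can restore from it.
sudo asr imagescan --source /Path/To/Masters/HFS/image.hfs.dmg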
Test 2
Clients: 2
Stream Data Rate: 10MB/s
Client Disk Speed: 42MB/s
Conclusion: Got about 7 fails on both machines before 2%.
Test 3
Clients: 16
Stream Data Rate: 6MB/s
Client Disk Speed: 38MB/s
Conclusion: Got 2 fails in the first 2%, but 7 more in the next 6%. Interestingly, 7 of the 16 started with the recovery partition, and because that restores so quickly, they joined the main image after the rest were past 2%, which might explain the packet loss. Getting a consistent 13 fails per 8%, which extrapolates to roughly 160 fails over a full restore; not sure whether that works out to less than 20% or more. Seems to have failed to reimage only 3 out of 16, which is good.
For the next test I wanted to see if having the image on the server itself would speed things up, so I set up the repository in NetBootSP0 at /Library/NetBoot/NetBootSP0.
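Before the test, a quick sanity check that the SSD mirror can feed the stream faster than the CIFS share did (the image filename below is hypothetical):
# Time a 1GB read of the image from its new local location.
dd if=/Library/NetBoot/NetBootSP0/Masters/HFS/image.hfs.dmg of=/dev/null bs=1m count=1024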
Test 4
Clients: 4
Stream Data Rate: 6MB/s
Client Disk Speed: 38MB/s
Conclusion: 1 machine out of 4 got 20 fails before 2%, but then only 2 more for the whole restore; must have been a network spike or something. The others reported very few fails (the fewest so far)! Yay. I think I'll try speeding it up for the next test.