network reliability
- Grand Pooh-Bah
- Posts: 6722
- Joined: Tue Sep 19, 2006 8:45 pm
- Location: Portland, OR
In your experience, what's the failure rate for large files transferred by HTTP, FTP, or Windows share? By large, I mean at least a couple of gigabytes. By failure, I mean at least one wrong bit.
Recent events have made me believe that the failure rate is something abysmal, on the order of 10 to 20 percent. Is that true for everyone, or am I just really unlucky?
In order to comment, you have to be testing your files somehow. Checksums would do it. Using the file would also do, except for media files (a flipped bit there may not even be noticeable).
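If anyone wants to test the same way, something like this is enough: hash the file in chunks on both ends and compare the digests. A rough Python sketch (the filename is just a placeholder):

    # Compute a SHA-256 for a large file in chunks, so the whole thing never
    # has to sit in memory. Run it on the sending and receiving machines and
    # compare the printed digests; any difference means at least one bad bit.
    import hashlib

    def file_sha256(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                h.update(chunk)
        return h.hexdigest()

    print(file_sha256("bigfile.iso"))  # placeholder filename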
- Tenth Dan Procrastinator
- Posts: 4891
- Joined: Fri Jul 18, 2003 3:09 am
- Location: San Jose, CA
This is why people used 15 or 20 MB rars... if one piece comes through bad, you only re-send that piece. What sort of network are you trying to send files over? It really depends on whether it's wireless or wired, and on the network traffic and packet retries and all that too. I found that sending rars over inet1 was very poor; inet2 rarely if ever had errors.
I'd strongly recommend not using Windows shares to send large files in general.
I think rsync might work better on an unreliable connection, because it uses "the rsync algorithm" to only resend the parts that changed... or something along those lines.
So there's solid reasoning for you. I've done it before under Windows (as the client) and it worked fine, but I wasn't on an unreliable network. I don't know how well it will run with Windows as both host and client OS.
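If you do give rsync a shot, the call below is roughly the shape of it. Just a sketch, driven from Python here, with a made-up host and paths; it assumes rsync is installed on both ends.

    # Re-sync a big file, forcing rsync to compare checksums instead of its
    # default size-and-mtime check, and keeping partial data so an
    # interrupted transfer can be resumed. Host and paths are placeholders.
    import subprocess

    subprocess.run(
        [
            "rsync",
            "-av",           # archive mode, verbose
            "--checksum",    # decide what to update by checksum, not size/mtime
            "--partial",     # keep partially transferred files for resuming
            "user@server:/data/bigfile.iso",
            "/home/me/bigfile.iso",
        ],
        check=True,
    )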
- Tenth Dan Procrastinator
- Posts: 4891
- Joined: Fri Jul 18, 2003 3:09 am
- Location: San Jose, CA
If you are really really really annoyed, I know there's a method to checksum files block by block. Then you'd send those checksums to the host, have them verify which blocks are broken, and then write a patch file with the blocks to fix.
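Something along these lines, as a rough Python sketch of that method (the block size, function names, and the "patch" format are all just made up for illustration):

    import hashlib

    BLOCK_SIZE = 1 << 20  # 1 MiB blocks, arbitrary choice

    def block_hashes(path):
        # Receiver side: one SHA-256 per fixed-size block; send this list
        # to whoever has a known-good copy.
        hashes = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                hashes.append(hashlib.sha256(block).hexdigest())
        return hashes

    def build_patch(good_path, receiver_hashes):
        # Good-copy side: collect every block whose hash doesn't match the
        # receiver's list, keyed by block index.
        patch = {}
        with open(good_path, "rb") as f:
            index = 0
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                if (index >= len(receiver_hashes)
                        or hashlib.sha256(block).hexdigest() != receiver_hashes[index]):
                    patch[index] = block
                index += 1
        return patch

    def apply_patch(bad_path, patch):
        # Receiver side again: overwrite just the broken blocks in place.
        with open(bad_path, "r+b") as f:
            for index, block in patch.items():
                f.seek(index * BLOCK_SIZE)
                f.write(block)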
I used some program to help a guy fix a movie he downloaded from somewhere in .avi form. I got my copy in checksummed rars, so I was pretty damn sure my copy was good. I have no idea what the name of the program is now though.
- Tenth Dan Procrastinator
- Posts: 4891
- Joined: Fri Jul 18, 2003 3:09 am
- Location: San Jose, CA
Here's a rather lame way to do it. The method I described above doesn't require splitting the file into pieces for the correction to take place. I'll keep looking for something. Or, you could just write something in C yourself to perform this operation. I'm not sure whether it would take you longer to write the code or to find it on the internet.
I'm not going to read vinny's link, and instead just comment on rsync. It does a block-by-block compare, even for a huge file. When I downloaded some Linux distro a few years ago, their recommended way to get the latest was "download this old iso we keep lying around, and rsync to the new one". So, through whatever magic it has, it should be able to determine which parts of the file differ and update only those parts. I don't know what rules it has as far as block alignment goes, which is why I call it magic.
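The un-magic part, as far as I understand it, is a weak rolling checksum: it can be updated in constant time as a window slides one byte at a time, so the receiver can test every byte offset of its file against the sender's block hashes and then confirm any candidate match with a stronger hash. A toy Python sketch of just the rolling part (block size and modulus are arbitrary, and this is not rsync's actual code):

    BLOCK = 2048
    MOD = 1 << 16

    def weak_sum(data):
        # Weak checksum of one block: 'a' is a plain byte sum, 'b' weights
        # earlier bytes more heavily, which is what makes rolling possible.
        a = b = 0
        n = len(data)
        for i, x in enumerate(data):
            a = (a + x) % MOD
            b = (b + (n - i) * x) % MOD
        return a, b

    def roll(a, b, out_byte, in_byte, n):
        # Slide an n-byte window one position: drop out_byte, add in_byte.
        a = (a - out_byte + in_byte) % MOD
        b = (b - n * out_byte + a) % MOD
        return a, b

    if __name__ == "__main__":
        import os
        data = os.urandom(BLOCK + 10)
        a, b = weak_sum(data[:BLOCK])
        for k in range(10):
            a, b = roll(a, b, data[k], data[k + BLOCK], BLOCK)
            assert (a, b) == weak_sum(data[k + 1:k + 1 + BLOCK])
        print("rolling update matches a full recomputation at every offset")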