
network reliability

Posted: Fri Feb 25, 2005 11:59 pm
by Jonathan
in your experience, what's the failure rate for large files transferred by http, ftp, or windows share? by large, i mean at least a couple gigabytes. by failure, i mean at least one wrong bit.

recent events have made me believe that the failure rate is something abysmal, on the order of 10 to 20 percent. is that true for everyone, or am i just really unlucky?

in order to comment, you have to be testing your files somehow. checksums would do it. actually using the file would also work, except for media files, where a flipped bit can slip past without anyone noticing.
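
for reference, the checksum part is only about this much code (a quick sketch in python; sha-1 is an arbitrary pick, any hash would do). run it on both ends and compare the output:

[code]
import hashlib
import sys

def file_digest(path, chunk_size=1 << 20):
    # hash the file in 1 MB chunks so a 20 GB file never has to fit in memory
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    print(file_digest(sys.argv[1]))
[/code]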

Posted: Sun Feb 27, 2005 9:14 am
by quantus
This is why people used 15 or 20 MB rars... What sort of network are you trying to send files over? It really depends on whether it's wireless or wired, and on the network traffic and packet retries and all that too. I found sending rars over inet1 was very poor; inet2 rarely if ever had errors.

I'd strongly recommend not using Windows shares to send large files in general.

Posted: Sun Feb 27, 2005 9:19 am
by Jonathan
wired, intranet. these damn ghost images are 1-20GB and are just constantly dying on me, man.

Posted: Mon Feb 28, 2005 6:03 am
by bob
Can you rsync?

Posted: Mon Feb 28, 2005 6:50 am
by Jonathan
i am using windows shares. is rsync going to be better? why the hell do windows shares suck?

Posted: Tue Mar 01, 2005 1:38 am
by bob
I think rsync might work better on an unreliable connection, because it uses "the rsync algorithm" to only resend the parts that change, blah blah.

So there's solid reasoning for you. I've done it before under Windows (as the client) and it worked fine, but I wasn't on an unreliable network. I don't know how well it will run with Windows as both host and client OS.

Posted: Tue Mar 01, 2005 1:49 am
by Jonathan
But I only have one large 20 GB file. Surely the rsync algorithm doesn't work on blocks, just whole files, right?

Posted: Tue Mar 01, 2005 2:51 am
by quantus
If you are really really really annoyed, I know there's a method to checksum files block by block. Then you'd send those checksums to the host, have them verify which blocks are broken, and then write a patch file with just the blocks to fix.

I used some program to help a guy fix a movie he downloaded from somewhere in .avi form. I got my copy in checksummed rars, so I was pretty damn sure my copy was good. I have no idea what the name of the program is now though.
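
Something like this is the shape of it (a rough sketch, not whatever that program actually was; 1 MB blocks and md5 are arbitrary choices):

[code]
import hashlib
import os

BLOCK = 1 << 20  # 1 MB block size (arbitrary, just has to match on both ends)

def block_checksums(path):
    # sender and receiver each run this over their own copy
    sums = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK)
            if not block:
                break
            sums.append(hashlib.md5(block).hexdigest())
    return sums

def bad_blocks(mine, theirs):
    # block indices where the receiver's copy disagrees with the sender's
    return [i for i, (m, t) in enumerate(zip(mine, theirs)) if m != t]

def write_patch(path, indices, patch_path):
    # sender side: dump only the blocks the receiver reported as broken
    with open(path, "rb") as src, open(patch_path, "wb") as patch:
        for i in indices:
            src.seek(i * BLOCK)
            patch.write(src.read(BLOCK))

def apply_patch(path, indices, patch_path):
    # receiver side: overwrite the broken blocks in place
    size = os.path.getsize(path)
    with open(path, "r+b") as dst, open(patch_path, "rb") as patch:
        for i in indices:
            n = min(BLOCK, size - i * BLOCK)  # the last block may be short
            dst.seek(i * BLOCK)
            dst.write(patch.read(n))
[/code]

The receiver sends its checksum list over, the sender works out which blocks differ and ships back a patch that's only as big as the damage.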

Posted: Tue Mar 01, 2005 3:03 am
by quantus
Here's a rather lame way to do it. The method I described above doesn't make you split the file into pieces for the correction to take place. I'll keep looking for something. OR, you could just write something in C yourself to perform this operation. I'm not sure if it would take you longer to write this code or find it on the internet.

Posted: Tue Mar 01, 2005 3:15 am
by quantus
This seems to be the same article, but much more evolved. Take a look at section 5 for real answers to your misery.

Posted: Tue Mar 01, 2005 3:24 am
by VLSmooth
Here's a way the anime community dealt with the problem years ago:

ZIDRAV (sourceforge link)

Posted: Tue Mar 01, 2005 5:21 am
by Jonathan
How does anything get done on the internet if all the bits are corrupted?

Posted: Tue Mar 01, 2005 5:50 am
by quantus
a lot of retries and redundancy

Posted: Thu Mar 03, 2005 2:59 am
by bob
I'm not going to read vinny's link, and instead just comment on rsync. It compares by blocks, even within a single huge file. When I downloaded some linux distro a few years ago, their recommended way to get the latest was "download this old iso we keep lying around, and rsync to the new one". So, through whatever magic it has, it can determine which parts of the file differ and update only those parts. I don't know what sort of rules it has as far as block alignment, which is why I call it magic.
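
To de-magic the alignment part a little (as I understand it, so take with salt): one side sends per-block checksums, and the other side slides a window over its file one byte at a time. The weak checksum of the shifted window can be updated in constant time from the previous one, so block matches can be found at any byte offset, not just on block boundaries; real rsync then confirms each candidate with a strong hash. A toy sketch of just that rolling part in Python, not rsync's actual code:

[code]
import os

def weak_sums(block):
    # rsync-style weak checksum: two 16-bit running sums
    a = sum(block) & 0xFFFF
    b = sum((len(block) - i) * x for i, x in enumerate(block)) & 0xFFFF
    return a, b

def roll(a, b, out_byte, in_byte, block_len):
    # slide the window one byte: drop out_byte at the front, append in_byte at the back
    a = (a - out_byte + in_byte) & 0xFFFF
    b = (b - block_len * out_byte + a) & 0xFFFF
    return a, b

if __name__ == "__main__":
    data = os.urandom(5000)     # stand-in for the sender's copy of the file
    block = data[500:516]       # pretend this is one block the receiver already has
    target = weak_sums(block)

    L = 16
    a, b = weak_sums(data[:L])
    for off in range(len(data) - L):
        # cheap check first, then confirm the bytes really match
        if (a, b) == target and data[off:off + L] == block:
            print("found the block at offset", off)
            break
        a, b = roll(a, b, data[off], data[off + L], L)
[/code]

So even one big 20 GB file gets diffed block by block; rsync only sends the blocks that don't match, plus instructions to reuse the ones that do.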