How To Quickly Transfer Large Files Over Network In Linux And Unix


12 Responses

  1. IJK says:

    It would be nice if you could publish some figures comparing this mechanism against FTP or SCP in the same network.

    • SK says:

      Good point. I will try.

      • hi.itsme says:

        And how about writing a bash script to automate this completely, so that you can fetch the files using only the client machine? For example, I generate data on a computing server (which I maintain) and want to transfer the important data to the results directory of a production machine, where I analyse it further. You run the script once and forget about it.
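
    A minimal sketch of such a one-shot fetch script, run entirely from the client: it assumes SSH key access from the client to the server and the OpenBSD variant of nc on both machines. The host name, paths, and port below are hypothetical placeholders.

    ```shell
    #!/usr/bin/env bash
    # One-shot pull: listen locally, then ask the server (over SSH)
    # to stream the source directory back to us as a tar archive.
    set -euo pipefail

    SERVER=compute01.example.com      # hypothetical computing server
    SRC_DIR=/data/results             # hypothetical source directory
    DEST_DIR=/srv/results             # hypothetical destination directory
    PORT=7000
    CLIENT=$(hostname -f)             # address the server connects back to

    # Local listener: unpack the incoming tar stream into DEST_DIR.
    nc -l "$PORT" | tar -xf - -C "$DEST_DIR" &
    listener=$!

    # Remote side: pack SRC_DIR and send it to the client's open port.
    ssh "$SERVER" "tar -cf - -C '$SRC_DIR' . | nc -w 3 '$CLIENT' $PORT"
    wait "$listener"
    ```

    Run it from cron and, as the commenter says, forget about it.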

  2. ppnman says:

    don’t know dude… security is so important to me. I prefer rsync -ravz /path/to/source/files/ destination-ip:/path/on/destination
    fast, secure and easy 😁
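
    For reference, the same idea with a per-file status display (note that -a already implies -r; the paths and host below are placeholders):

    ```shell
    # rsync over SSH: -a preserves permissions and timestamps (and implies -r),
    # -z compresses in transit, --progress prints per-file transfer status.
    rsync -avz --progress /path/to/source/files/ user@destination-host:/path/on/destination/
    ```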

  3. William Chipman says:

    Tried this from an Oracle Linux 6 box to a CentOS 7 box and ran into a few issues:
    1. Had to install the EPEL repository to get pv.
    2. Had to add the port to iptables / firewall-cmd on the target system. This could also be used to restrict which source systems may connect, for improved security.
    3. The command is named “nc” on both systems, not “netcat”.
    After those changes, it worked as advertised and was very quick.
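
    The fixes above might look roughly like this on the CentOS 7 (receiving) side; port 7000 is an example, and flag syntax differs between nc variants:

    ```shell
    # 1. pv lives in the EPEL repository on CentOS 7.
    sudo yum install -y epel-release
    sudo yum install -y pv

    # 2. Open the chosen port in firewalld (runtime and permanent).
    sudo firewall-cmd --add-port=7000/tcp
    sudo firewall-cmd --permanent --add-port=7000/tcp

    # 3. The binary is "nc", not "netcat":
    nc -l 7000 | pv > received.tar     # receiver, with a progress display
    ```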

  4. Benjamin Furstenwerth says:

    Use the -w flag with netcat and specify a time in seconds. This closes the connection on timeout. If you don’t want a visual status, this avoids having to install pv. I use pv with dd frequently, so it’s a non-issue for me. Timing out the transfer is good for automation, and provided you have a stable network it shouldn’t fail prematurely… otherwise, set a larger timeout.
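
    A sketch of an unattended transfer along these lines, assuming the OpenBSD nc variant (where -w sets an idle timeout on connections it makes, and is ignored in listen mode); host, port, and filename are placeholders:

    ```shell
    # Receiver: write whatever arrives on port 7000 to disk.
    nc -l 7000 > bigfile.tar

    # Sender: -w 30 drops the connection after 30 seconds of inactivity,
    # so a stalled transfer fails instead of hanging an automated job.
    nc -w 30 receiver-host 7000 < bigfile.tar
    ```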

  5. LinuxLover says:

    I prefer SCP. I don’t need to set anything up on the client end every time I want to do a transfer, and it gives me transfer status built in.

  6. stasman says:

    Stats? Netcat vs FTP vs rsync vs SCP

  7. BG says:

    I have a hard time believing this would be significantly faster than rsync, unless there’s something wrong with one of your systems. On any modern system, for large files, the bottleneck should be the network bandwidth (or the CPU, if the PC is from the past century). For a huge number of tiny files, maybe tar would help. I’ve transferred large files over the network many times using rsync, scp, Samba, NFS, etc., and the bottleneck is always the network bandwidth (even on a gigabit network).
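
    The tiny-files case mentioned above is where streaming one tar archive can help, since each file no longer pays its own per-file round trip. A minimal sketch; host and port are placeholders, and nc flag syntax varies between variants:

    ```shell
    # Receiver: unpack the incoming tar stream as it arrives.
    nc -l 7000 | tar -xf -

    # Sender: pack many small files into one continuous stream.
    tar -cf - -C /path/with/many/small/files . | nc -w 3 receiver-host 7000
    ```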
