
Direcway on Linux



I notice I get a lot slower speeds on Linux than I do on Windows. Any tweaks for this OS?

:::.. Download Stats ..:::

Connection is:: 1398 Kbps about 1.4 Mbps (tested with 2992 kB)

Download Speed is:: 171 kB/s

Tested From:: https://testmy.net/ (server2)

Test Time:: Thu Dec 22 2005 11:33:29 GMT-0600 (CST)

Bottom Line:: 25X faster than 56K 1MB download in 5.99 sec

Diagnosis: Awesome! 20% + : 51.96 % faster than the average for host (direcpc.com)

Validation Link:: https://testmy.net/stats/id-FPJR0ASXH


hey justinlay, try this

quote from http://www.psc.edu/networking/projects/tcptune/#Linux

"Tuning TCP for Linux 2.4 and 2.6

The maximum buffer sizes for all sockets can be set with /proc variables:

/proc/sys/net/core/rmem_max      - maximum receive window

/proc/sys/net/core/wmem_max      - maximum send window

These determine the maximum acceptable values for SO_SNDBUF and SO_RCVBUF (arguments to setsockopt() system call). The kernel sets the actual memory limit to twice the requested value (effectively doubling rmem_max and wmem_max) to provide for sufficient memory overhead.

The per-connection memory space defaults are set with two 3-element arrays:

/proc/sys/net/ipv4/tcp_rmem      - memory reserved for TCP rcv buffers

/proc/sys/net/ipv4/tcp_wmem      - memory reserved for TCP snd buffers

These are arrays of three values: minimum, default and maximum that are used to bound autotuning and balance memory usage while under global memory stress.

The following values would be reasonable for a path with a 4 MB BDP (you must be root):

      echo 2500000 > /proc/sys/net/core/wmem_max

      echo 2500000 > /proc/sys/net/core/rmem_max

      echo "4096 5000000 5000000" > /proc/sys/net/ipv4/tcp_rmem

      echo "4096 65536 5000000" > /proc/sys/net/ipv4/tcp_wmem

All Linux 2.4 and 2.6 versions include sender side autotuning, so the actual sending socket buffer (wmem value) will be dynamically updated for each connection. You can check to see if receiver side autotuning is present and enabled by looking at the file:

/proc/sys/net/ipv4/tcp_moderate_rcvbuf

If it is present and enabled (value 1) the receiver socket buffer size (rmem value) will be dynamically updated for each connection. If it is not present you may want to get a newer kernel. Generally autotuning should not be disabled unless there is a specific need, e.g. comparison studies of TCP performance. If you do not have autotuning (Linux 2.4 before 2.4.27 or 2.6 before 2.6.7) you may want to set the default tcp_rmem value (the middle value) to a more accurate estimate of the actual path BDP, to minimize possible interactions with other applications.

Do not adjust tcp_mem unless you know exactly what you are doing. This array determines how the system balances the total network memory usage against other memory usage, such as disk buffers. It is initialized at boot time to appropriate fractions of total system memory.

You do not need to adjust rmem_default or wmem_default (at least not for TCP tuning). These are the default buffer sizes for non-TCP sockets (e.g. unix domain sockets, UDP, etc).

All standard advanced TCP features are on by default. You can check them by cat'ing the following /proc files:

/proc/sys/net/ipv4/tcp_timestamps

/proc/sys/net/ipv4/tcp_window_scaling

/proc/sys/net/ipv4/tcp_sack

Linux supports both /proc and sysctl (using alternate forms of the variable names - net.core.rmem_max) for inspecting and adjusting network tuning parameters. The following is a useful shortcut for inspecting all tcp parameters:

sysctl -a | fgrep tcp

For additional information on kernel variables, look at the documentation included with your kernel source, typically in some location such as /usr/src/linux-<version>/Documentation/networking/ip-sysctl.txt. There is a very good (but slightly out of date) tutorial on network sysctl's at http://ipsysctl-tutorial.frozentux.net/ipsysctl-tutorial.html.

If you would like these changes to be preserved across reboots, you can add the tuning commands to your /etc/rc.d/rc.local file.

Autotuning was prototyped under the Web100 project. Web100 also provides complete TCP instrumentation and some additional features to improve performance on paths with very large BDP.

Contributors: John Heffner

Checked for Linux 2.6.13, 9/19/2005 "
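Also, instead of putting the echo lines in rc.local, most distros read /etc/sysctl.conf at boot, so you can keep the same settings there -- the sysctl names map one-to-one onto the /proc paths. The numbers below are just the article's 4 MB example; size them to your own BDP:

```
# /etc/sysctl.conf fragment -- same values as the echo commands in the quote.
# Apply immediately (as root) with: sysctl -p
net.core.wmem_max = 2500000
net.core.rmem_max = 2500000
net.ipv4.tcp_rmem = 4096 5000000 5000000
net.ipv4.tcp_wmem = 4096 65536 5000000
```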
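One thing worth adding for Direcway specifically: the satellite round trip is huge, so the buffer numbers in that article should really be matched to your own bandwidth-delay product. Here's the arithmetic as a quick sketch (the speed and RTT below are my assumptions, not measurements -- plug in your own numbers from testmy.net):

```shell
# Rough bandwidth-delay product for a satellite hop.
# Assumed figures: ~1400 kbps downlink (like justinlay's test), ~650 ms RTT
# (a typical geostationary round trip -- measure yours with ping).
bw_bps=1400000      # link bandwidth in bits per second
rtt_ms=650          # round-trip time in milliseconds

# BDP in bytes = bandwidth (bits/s) * RTT (s) / 8
bdp_bytes=$(( bw_bps * rtt_ms / 1000 / 8 ))
echo "BDP is about $bdp_bytes bytes"
```

With numbers like those the BDP comes out above 100 KB, so a stock 64 KB default send buffer is too small to keep the pipe full, which could be one reason Linux looks slower than Windows here.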
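Before changing anything, you can sanity-check what your kernel currently uses. This read-only sketch just prints the same /proc files the article talks about (nothing is written, so no root needed):

```shell
# Read-only check of the current TCP buffer limits (same files as the quote).
show_tcp_buffers() {
    for f in /proc/sys/net/core/rmem_max \
             /proc/sys/net/core/wmem_max \
             /proc/sys/net/ipv4/tcp_rmem \
             /proc/sys/net/ipv4/tcp_wmem; do
        if [ -r "$f" ]; then
            printf '%s = %s\n' "$f" "$(cat "$f")"
        else
            printf '%s = (not available on this system)\n' "$f"
        fi
    done
}

show_tcp_buffers
```

Run it again after tuning to confirm your new values actually took.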

VanBuren :)

