
Question on MLPPP with a pair of 1760s

Status
Not open for further replies.

kmcferrin (MIS · Joined Jul 14, 2003 · 2,938 messages · US)
This is my first attempt at setting up MLPPP, but I think that I have most of the basics covered with the exception of one question:

Since this is a point-to-point connection over a pair of T1s, I need to set "service-module t1 clock source internal" on one of the routers. Does this statement belong in the config for the multilink interface, one of the serial interfaces, or both of the serial interfaces?
 
Set this on one router, on both serial interfaces, not on the multilink interface.
 
If it's not already enabled, you might want to turn on IP CEF on both routers.
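Enabling CEF is a single global configuration command on each router, and you can confirm it afterwards with a show command (a minimal sketch; exact output varies by IOS version):

configure terminal
 ip cef
 end
show ip cef summary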
 
The clock needs to be configured on the physical interface, so on the actual controller interface:

controller t1 0/0
clock source internal

On the remote end the default of 'clock source line' can be used.
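Note that the controller syntax above applies to channelised VWIC-style cards. On a 1760 with WIC-1DSU-T1 cards, which the original "service-module" command suggests, the integrated CSU/DSU is configured directly on the serial interfaces instead. A sketch, assuming WIC-1DSU-T1 hardware and default framing/linecode, for the router that provides clocking:

interface Serial0/0
 service-module t1 clock source internal
!
interface Serial0/1
 service-module t1 clock source internal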

Can I ask why you want to use multilink, though? The speed is adequate, so you won't have serialisation issues for voice, and you don't need LFI. Is it that you want a single IP link between the sites? Have you considered using a routing protocol that will load-balance over these equal-cost links, i.e. running 2 point-to-point IP links?

Andy
 
With equal-cost paths won't you be limited to only 1.544 Mbps max speed, but with MLPPP you will have a max speed of 3.088 Mbps?
 
Thanks for the help.

Andy - I'm actually pretty new to IOS but assumed that multilink was a good way to go. There is no voice on this circuit, only IP data. Right now there is a single T1 connected between the two sites using PPP. Our problem is that we need more bandwidth between HQ and the second site. A second T1 has been provisioned and only needs to be plugged in. It seemed to me that this is exactly what MLPPP was designed for (though it seems to be more commonly used for slower connections like fractional T1s and ISDN).

It's my understanding that MLPPP handles all of the load balancing for you, and will keep the link up even if one of the circuits goes down. Are there disadvantages to MLPPP that would be avoided by using some other method to load-balance the links?
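For reference, a minimal MLPPP bundle sketch for one of the 1760s. The address and interface numbers are examples only, and older IOS releases use "multilink-group 1" in place of "ppp multilink group 1":

interface Multilink1
 ip address 10.1.1.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
!
! clock source internal goes on these serials on one router only, per the advice above
interface Serial0/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial0/1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1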
 
I think you are on the right path. Everything you said is correct. I am using the same setup right now and it works great.
 
Thanks JOAMON. With regards to CEF, I understand its benefits over the default fast switching, but does that actually make a difference if you are using MLPPP? Since there will in effect be only one path to the remote host that will always be used, CEF versus Fast Switching shouldn't make a difference. I am under the impression that MLPPP handles the load balancing in a more efficient manner than fast switching to begin with (and in fact that is one of the advantages of MLPPP over using the routing protocol to load balance the interface).
 
JOAMON

Multilink doesn't increase the clock speed of a link, so no, you don't actually get 3.088 Mbps; you get 2 circuits each operating at 1.544 Mbps. What you do get is load sharing over the physical interfaces. Packets can be fragmented by the sending router and the fragments sent down either interface to the remote router, which recombines them for transmission. In theory this increases the speed, although not all packets are fragmented.

MLPPP is primarily designed for bandwidth-on-demand over dialup ISDN, where additional 64 Kbps channels can be dynamically added to a logical interface to increase the overall bandwidth. With 2 fixed T1s I wouldn't bother with MLPPP, and to be honest I wouldn't even bother with PPP. I would have 2 HDLC links configured as point-to-point IP subnets and let the dynamic routing protocol (OSPF or EIGRP) take care of load balancing the traffic. Configuration would be much simpler as well:

controller t1 0/0
 clock source internal
 channel-group 15 timeslots 1-24
controller t1 0/1
 clock source internal
 channel-group 15 timeslots 1-24
!
interface serial 0/0:15
 ip address 10.1.1.1 255.255.255.252
!
interface serial 0/1:15
 ip address 10.1.1.5 255.255.255.252
!
router ospf 10
 network 10.1.1.1 0.0.0.0 area 0
 network 10.1.1.5 0.0.0.0 area 0


Equivalent config on the remote router, etc.

In reality, though, either way will give you the same result: increased bandwidth between the two sites. MLPPP does add fragmenting of packets over multiple links; whether this gives any performance increase with two T1s I don't know, but I wouldn't think so. The Layer-2 overhead of MLPPP/PPP and HDLC is the same, so you won't lose anything by using PPP. My personal preference would be HDLC, though, as it's there by default and doesn't require any additional config.

HTH

Andy

 
Before I put both interfaces in the MLPPP bundle I did a speed test and it came back as 1.5 Mbps. After both T1s were in the MLPPP bundle it tested at 3.0 Mbps. So are you saying that I will get the full bandwidth by using separate links with OSPF or EIGRP?

Test running..........
** Speed 2804(down)/2823(up) kbps **
(At least 56 times faster than a 56k modem)
Logged result.
Finish.
 
JOAMON

I am sure I tested this a while ago, and with a 1-to-1 connection (i.e. PC to PC) I didn't see any speed increase when I created a multilink bundle of 2 links running at 2048 Kbps each, as opposed to running 2 separate links and letting OSPF load balance. What did you use to test?

I will set this up again on a couple of routers and get some throughput figures, but if you are saying it works then kmc I would go with this as it looks like it will give you overall increased bandwidth.

Andy
 
Andy - I am interested in seeing your results when you have them. Also, I wonder if packet fragmentation may make the difference in bandwidth?

I did some research through a big stack of Cisco whitepapers on this and got some good info. The most informational paper was titled "Alternatives for High Bandwidth Connections Using Parallel T1/E1 Links." It had some charts comparing pros and cons of various alternatives (hardware IMUX, load balancing, MLPPP, and ATM Inverse Multiplexing). The biggest differences between MLPPP and load balancing are:

MLPPP preserves packet order where the load balancing solution does not. This may not be an issue if your application isn't sensitive to this.

MLPPP conserves IP addresses (one per virtual interface as opposed to one per physical interface).

MLPPP is more CPU intensive than load balancing and may decrease switching performance.

MLPPP makes it a little easier to manage layer 3 services because you only have to manage the single ML interface. This is probably a bigger issue with larger bundles.

Load balancing supports HSSI and IOS QoS, whereas MLPPP does not.

On the issue of packet fragmentation, all I could find was the sentence "In many cases fragmentation can increase the efficiency of the NxT1 link." I imagine that's possible if you have a significant variance in packet sizes going across the link (some very small, some large). Of course, packet fragmentation also causes higher CPU utilization, possibly slowing things down. I guess the best tack would be to get some CPU utilization numbers before bringing up the MLPPP and then test it in a couple of different configs to find the best performance at the lowest CPU utilization.
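A few standard IOS show commands can provide those before-and-after numbers (a sketch; exact output varies by IOS version and platform):

show processes cpu history
show ppp multilink
show interfaces multilink1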

This has been a very educational day. Stars for both of you.
 
I just used my ISP's speedtest site. My setup is as follows:

HQ 2611 with 2 T1s in a multilink bundle to the ISP for 3 Mbps of Internet bandwidth. This connects to a core 2811 router/firewall with 2 point-to-point T1s in a multilink config to the remote office. I tested from the remote office to the Internet and received the following results.

Test running..........
** Speed 2804(down)/2823(up) kbps **
(At least 56 times faster than a 56k modem)
Logged result.
Finish.

Also, my config is very basic on this. I just have IP CEF enabled, and everything I found on configuring the multilink interface had the following:
ppp multilink fragment disable
 
I have just set this up between a 3620 and a 2610XM using WIC-1T and WIC-2T modules. I used IPerf to test the throughput (in TCP mode) and with parallel point-to-point links both running IP at a clock speed of 2000000, I got the following:

C:\Network Tools\iperf>iperf -c 192.168.100.60
------------------------------------------------------------
Client connecting to 192.168.100.60, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[884] local 192.168.225.1 port 1491 connected with 192.168.100.60 port 5001
[ ID] Interval Transfer Bandwidth
[884] 0.0-10.1 sec 2.37 MBytes 1.97 Mbits/sec


I then reconfigured the routers to use PPP and a Multilink interface and got the following with the same test:

C:\Network Tools\iperf>iperf -c 192.168.100.60
------------------------------------------------------------
Client connecting to 192.168.100.60, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[884] local 192.168.225.1 port 1503 connected with 192.168.100.60 port 5001
[ ID] Interval Transfer Bandwidth
[884] 0.0-10.0 sec 4.63 MBytes 3.87 Mbits/sec

C:\Network Tools\iperf>

So I will retract my previous statement and say this does improve overall throughput. I must have tested this on lesser routers previously (2500s, probably?), or else I used a different test.

Anyway, it does work, so kmcferrin, go with it!

HTH

Andy
 