Juniper MX : BERT Testing a Loopback

So you’ve been receiving errors on your Juniper router and have asked your local provider to drop a loop on the cable facing your box, at some point in the cable run, in order to divide and conquer. Now what? We probably want to generate some traffic to determine whether any of the equipment inside the loop is responsible for those pesky CRC errors you’re trying to find.

You can follow the steps outlined here to generate some traffic.
https://www.juniper.net/techpubs/en_US/junos12.3/topics/topic-map/ethernet-fast-and-gigabit-loopback-testing.html

I modified this somewhat: I created a routing-instance of type virtual-router, placed my interface in that routing-instance, and performed the tests noted above. That way you’re not mucking about in the default routing table should you mess something up with your IP allocation, etc.

user@Juniper> show configuration routing-instances LINKTEST
instance-type virtual-router;
interface xe-2/1/3.0;

user@Juniper> show configuration interfaces xe-2/1/3
description "Test Interface";
unit 0 {
    family inet {
        address 1.1.1.1/30 {
            /* this must be the MAC address of your local interface so you'll accept the traffic */
            arp 1.1.1.2 mac 2c:6b:f5:77:66:88;
        }
    }
}
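For reference, a minimal sketch of the set commands that produce the configuration above. The interface name and MAC address are the ones from my setup; substitute your own.

```shell
# Sketch: set commands matching the show output above.
# Substitute your own interface and local interface MAC address.
set routing-instances LINKTEST instance-type virtual-router
set routing-instances LINKTEST interface xe-2/1/3.0
set interfaces xe-2/1/3 description "Test Interface"
set interfaces xe-2/1/3 unit 0 family inet address 1.1.1.1/30 arp 1.1.1.2 mac 2c:6b:f5:77:66:88
```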

user@Juniper> ping 1.1.1.2 source 1.1.1.1 routing-instance LINKTEST size 65000 count 200 rapid

In practice, this command dropped about 67Mbps of traffic onto the link, looping around until the TTL expired. By using rapid, you force ping to send the next request without waiting for the previous one to complete, effectively doubling the amount of data on the link. Increasing your ping size obviously results in a larger amount of data being dropped on the link per ping.
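As a rough sanity check on that 67Mbps figure, here is a back-of-envelope calculation. The packets-per-second rate below is an assumption: on a looped link, rapid ping paces itself on the returning replies, so your actual rate will vary with round-trip time.

```shell
# Approximate throughput of a rapid ping stream.
bytes_per_ping=65028   # 65000-byte ICMP payload plus ICMP/IP headers, roughly
pps=128                # assumed requests per second; depends on loop RTT
echo $(( bytes_per_ping * 8 * pps / 1000000 ))   # → 66 (Mbps, approximately)
```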

Since this wasn’t quite enough traffic, a suggestion (not mine) was to drop into the shell and queue up a bunch of jobs doing the same thing. I thought it was a great idea, and now I’m sharing it with you.

user@Juniper> start shell
% ping
usage: ping [-ACNQRSXadfnqrvw] [-c count] [-i wait] [-g loose-gateway]
[-l preload] [-p pattern] [-s packetsize] [-t tos]
[-F source] [-G strict-gateway] [-I interface]
[-Ji|r|4|6|I interface|U routing-instance|L logical-router]
[-T ttl] host

% ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null &

etc…

% jobs
[1] + Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[2] Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[3] Exit 2 ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[4] Exit 2 ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[5] Exit 2 ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[6] Exit 2 ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[7] Exit 2 ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[8] Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[9] Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[10] Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[11] Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[12] Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[13] Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[14] Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[15] Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[16] Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[17] Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[18] Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[19] Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[20] Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[21] Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
[22] - Running ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null
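Rather than typing the command over and over, a small Bourne shell loop can queue the jobs for you. This is a sketch, assuming the same LINKTEST instance and 1.1.1.0/30 addressing used above:

```shell
# Launch 25 background pings from the Junos shell (sketch; assumes the
# LINKTEST routing instance and addresses configured earlier).
i=1
while [ "$i" -le 25 ]; do
    ping -Jr -c 500 -s 65000 -F 1.1.1.1 -JU LINKTEST 1.1.1.2 > /dev/null &
    i=$(( i + 1 ))
done
jobs    # should list 25 running pings
```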

Now, if you execute the above command 25 times, I found this generates about 850Mbps of traffic. You probably want to be a bit careful creeping much past this if you intend to push bits for a long time: this traffic is generated from the internal em0.0 interface connection to the router's backplane, and saturating the routing-engine/backplane connection on a production router is probably not a great idea for all your routing protocols and such.

user@Juniper> show interfaces em0 | match Speed
Type: Ethernet, Link-level type: Ethernet, MTU: 1514, Speed: 1000mbps
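While the test runs, the interface counters are the easiest way to confirm how much traffic you're actually pushing onto the loop. These are standard Junos CLI commands; output omitted here since your rates will depend on your setup:

```shell
# Check the current input/output rates on the test interface...
show interfaces xe-2/1/3 | match rate
# ...or watch them update live (q to quit)
monitor interface xe-2/1/3
```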
