Azure Networking Bandwidth: Public IP vs Peered Networking

3 months ago

We have an application setup which may be familiar to you: a cloud service in a classic virtual network (v1) that communicates with a database in an ARM virtual network (v2). Ideally we would like both of these services in a single network, but the two deployment models prevent that. We had a discussion covering performance, security and ideal topologies; this post focuses solely on performance.

Is there a difference in latency and bandwidth between connecting over the public IP and over the peered network when both VMs are hosted in the same region?

Test setup

To reflect the setup we have for our application, two VMs were provisioned in North Europe.

Source

  • A3 (Large) Windows Cloud service
  • Classic Virtual Network

Destination

  • DS13v2 Linux Virtual machine
  • ARM Virtual Network peered to the Classic VNet

Traceroute

I first wanted to test the latency and number of hops between the VMs. ICMP is not available when hitting the public IP, so ping is not an option; instead we can trace the route over TCP using nmap.

PS C:\Users\user> nmap -sS -Pn -p 80 --traceroute 13.xx.xx.xx
HOP  RTT      ADDRESS
1 ... 7
8    0.00 ms  13.xx.xx.xx
PS C:\Users\user> nmap -sS -Pn -p 80 --traceroute 10.xx.xx.xx
HOP  RTT      ADDRESS
1    0.00 ms  10.xx.xx.xx

We can see that there are 8 hops over the public IP and, as expected, only a single hop over the peered network. Both routes are still extremely fast, with round-trip times too small for nmap to report. This confirms my colleague's suspicion: despite connecting to a public address, the traffic probably never leaves the datacenter's perimeter network.
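As a cross-check that doesn't require nmap (or the admin rights its SYN scan needs), TCP connect time can be measured directly, since the three-way handshake takes roughly one round trip. A minimal sketch; the localhost listener is only there to keep the example self-contained, and in practice you would point it at the VM's public or private IP:

```python
import socket
import time

def tcp_connect_ms(host, port, timeout=3.0):
    """Time a TCP handshake in milliseconds (a rough RTT proxy)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

# Demo against a local listener; replace host/port with the target VM.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
rtt = tcp_connect_ms("127.0.0.1", srv.getsockname()[1])
srv.close()
print(f"{rtt:.2f} ms")
```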

Bandwidth (iperf3)

To measure the bandwidth available between the VMs I'm using iperf3, which is cross-platform. The test is run from the Windows machine as the client, with traffic flowing to an iperf3 server (started with iperf3 -s) hosted on the Linux box.

# Public IP test
.\iperf3.exe -c 13.xx.xx.xx -i 1 -t 30
# Peered network test
.\iperf3.exe -c 10.xx.xx.xx -i 1 -t 30
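For anyone scripting these runs, iperf3 can also emit machine-readable results with its -J flag, which makes it easy to collect the numbers programmatically. A small sketch of pulling out the receiver-side summary; the payload below is a heavily trimmed stand-in for real -J output (the field names match iperf3's JSON schema as I understand it, and the value is illustrative):

```python
import json

# Trimmed stand-in for `iperf3 -c <host> -J` output (illustrative value).
sample = """
{
  "end": {
    "sum_received": {"bits_per_second": 962000000.0}
  }
}
"""

def summary_mbps(payload):
    """Extract the receiver-side average throughput in Mbps."""
    result = json.loads(payload)
    return result["end"]["sum_received"]["bits_per_second"] / 1e6

print(f"{summary_mbps(sample):.0f} Mbps")
```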


Seconds  Public IP (Mbps)  Peered (Mbps)
1        985               996
2        951               947
3        975               976
4        936               956
5        989               962
6        958               965
7        967               962
8        959               926
9        964               985
10       961               948
11       968               953
12       960               980
13       949               957
14       976               966
15       960               949
16       966               972
17       959               954
18       966               975
19       961               969
20       964               963
21       965               962
22       962               933
23       962               993
24       958               961
25       967               958
26       963               958
27       961               956
28       963               970
29       965               962
30       962               963

Surprisingly, both achieve the expected bandwidth (~1 Gbps) for the selected VM sizes.
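To put a single number on each path, the per-second samples above can be averaged; both come out within a megabit of each other:

```python
from statistics import mean

# Per-second throughput samples (Mbps) from the 30-second runs above.
public_ip = [985, 951, 975, 936, 989, 958, 967, 959, 964, 961,
             968, 960, 949, 976, 960, 966, 959, 966, 961, 964,
             965, 962, 962, 958, 967, 963, 961, 963, 965, 962]
peered = [996, 947, 976, 956, 962, 965, 962, 926, 985, 948,
          953, 980, 957, 966, 949, 972, 954, 975, 969, 963,
          962, 933, 993, 961, 958, 958, 956, 970, 962, 963]

print(f"Public IP: {mean(public_ip):.1f} Mbps")
print(f"Peered:    {mean(peered):.1f} Mbps")
```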

I was still curious whether the performance profile would hold when upgrading both VMs to support 10 Gbps networking, so both machines were resized to DS14v2. To maximise throughput I used iperf3's -P switch to run parallel streams, and also increased the TCP window size with -w to see the effect it has on bandwidth.

# Public IP with 4 parallel streams
.\iperf3.exe -c 13.xx.xx.xx -i 1 -t 30 -P 4
# Peered network with 4 parallel streams
.\iperf3.exe -c 10.xx.xx.xx -i 1 -t 30 -P 4
# Public IP with 4 parallel streams and 32MB window
.\iperf3.exe -c 13.xx.xx.xx -i 1 -t 30 -P 4 -w 32MB
# Peered network with 4 parallel streams and 32MB window
.\iperf3.exe -c 10.xx.xx.xx -i 1 -t 30 -P 4 -w 32MB


Test              Bandwidth (Mbps)
Public IP         2480
Peered            2630
Public IP (32MB)  3230
Peered (32MB)     2710

As expected, with the default values the peered network performed better, although the difference was marginal. More surprisingly, the public path achieved higher throughput once the buffer size was increased, and despite running the test multiple times I am unable to explain why.
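One way to sanity-check the -w 32MB choice is the bandwidth-delay product: the number of bytes that must be in flight to keep the pipe full. A quick calculation, generously assuming a 1 ms intra-region RTT (an illustrative figure, not something measured in these tests):

```python
def bdp_bytes(link_bps, rtt_seconds):
    """Bandwidth-delay product: bytes in flight needed to fill the link."""
    return link_bps * rtt_seconds / 8

# Assumed figures: 10 Gbps link, ~1 ms RTT (illustrative).
bdp = bdp_bytes(10e9, 0.001)
print(f"{bdp / 1e6:.2f} MB")
```

Even under that generous RTT assumption, a 10 Gbps link only needs about 1.25 MB in flight, so 32 MB per stream is well beyond the theoretical minimum; whatever limited the default runs was probably not the window size alone.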

For our workload and use case, I can say the performance difference between the two approaches is negligible. If you are evaluating whether you might gain network performance by switching to peered networking, then I hope these numbers can help guide you; I would still recommend running a similar test if you are using different VM sizes or workloads.

Posted in: operations
Tagged with: azure

