I am facing an issue with NAT translation. When an FTP packet from the peer reaches the destination, the source IP gets changed to the FusionHub WAN interface IP.
The FH will NAT traffic by default when it passes through its WAN interface. You can change this behaviour by enabling IP forwarding in the WAN configuration and disabling NAT, although at that point you need to consider what then performs NAT for any internet-bound traffic passing via your FH, what the upstream router does, and how it learns about the subnets behind the FH / PepVPNs…
You may want to consider adding a second interface to the FH and using that to route traffic to the other internal networks. This is typically how we do it, but we also hang a small virtual router/firewall (in our case normally a Fortigate VM, though I know plenty of people doing this with something like pfSense or VyOS) off the LAN interface.
I use OPNsense for this and like it a lot.
Currently the FH is deployed on Azure. I have tried switching the IP forwarding option that you suggested, but the source IP NAT issue still persists.
1. Currently only a single WAN interface (10.1.1.4/24) is available on my FH. Is it required to add a second interface as LAN?
2. When I added a second interface, pinging from the Azure network to the branch network stopped working, but from the branch I was able to ping the Azure VM IP.
I have added a second interface from another subnet (10.1.5.4/24) as LAN and added a route in Azure to send traffic for 192.168.165.0/24 via 10.1.5.4.
It still didn't work.
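For reference, the Azure side of that setup can be done with the Azure CLI. The resource names below (rg-fh, fh-lan-nic, rt-to-branch, vnet1, vm-subnet) are placeholders for your own resources - only the IPs come from this thread:

```shell
# Placeholder resource names - substitute your own.
# Allow the FusionHub's LAN NIC to forward packets it did not originate.
az network nic update \
  --resource-group rg-fh \
  --name fh-lan-nic \
  --ip-forwarding true

# Route the branch subnet via the FusionHub LAN IP (10.1.5.4).
az network route-table route create \
  --resource-group rg-fh \
  --route-table-name rt-to-branch \
  --name to-branch \
  --address-prefix 192.168.165.0/24 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.1.5.4

# Associate the route table with the subnet hosting the Azure VMs.
az network vnet subnet update \
  --resource-group rg-fh \
  --vnet-name vnet1 \
  --name vm-subnet \
  --route-table rt-to-branch
```

Note the route must be associated with the VMs' subnet, not the FusionHub's - Azure only applies a UDR to traffic leaving the subnet it is attached to.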
So did you actually disable NAT, or just enable IP forwarding?
I just realised the screenshot has that button still ticked, but that is not what I wrote…
There is also the point that you would then need to consider how traffic is routed / NATed after it leaves the FusionHub WAN, and how that upstream router learns about the subnets at the branch office, which at present are hidden from it by the NAT performed by the FH.
If you add a second interface you can then configure it with an IP, and after that you still need to figure out how you are going to route traffic between the different subnets.
There are a few examples of the configuration for the Peplink side of this scenario in the FusionHub deployment documents, not sure what else may be required on the Azure side as I do not use Azure for anything.
Yes, I enabled IP forwarding and unchecked the NAT option.
Note: I have another IPsec VPN tunnel which connects to the FH, but once I change from NAT to IP forwarding, I am not able to ping the Azure-side machines from the other end.
Once I enabled IP forwarding and disabled NAT, none of my branch devices were able to communicate with the Azure VM via the FH.
That's most likely because your Azure VM does not have a route back to those remote peer IPs via the FusionHub WAN. Try adding a static route on the Azure VM for them and see if that fixes it.
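If that VM runs Linux, a quick (non-persistent) way to test this is a host route pointing at the FusionHub WAN IP from this thread:

```shell
# Temporary test route: send branch-office traffic via the FusionHub
# WAN IP (10.1.1.4). Lost on reboot - persist it via netplan or
# NetworkManager if it turns out to help.
sudo ip route add 192.168.165.0/24 via 10.1.1.4

# Confirm which route the kernel would now pick for a branch host.
ip route get 192.168.165.6
```

This only tests the host-side routing; on Azure you would normally express the same thing as a UDR so every VM in the subnet gets it.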
We already have a static route at the vnet level in Azure, routing the remote peer IPs via the FusionHub WAN, but it didn't solve the issue.
Still smells like a routing issue to me (or possibly some firewall or ACL rules are incorrect, as once you turn off NAT the end hosts' IPs will be seen as the traffic src/dst).
In 3 you say "remote peer to azure not able to ping" - ping what exactly? Is the VPN tunnel up?
For now I'd tear down that IPsec tunnel and use just the SF tunnel to keep the traffic paths simple whilst trying to find exactly where the problem lies.
If you test every hop end to end from both directions, what results do you get? Where does the problem actually begin (we assume that all hosts here will respond to ping!)?
When NAT is disabled can the VM that runs the FTP service ping:
- The vnet2 router i.e. its local gateway (is that 10.2.3.1?)
- The vnet1 router i.e. the FH's gateway (is that 10.1.1.1?)
- The FH WAN IP (10.1.1.4)
- The Branch Office LAN IP (192.168.165.1) - if you can't ping this, can you ping both ends of the PepVPN tunnels?
Reverse those steps from the client behind the branch router - where exactly does traffic stop flowing?
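A quick way to run that hop-by-hop check is a small loop. The hop list below is just the IPs guessed from this thread - edit it to match whichever direction you are testing from:

```shell
#!/bin/sh
# Hop-by-hop reachability sketch. The IPs are taken from this thread
# (local gateway, FH gateway, FH WAN, branch LAN) and are assumptions -
# replace them with the actual hops for the direction under test.
for hop in 10.2.3.1 10.1.1.1 10.1.1.4 192.168.165.1; do
  if ping -c 2 -W 2 "$hop" >/dev/null 2>&1; then
    echo "$hop: reachable"
  else
    echo "$hop: NOT reachable"
  fi
done
```

Running it from both ends quickly narrows down the first hop where traffic dies.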
Once you have that basic information, what is in the routing tables for:
- The branch office router.
- The FH.
- Vnet1 router
- Vnet2 router.
Do all the expected routes exist where they are needed, and are the expected next hops correct? Remember that in routing, generally speaking, the most specific prefix wins.
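On a Linux host you can see which route (i.e. which most-specific prefix) the kernel actually selects for a given destination, which is often faster than eyeballing the whole table:

```shell
# Show the single route the kernel would use for this destination.
# A /24 entry covering 10.2.3.5 wins over a broader /16, so this
# reveals which next hop is really in play.
ip route get 10.2.3.5
```

The Peplink and Azure equivalents are their respective routing-table status pages, but the same longest-prefix rule applies everywhere.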
- Check NAT is not enabled on the PepVPN profiles.
- Branch router should know that 10.2.3.0/24 (or a covering route) is reachable via the PepVPN tunnel.
- Outbound policy may be needed at the Branch router, is that set correctly - check order of rules etc.
- Any firewall rules configured that may drop the traffic if the SRC/DST IP is not that of the FH WAN?
• In 3 you say "remote peer to azure not able to ping" - ping what exactly, is the VPN tunnel up? - Yes, the tunnel is up. I can ping only the FH WAN IP 10.1.1.4.
The Branch Office LAN IP (192.168.165.1) - not pinging from Azure even when NAT is enabled, but from 10.2.3.5 it can reach the client PC 192.168.165.6.
When NAT is disabled, pinging from 192.168.165.6:
• The FH WAN IP (10.1.1.4) - yes, pingable
• The Branch WAN IP (192.168.165.1) - yes, pingable
• The VM running FTP (10.2.3.5) - not pingable
When NAT is disabled, pinging from the FH (10.1.1.4):
• The VM running FTP (10.2.3.5) - yes, pingable
• The client in the branch (192.168.185.30) - yes, pingable
What is in the routing tables for:
• The branch office router - default Peplink settings; only an outbound policy is defined to route traffic for 10.2.0.0/16 via the PepVPN tunnel.
• The FH - no static routes added.
• Vnet1 - no static routes added.
• Vnet2 route table - address prefix 192.168.0.0/16, next hop 10.1.1.4.
• Vnet1 and Vnet2 - network peering is enabled (subnets inside both vnets can communicate with each other).
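One Azure-specific thing worth checking with that peering: by default a vnet peering drops traffic that was *forwarded* by an appliance (like the FusionHub) in the peered vnet rather than originated there. The flag can be inspected and enabled with the Azure CLI - resource and peering names below are placeholders:

```shell
# Placeholder names - substitute your own resource group / vnets /
# peering names. Check whether forwarded traffic is allowed across
# the peering in the direction Vnet2 -> Vnet1.
az network vnet peering show \
  --resource-group rg-fh \
  --vnet-name vnet2 \
  --name vnet2-to-vnet1 \
  --query allowForwardedTraffic

# Enable it if it comes back false (check both directions).
az network vnet peering update \
  --resource-group rg-fh \
  --vnet-name vnet2 \
  --name vnet2-to-vnet1 \
  --set allowForwardedTraffic=true
```

If this flag is off, the symptom is exactly what is described later in the thread: the FH itself can reach hosts in the peered vnet, but traffic it forwards on behalf of branch clients is silently dropped.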
• Check NAT is not enabled on the PepVPN profiles. - Not enabled
• Branch router should know that 10.2.3.0/24 (or a covering route) is reachable via the PepVPN tunnel.
• Outbound policy may be needed at the Branch router, is that set correctly – Yes, outbound policy defined to route the traffic 10.2.0.0/16 via PEPVPN tunnel.
• Any firewall rules configured that may drop the traffic if the SRC/DST IP is not that of the FH WAN? - No.
• Traceroute from the branch office LAN (192.168.165.1) to 10.2.3.5:
hops go as far as 10.1.1.4, then the requests time out.
• Traceroute from the VM 10.2.3.5 to 192.168.165.6:
completes successfully.
With IP forwarding enabled and NAT disabled, a traceroute from the remote peer to 10.2.3.5 reaches the WAN IP of the FH (10.1.1.4) and after that the packets time out.
A ping from 10.1.1.4 to 10.2.3.5 reaches the destination.
I tried a static route from the FH to the internal network, but it didn't solve the issue.
Any idea why the traffic from the remote peer gets dropped at the FH?
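Given that the FH itself can reach 10.2.3.5 but traffic it forwards dies at its WAN, it may be worth dumping the effective routes Azure is actually applying to the relevant NICs - they combine system routes, peering routes, and any UDRs, so they show what the fabric really does. NIC and resource-group names below are placeholders:

```shell
# Placeholder NIC / resource-group names.
# Effective routes on the FTP VM's NIC: is there an entry sending
# 192.168.165.0/24 (or 192.168.0.0/16) to 10.1.1.4 as a
# VirtualAppliance next hop, and is anything more specific
# overriding it?
az network nic show-effective-route-table \
  --resource-group rg-fh \
  --name ftp-vm-nic \
  --output table
```

Running the same command against the FusionHub's own NIC also confirms whether its IP forwarding flag and return routes are in place at the Azure fabric level, independent of the FusionHub configuration.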