diff --git a/pve-firewall.adoc b/pve-firewall.adoc
index 6019f95..9fb4e46 100644
--- a/pve-firewall.adoc
+++ b/pve-firewall.adoc
@@ -1,8 +1,7 @@
+[[chapter_pve_firewall]]
 ifdef::manvolnum[]
-PVE(8)
-======
-include::attributes.txt[]
-
+pve-firewall(8)
+===============
 :pve-toplevel:
 
 NAME
@@ -20,15 +19,12 @@ include::pve-firewall.8-synopsis.adoc[]
 DESCRIPTION
 -----------
 endif::manvolnum[]
-
 ifndef::manvolnum[]
 {pve} Firewall
 ==============
-include::attributes.txt[]
+:pve-toplevel:
 endif::manvolnum[]
-
 ifdef::wiki[]
-:pve-toplevel:
 :title: Firewall
 endif::wiki[]
 
@@ -39,7 +35,7 @@ containers. Features like firewall macros, security groups, IP sets and
 aliases help to make that task easier. While all configuration is
 stored on the cluster file system, the
-`iptables`-based firewall runs on each cluster node, and thus provides
+`iptables`-based firewall service runs on each cluster node, and thus provides
 full isolation between virtual machines. The distributed nature of
 this system also provides much higher bandwidth than a central
 firewall solution.
@@ -78,16 +74,17 @@ You can configure anything using the GUI (i.e. *Datacenter* -> *Firewall*, or
 on a *Node* -> *Firewall*), or you can edit the configuration files directly
 using your preferred editor.
 
-Firewall configuration files contains sections of key-value
+Firewall configuration files contain sections of key-value
 pairs. Lines beginning with a `#` and blank lines are considered
-comments. Sections starts with a header line containing the section
+comments. Sections start with a header line containing the section
 name enclosed in `[` and `]`.
 
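For illustration, a minimal configuration file in this format could look as
follows (the alias name, network and rule are made-up examples, not defaults):

----
[OPTIONS]
# key-value pairs
enable: 1

[ALIASES]
mynet 192.168.2.0/24 # an alias named mynet

[RULES]
# one firewall rule per line
IN SSH(ACCEPT) -source mynet
----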
+[[pve_firewall_cluster_wide_setup]]
 Cluster Wide Setup
 ~~~~~~~~~~~~~~~~~~
 
-The cluster wide firewall configuration is stored at:
+The cluster-wide firewall configuration is stored at:
 
  /etc/pve/firewall/cluster.fw
 
@@ -95,13 +92,13 @@ The configuration can contain the following sections:
 
 `[OPTIONS]`::
 
-This is used to set cluster wide firewall options.
+This is used to set cluster-wide firewall options.
 
 include::pve-firewall-cluster-opts.adoc[]
 
 `[RULES]`::
 
-This sections contains cluster wide firewall rules for all nodes.
+This section contains cluster-wide firewall rules for all nodes.
 
 `[IPSET <name>]`::
 
@@ -124,7 +121,7 @@ set the enable option here:
 
 ----
 [OPTIONS]
-# enable firewall (cluster wide setting, default is disabled)
+# enable firewall (cluster-wide setting, default is disabled)
 enable: 1
 ----
 
@@ -146,6 +143,7 @@ To simplify that task, you can instead create an IPSet called
 firewall rules to access the GUI from remote.
 
 
+[[pve_firewall_host_specific_configuration]]
 Host Specific Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -167,7 +165,7 @@ include::pve-firewall-host-opts.adoc[]
 
 This section contains host specific firewall rules.
 
-
+[[pve_firewall_vm_container_configuration]]
 VM/Container Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -203,10 +201,6 @@ Each virtual network device has its own firewall enable flag. So you can
 selectively enable the firewall for each interface. This is
 required in addition to the general firewall `enable` option.
 
-The firewall requires a special network device setup, so you need to
-restart the VM/container after enabling the firewall on a network
-interface.
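The two levels can be sketched as follows (VMID `100`, the MAC address and the
bridge name are made-up examples; both the general `enable` option and the
per-interface flag are needed):

----
# /etc/pve/firewall/100.fw -- guest firewall configuration
[OPTIONS]
enable: 1    # general firewall enable option for this guest

# per-interface flag, set on the network device in the VM configuration:
# net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,firewall=1
----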
-
 Firewall Rules
 --------------
@@ -237,8 +231,8 @@ Here are some examples:
 IN SSH(ACCEPT) -i net0
 IN SSH(ACCEPT) -i net0 # a comment
 IN SSH(ACCEPT) -i net0 -source 192.168.2.192 # only allow SSH from 192.168.2.192
-IN SSH(ACCEPT) -i net0 -source 10.0.0.1-10.0.0.10 # accept SSH for ip range
-IN SSH(ACCEPT) -i net0 -source 10.0.0.1,10.0.0.2,10.0.0.3 #accept ssh for ip list
+IN SSH(ACCEPT) -i net0 -source 10.0.0.1-10.0.0.10 # accept SSH for IP range
+IN SSH(ACCEPT) -i net0 -source 10.0.0.1,10.0.0.2,10.0.0.3 # accept SSH for IP list
 IN SSH(ACCEPT) -i net0 -source +mynetgroup # accept SSH for ipset mynetgroup
 IN SSH(ACCEPT) -i net0 -source myserveralias # accept SSH for alias myserveralias
 
@@ -249,6 +243,7 @@ OUT ACCEPT # accept all outgoing packets
 ----
 
 
+[[pve_firewall_security_groups]]
 Security Groups
 ---------------
 
@@ -273,7 +268,7 @@ Then, you can add this group to a VM's firewall
 
 GROUP webserver
 ----
 
-
+[[pve_firewall_ip_aliases]]
 IP Aliases
 ----------
 
@@ -308,10 +303,10 @@ explicitly assign the local IP address
 
 ----
 # /etc/pve/firewall/cluster.fw
 [ALIASES]
-local_network 1.2.3.4 # use the single ip address
+local_network 1.2.3.4 # use the single IP address
 ----
 
-
+[[pve_firewall_ip_sets]]
 IP Sets
 -------
 
@@ -329,7 +324,7 @@ Standard IP set `management`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 This IP set applies only to host firewalls (not VM firewalls). Those
-IPs are allowed to do normal management tasks (PVE GUI, VNC, SPICE,
+IPs are allowed to do normal management tasks ({PVE} GUI, VNC, SPICE,
 SSH).
 
 The local cluster network is automatically added to this IP set (alias
@@ -359,7 +354,7 @@ Traffic from these IPs is dropped by every host's and VM's firewall.
 
 ----
 
-[[ipfilter-section]]
+[[pve_firewall_ipfilter_section]]
 Standard IP set `ipfilter-net*`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -384,6 +379,7 @@ discovery protocol to work.
 ----
 
+[[pve_firewall_services_commands]]
 Services and Commands
 ---------------------
 
@@ -409,6 +405,145 @@ If you want to see the generated iptables rules you can use:
 
  # iptables-save
 
+[[pve_firewall_default_rules]]
+Default firewall rules
+----------------------
+
+The following traffic is filtered by the default firewall configuration:
+
+Datacenter incoming/outgoing DROP/REJECT
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If the input or output policy for the firewall is set to DROP or REJECT, the
+following traffic is still allowed for all {pve} hosts in the cluster:
+
+* traffic over the loopback interface
+* already established connections
+* traffic using the IGMP protocol
+* TCP traffic from management hosts to port 8006 in order to allow access to
+  the web interface
+* TCP traffic from management hosts to the port range 5900 to 5999 allowing
+  traffic for the VNC web console
+* TCP traffic from management hosts to port 3128 for connections to the SPICE
+  proxy
+* TCP traffic from management hosts to port 22 to allow SSH access
+* UDP traffic in the cluster network to ports 5405-5412 for corosync
+* UDP multicast traffic in the cluster network
+* ICMP traffic type 3 (Destination Unreachable), 4 (congestion control) or 11
+  (Time Exceeded)
+
+The following traffic is dropped, but not logged even with logging enabled:
+
+* TCP connections with invalid connection state
+* Broadcast, multicast and anycast traffic not related to corosync, i.e., not
+  coming through ports 5405-5412
+* TCP traffic to port 43
+* UDP traffic to ports 135 and 445
+* UDP traffic to the port range 137 to 139
+* UDP traffic from source port 137 to port range 1024 to 65535
+* UDP traffic to port 1900
+* TCP traffic to ports 135, 139 and 445
+* UDP traffic originating from source port 53
+
+The rest of the traffic is dropped or rejected, respectively, and also logged.
+This may vary depending on the additional options enabled in
+*Firewall* -> *Options*, such as NDP, SMURFS and TCP flag filtering.
+
+[[pve_firewall_iptables_inspect]]
+Please inspect the output of the
+
+----
+ # iptables-save
+----
+
+system command to see the firewall chains and rules active on your system.
+This output is also included in a `System Report`, accessible over a node's
+subscription tab in the web GUI, or through the `pvereport` command-line tool.
+
+VM/CT incoming/outgoing DROP/REJECT
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This drops or rejects all the traffic to the VMs, with some exceptions for
+DHCP, NDP, Router Advertisement, MAC and IP filtering depending on the set
+configuration. The same rules for dropping/rejecting packets are inherited
+from the datacenter, while the exceptions for accepted incoming/outgoing
+traffic of the host do not apply.
+
+Again, you can use xref:pve_firewall_iptables_inspect[iptables-save (see above)]
+to inspect all rules and chains applied.
+
+Logging of firewall rules
+-------------------------
+
+By default, all logging of traffic filtered by the firewall rules is disabled.
+To enable logging, the `loglevel` for incoming and/or outgoing traffic has to be
+set in *Firewall* -> *Options*. This can be done for the host as well as for the
+VM/CT firewall individually. This enables logging of {PVE}'s standard firewall
+rules, and the output can be observed in *Firewall* -> *Log*.
+Further, only some dropped or rejected packets are logged for the standard rules
+(see xref:pve_firewall_default_rules[default firewall rules]).
+
+`loglevel` does not affect how much of the filtered traffic is logged. It
+only changes the `LOGID` appended as a prefix to the log output for easier
+filtering and post-processing.
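As a sketch of such post-processing, log entries can be filtered by their
`LOGID` field. The sample lines and the file path below are invented for
illustration, loosely following the `VMID LOGID CHAIN TIMESTAMP POLICY:
PACKET_DETAILS` layout; real packet details will differ:

```shell
# Two invented sample log lines, one per entry:
# VMID LOGID CHAIN TIMESTAMP POLICY: PACKET_DETAILS
cat > /tmp/pvefw-sample.log <<'EOF'
0 6 PVEFW-HOST-IN 14/Mar/2024:10:00:01 +0100 ACCEPT: IN=vmbr0 PROTO=TCP DPT=8006
117 7 veth117i0-IN 14/Mar/2024:10:00:02 +0100 DROP: IN=fwbr117i0 PROTO=ICMP
EOF

# Keep only entries whose LOGID (second field) is 7, i.e. logged at debug level
awk '$2 == 7' /tmp/pvefw-sample.log
```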
+
+`loglevel` is one of the following flags:
+
+[[pve_firewall_log_levels]]
+[width="25%", options="header"]
+|===================
+| loglevel | LOGID
+| nolog | --
+| emerg | 0
+| alert | 1
+| crit | 2
+| err | 3
+| warning | 4
+| notice | 5
+| info | 6
+| debug | 7
+|===================
+
+A typical firewall log output looks like this:
+
+----
+VMID LOGID CHAIN TIMESTAMP POLICY: PACKET_DETAILS
+----
+
+In case of the host firewall, `VMID` is equal to 0.
+
+
+Logging of user-defined firewall rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In order to log packets filtered by user-defined firewall rules, it is possible
+to set a log-level parameter for each rule individually.
+This allows logging in a fine-grained manner, independently of the log level
+defined for the standard rules in *Firewall* -> *Options*.
+
+While the `loglevel` for each individual rule can be defined or changed easily
+in the web UI during creation or modification of the rule, it can also be set
+via the corresponding `pvesh` API calls.
+
+Further, the log-level can also be set via the firewall configuration file by
+appending `-log <loglevel>` to the selected rule (see
+xref:pve_firewall_log_levels[possible log-levels]).
+
+For example, the following two are identical:
+
+----
+IN REJECT -p icmp -log nolog
+IN REJECT -p icmp
+----
+
+whereas
+
+----
+IN REJECT -p icmp -log debug
+----
+
+produces a log output flagged with the `debug` level.
+
 Tips and Tricks
 ---------------
 
@@ -428,7 +563,7 @@ and add `ip_conntrack_ftp` to `/etc/modules` (so that it works after a reboot).
 
 Suricata IPS integration
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-If you want to use the http://suricata-ids.org/[Suricata IPS]
+If you want to use the https://suricata.io/[Suricata IPS]
 (Intrusion Prevention System), it's possible.
 
 Packets will be forwarded to the IPS only after the firewall ACCEPTed
@@ -476,7 +611,7 @@ address are used.
 By default the `NDP` option is enabled on both host and VM level to allow
 neighbor discovery (NDP) packets to be sent and received.
 
 Besides neighbor discovery, NDP is also used for a couple of other things, like
-autoconfiguration and advertising routers.
+auto-configuration and advertising routers.
 
 By default VMs are allowed to send out router solicitation messages (to query
 for a router), and to receive router advertisement packets. This allows them to
@@ -488,18 +623,199 @@ As for the link local addresses required for NDP, there's also an ``IP Filter''
 (`ipfilter: 1`) option, which can be enabled and which has the same effect as
 adding an `ipfilter-net*` ipset for each of the VM's network interfaces
 containing the corresponding link local addresses. (See the
-<<ipfilter-section>> section for details.)
+<<pve_firewall_ipfilter_section>> section for details.)
 
 Ports used by {pve}
 -------------------
 
-* Web interface: 8006
-* VNC Web console: 5900-5999
-* SPICE proxy: 3128
-* sshd (used for cluster actions): 22
-* rpcbind: 111
-* corosync multicast (if you run a cluster): 5404, 5405 UDP
+* Web interface: 8006 (TCP, HTTP/1.1 over TLS)
+* VNC Web console: 5900-5999 (TCP, WebSocket)
+* SPICE proxy: 3128 (TCP)
+* sshd (used for cluster actions): 22 (TCP)
+* rpcbind: 111 (UDP)
+* sendmail: 25 (TCP, outgoing)
+* corosync cluster traffic: 5405-5412 (UDP)
+* live migration (VM memory and local-disk data): 60000-60050 (TCP)
+
+
+nftables
+--------
+
+As an alternative to `pve-firewall`, we offer `proxmox-firewall`, which is an
+implementation of the Proxmox VE firewall based on the newer
+https://wiki.nftables.org/wiki-nftables/index.php/What_is_nftables%3F[nftables]
+rather than iptables.
+
+WARNING: `proxmox-firewall` is currently in tech preview. There might be bugs or
+incompatibilities with the original firewall. It is currently not suited for
+production use.
+
+This implementation uses the same configuration files and configuration format,
+so you can use your old configuration when switching.
It provides the exact same
+functionality with a few exceptions:
+
+* REJECT is currently not possible for guest traffic (traffic will instead be
+  dropped).
+* Using the `NDP`, `Router Advertisement` or `DHCP` options will *always* create
+  firewall rules, regardless of your default policy.
+* Firewall rules for guests are evaluated even for connections that have
+  conntrack table entries.
+
+
+Installation and Usage
+~~~~~~~~~~~~~~~~~~~~~~
+
+Install the `proxmox-firewall` package:
+
+----
+apt install proxmox-firewall
+----
+
+Enable the nftables backend via the Web UI on your hosts (Host > Firewall >
+Options > nftables), or by enabling it in the configuration file for your hosts
+(`/etc/pve/nodes/<node_name>/host.fw`):
+
+----
+[OPTIONS]
+
+nftables: 1
+----
+
+NOTE: After enabling/disabling `proxmox-firewall`, all running VMs and
+containers need to be restarted for the old/new firewall to work properly.
+
+After setting the `nftables` configuration key, the new `proxmox-firewall`
+service will take over. You can verify that the new service is working by
+checking the status of the `proxmox-firewall` systemd service:
+
+----
+systemctl status proxmox-firewall
+----
+
+You can also examine the generated ruleset. You can find more information about
+this in the section xref:pve_firewall_nft_helpful_commands[Helpful Commands].
+You should also check whether `pve-firewall` is no longer generating iptables
+rules; you can find the respective commands in the
+xref:pve_firewall_services_commands[Services and Commands] section.
+
+Switching back to the old firewall can be done by simply setting the
+configuration value back to 0 / No.
+
+Usage
+~~~~~
+
+`proxmox-firewall` will create two tables that are managed by the
+`proxmox-firewall` service: `proxmox-firewall` and `proxmox-firewall-guests`. If
+you want to create custom rules that live outside the Proxmox VE firewall
+configuration, you can create your own tables to manage your custom firewall
+rules.
`proxmox-firewall` will only touch the tables it generates, so you can
+easily extend and modify the behavior of `proxmox-firewall` by adding your
+own tables.
+
+Instead of using the `pve-firewall` command, the nftables-based firewall uses
+`proxmox-firewall`. It is a systemd service, so you can start and stop it via
+`systemctl`:
+
+----
+systemctl start proxmox-firewall
+systemctl stop proxmox-firewall
+----
+
+Stopping the firewall service will remove all generated rules.
+
+To query the status of the firewall, you can check the status of the systemd
+service:
+
+----
+systemctl status proxmox-firewall
+----
+
+
+[[pve_firewall_nft_helpful_commands]]
+Helpful Commands
+~~~~~~~~~~~~~~~~
+You can check the generated ruleset via the following command:
+
+----
+nft list ruleset
+----
+
+If you want to debug `proxmox-firewall` you can simply run the daemon in the
+foreground with the `RUST_LOG` environment variable set to `trace`. This should
+provide you with detailed debugging output:
+
+----
+RUST_LOG=trace /usr/libexec/proxmox/proxmox-firewall
+----
+
+You can also edit the systemd service if you want to have detailed output for
+your firewall daemon:
+
+----
+systemctl edit proxmox-firewall
+----
+
+Then you need to add the override for the `RUST_LOG` environment variable:
+
+----
+[Service]
+Environment="RUST_LOG=trace"
+----
+
+This will generate a large amount of logs very quickly, so only use this for
+debugging purposes. Other, less verbose, log levels are `info` and `debug`.
+
+Running in the foreground writes the log output to STDERR, so you can redirect
+it with the following command (e.g. for submitting logs to the community forum):
+
+----
+RUST_LOG=trace /usr/libexec/proxmox/proxmox-firewall 2> firewall_log_$(hostname).txt
+----
+
+It can be helpful to trace packet flow through the different chains in order to
+debug firewall rules. This can be achieved by setting `nftrace` to 1 for packets
+that you want to track.
It is advisable not to set this flag for *all*
+packets; in the example below, we only examine ICMP packets.
+
+----
+#!/usr/sbin/nft -f
+table bridge tracebridge
+delete table bridge tracebridge
+
+table bridge tracebridge {
+    chain trace {
+        meta l4proto icmp meta nftrace set 1
+    }
+
+    chain prerouting {
+        type filter hook prerouting priority -350; policy accept;
+        jump trace
+    }
+
+    chain postrouting {
+        type filter hook postrouting priority -350; policy accept;
+        jump trace
+    }
+}
+----
+
+Saving this file, making it executable, and then running it once will create the
+respective tracing chains. You can then inspect the tracing output via the
+Proxmox VE Web UI (Firewall > Log) or via `nft monitor trace`.
+
+The above example traces traffic on all bridges, which is usually where guest
+traffic flows. If you want to examine host traffic, create those chains
+in the `inet` table instead of the `bridge` table.
+
+NOTE: Be aware that this can generate a *lot* of log spam and slow down the
+performance of your networking stack significantly.
+
+You can remove the tracing rules by running the following command:
+
+----
+nft delete table bridge tracebridge
+----
 
 ifdef::manvolnum[]